{ "pages": [ { "page_number": 1, "text": "" }, { "page_number": 2, "text": " Computer and Information \nSecurity Handbook \n" }, { "page_number": 3, "text": " The Morgan Kaufmann Series in Computer Security \n Computer and Information Security Handbook \n John Vacca \n Disappearing Cryptography: Information Hiding: Steganography & Watermarking, Third Edition \n Peter Wayner \n Network Security: Know It All \n James Joshi, et al. \n Digital Watermarking and Steganography, Second Edition \n Ingemar Cox, Matthew Miller, Jeffrey Bloom, Jessica Fridrich, and Ton Kalker \n Information Assurance: Dependability and Security in Networked Systems \n Yi Qian, David Tipper, Prashant Krishnamurthy, and James Joshi \n Network Recovery: Protection and Restoration of Optical, SONET-SDH, IP, and MPLS \n Jean-Philippe Vasseur, Mario Pickavet, and Piet Demeester \n For further information on these books and for a list of forthcoming titles, \nplease visit our Web site at http://www.elsevierdirect.com \n" }, { "page_number": 4, "text": " Computer and Information \nSecurity Handbook \n Edited by \n John R. Vacca \n \nAMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK \nOXFORD • PARIS • SAN DIEGO • SAN FRANCISCO \nSINGAPORE • SYDNEY • TOKYO\nMorgan Kaufmann Publishers is an imprint of Elsevier\n" }, { "page_number": 5, "text": " Morgan Kaufmann Publishers is an imprint of Elsevier. \n 30 Corporate Drive, Suite 400, Burlington, MA 01803, USA \n \n This book is printed on acid-free paper. \n \n Copyright © 2009 by Elsevier Inc. All rights reserved. \nException to the above text:\n Chapter 29: © 2009, The Crown in right of Canada. \n \n Designations used by companies to distinguish their products are often claimed as trademarks or registered trademarks. \nIn all instances in which Morgan Kaufmann Publishers is aware of a claim, the product names appear in initial capital \nor all capital letters. All trademarks that appear or are otherwise referred to in this work belong to their respective \nowners. Neither Morgan Kaufmann Publishers nor the authors and other contributors of this work have any \nrelationship or affiliation with such trademark owners nor do such trademark owners confirm, endorse or approve the \ncontents of this work. Readers, however, should contact the appropriate companies for more information regarding \ntrademarks and any related registrations. \n \n No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by \nany means — electronic, mechanical, photocopying, scanning, or otherwise — without prior written \npermission of the publisher. \n \n Permissions may be sought directly from Elsevier’s Science & Technology Rights Department in Oxford, \nUK: phone: ( \u0002 44) 1865 843830, fax: ( \u0002 44) 1865 853333, E-mail: permissions@elsevier.com . You may also \ncomplete your request online via the Elsevier homepage ( http://elsevier.com ), by selecting \n “ Support & Contact ” then “ Copyright and Permission ” and then “ Obtaining Permissions. ” \n \n Library of Congress Cataloging-in-Publication Data \n Application submitted \n British Library Cataloguing-in-Publication Data \n A catalogue record for this book is available from the British Library. 
\n \n ISBN: 978-0-12-374354-1 \n \n For information on all Morgan Kaufmann publications, \n visit our Web site at www.mkp.com or www.elsevierdirect.com \n \n Printed in the United States of America \n 09 10 11 12 13 5 4 3 2 1 \n \n \n" }, { "page_number": 6, "text": " This book is dedicated to my wife, Bee. \n" }, { "page_number": 7, "text": "This page intentionally left blank\n" }, { "page_number": 8, "text": " Contents \n Foreword \nxxi\nPreface \nxxiii\nAcknowledgments \nxxvii\nAbout the Editor \nxxix\nContributors \nxxxi\nPart I \nOverview of System and Network \nSecurity: A Comprehensive \nIntroduction\n1. Building a Secure Organization \n3\nJohn Mallery\n1. Obstacles to Security \n3\nSecurity Is Inconvenient \n3\nComputers Are Powerful and Complex \n3\nComputer Users Are Unsophisticated \n4\nComputers Created Without a Thought \n to Security \n4\nCurrent Trend Is to Share, Not Protect \n4\nData Accessible from Anywhere \n4\nSecurity Isn’t About Hardware \n and Software \n5\nThe Bad Guys Are Very Sophisticated \n5\nManagement Sees Security as a Drain \n on the Bottom Line \n5\n2. Ten Steps to Building a Secure Organization 6\n A. Evaluate the Risks and Threats \n6\n B. Beware of Common Misconceptions \n8\n C. Provide Security Training for \nIT Staff—Now and Forever \n9\n D. Think “Outside the Box” \n10\n E. Train Employees: Develop a Culture \nof Security \n12\n F. Identify and Utilize Built-In Security \nFeatures of the Operating System and \nApplications \n14\nG. Monitor Systems \n16\nH. Hire a Third Party to Audit Security \n17\n I. Don’t Forget the Basics \n19\n J. Patch, Patch, Patch \n20\n2. A Cryptography Primer \n23\nScott R. Ellis\n1. What is Cryptography? \nWhat is Encryption? \n23\nHow Is Cryptography Done? \n24\n2. Famous Cryptographic Devices \n24\nThe Lorenz Cipher \n24\nEnigma \n24\n3. Ciphers \n25\nThe Substitution Cipher \n25\nThe Shift Cipher \n26\nThe Polyalphabetic Cipher \n29\nThe Kasiski/Kerckhoff Method \n30\n4. Modern Cryptography \n31\nThe Vernam Cipher (Stream Cipher) \n31\nThe One-Time Pad \n32\nCracking Ciphers \n33\nThe XOR Cipher and Logical Operands \n34\nBlock Ciphers \n35\n5. The Computer Age \n36\nData Encryption Standard \n36\nTheory of Operation \n37\nImplementation \n38\nRivest, Shamir, and Adleman (RSA) \n38\nAdvanced Encryption Standard \n (AES or Rijndael) \n38\n3 Preventing System Intrusions \n39\nMichael West\n 1. So, What is an Intrusion? \n39\n 2. Sobering Numbers \n40\n 3. Know Your Enemy: Hackers Versus \nCrackers \n40\n 4. Motives \n41\n 5. Tools of the Trade \n41\n 6. Bots \n42\n 7. Symptoms of Intrusions \n43\n 8. What Can You Do? \n43\nKnow Today’s Network Needs \n44\nNetwork Security Best Practices \n45\n 9. Security Policies \n45\n10. Risk Analysis \n46\nVulnerability Testing \n46\nAudits \n47\nRecovery \n47\n11. Tools of Your Trade \n47\nFirewalls \n47\nIntrusion Prevention Systems \n47\nApplication Firewalls \n48\nAccess Control Systems \n48\nUnified Threat Management \n49\n12. Controlling User Access \n49\nAuthentication, Authorization, \n and Accounting \n49\nWhat the User Knows \n49\n" }, { "page_number": 9, "text": "Contents\nviii\nWhat the User Has \n50\nThe User Is Authenticated, \n But Is She Authorized? \n50\nAccounting \n51\nKeeping Current \n51\n13. Conclusion \n51\n4. Guarding Against Network \nIntrusions \n53\n Tom Chen and Patrick J. Walsh \n1. Traditional Reconnaissance and Attacks \n53\n2. Malicious Software \n56\nLures and “Pull” Attacks \n57\n3. Defense in Depth \n58\n4. 
Preventive Measures \n59\nAccess Control \n59\nVulnerability Testing and Patching \n59\nClosing Ports \n60\nFirewalls \n60\nAntivirus and Antispyware Tools \n61\nSpam Filtering \n62\nHoneypots \n62\nNetwork Access Control \n63\n5. Intrusion Monitoring and Detection \n63\nHost-Based Monitoring \n64\nTraffic Monitoring \n64\nSignature-Based Detection \n64\nBehavior Anomalies \n65\nIntrusion Prevention Systems \n65\n6. Reactive Measures \n65\nQuarantine \n65\nTraceback \n66\n7. Conclusions \n66\n5. Unix and Linux Security \n67\n Gerald Beuchelt \n1. Unix and Security \n67\nThe Aims of System Security \n67\nAchieving Unix Security \n67\n2. Basic Unix Security \n68\nTraditional Unix Systems \n68\nStandard File and Device Access \n Semantics \n69\n4. Protecting User Accounts \nand Strengthening Authentication \n71\nEstablishing Secure Account Use \n71\nThe Unix Login Process \n71\nControlling Account Access \n71\nNoninteractive Access \n72\nOther Network Authentication \n Mechanisms \n73\nRisks of Trusted Hosts and Networks \n73\nReplacing Telnet, rlogin, and FTP \n Servers and Clients with SSH \n73\n5. Reducing Exposure to Threats by \nLimiting Superuser Privileges \n74\nControlling Root Access \n74\n6. Safeguarding Vital Data by Securing \nLocal and Network File Systems \n76\nDirectory Structure and Partitioning \n for Security \n76\n6. Eliminating the Security Weakness\nof Linux and Unix Operating \nSystems \n79\n Mario Santana \n1. Introduction to Linux and Unix \n79\nWhat Is Unix? \n79\nWhat Is Linux? \n80\nSystem Architecture \n82\n2. Hardening Linux and Unix \n84\nNetwork Hardening \n84\nHost Hardening \n88\nSystems Management Security \n90\n3. Proactive Defense for Linux and Unix \n90\nVulnerability Assessment \n90\nIncident Response Preparation \n91\nOrganizational Considerations \n92\n7. Internet Security \n93\n Jesse Walker \n1. Internet Protocol Architecture \n93\nCommunications Architecture Basics \n94\nGetting More Specific \n95\n2. An Internet Threat Model \n100\nThe Dolev-Yao Adversary Model \n101\nLayer Threats \n101\n3. Defending Against Attacks on \nthe Internet \n105\nLayer Session Defenses \n106\nSession Startup Defenses \n113\n4. Conclusion \n117\n8. The Botnet Problem \n119\n Xinyuan Wang and Daniel Ramsbrock \n1. Introduction \n119\n2. Botnet Overview \n120\nOrigins of Botnets \n120\nBotnet Topologies and Protocols \n120\n3. Typical Bot Life Cycle \n122\n4. The Botnet Business Model \n123\n5. Botnet Defense \n124\nDetecting and Removing \n Individual Bots \n124\nDetecting C&C Traffic \n125\nDetecting and Neutralizing \n the C&C Servers \n125\nAttacking Encrypted C&C Channels \n126\nLocating and Identifying the Botmaster \n128\n6. Botmaster Traceback \n128\nTraceback Challenges \n129\n" }, { "page_number": 10, "text": "Contents\nix\nTraceback Beyond the Internet \n130\n7. Summary \n132\n 9. Intranet Security \n133\n Bill Mansoor \n 1. Plugging the Gaps: NAC \nand Access Control \n136\n 2. Measuring Risk: Audits \n137\n 3. Guardian at the Gate: Authentication \nand Encryption \n138\n 4. Wireless Network Security \n139\n 5. Shielding the Wire: Network \nProtection \n141\n 6. Weakest Link in Security: \nUser Training \n142\n 7. Documenting the Network: \nChange Management \n142\n 8. Rehearse the Inevitable: \nDisaster Recovery \n143\n 9. Controlling Hazards: Physical \nand Environmental Protection \n145\n10. Know Your Users: \nPersonnel Security \n146\n11. Protecting Data Flow: \nInformation and System Integrity \n146\n12. Security Assessments \n147\n13. 
Risk Assessments \n148\n14. Conclusion \n148\n10. Local Area Network Security \n149\n Dr. Pramod Pandya \n 1. Identify network threats \n150\nDisruptive \n150\nUnauthorized Access \n150\n 2. Establish Network Access Controls \n150\n 3. Risk Assessment \n151\n 4. Listing Network Resources \n151\n 5. Threats \n151\n 6. Security Policies \n151\n 7. The Incident-handling Process \n152\n 8. Secure Design Through Network \nAccess Controls \n152\n 9. Ids Defined \n153\n10. NIDS: Scope and Limitations \n154\n11. A Practical Illustration of NIDS \n154\nUDP Attacks \n154\nTCP SYN (Half-Open) Scanning \n155\nSome Not-So-Robust Features \n of NIDS \n156\n12. Firewalls \n158\nFirewall Security Policy \n159\nConfiguration Script for sf Router \n160\n13. Dynamic Nat Configuration \n160\n14. The Perimeter \n160\n15. Access List Details \n162\n16. Types of Firewalls \n162\n17. Packet Filtering: IP Filtering Routers \n162\n18. Application-layer Firewalls: \nProxy Servers \n163\n19. Stateful Inspection Firewalls \n163\n20. NIDS Complements Firewalls \n163\n21. Monitor and Analyze \nSystem Activities \n163\nAnalysis Levels \n164\n22. Signature Analysis \n164\n23. Statistical Analysis \n164\n24. Signature Algorithms \n164\nPattern Matching \n164\nStateful Pattern Matching \n165\nProtocol Decode-based Analysis \n165\nHeuristic-Based Analysis \n166\nAnomaly-Based Analysis \n166\n11. Wireless Network Security \n169\n Chunming Rong and Erdal Cayirci \n1. Cellular Networks \n169\nCellular Telephone Networks \n170\n802.11 Wireless LANs \n170\n2. Wireless Ad Hoc Networks \n171\nWireless Sensor Networks \n171\nMesh Networks \n171\n3. Security Protocols \n172\nWEP \n172\nWPA and WPA2 \n173\nSPINS: Security Protocols for \n Sensor Networks \n173\n4. Secure Routing \n175\nSEAD \n175\nAriadne \n176\nARAN \n176\nSLSP \n177\n5. Key Establishment \n177\nBootstrapping \n177\nKey Management \n178\nReferences \n181\n12. Cellular Network Security \n183\n Peng Liu , Thomas F. LaPorta and \n Kameswari Kotapati \n1. Introduction \n183\n2. Overview of Cellular Networks \n184\nOverall Cellular Network \n Architecture \n184\nCore Network Organization \n185\nCall Delivery Service \n185\n3. The State of the Art of Cellular \nNetwork Security \n186\nSecurity in the Radio Access \n Network \n186\nSecurity in Core Network \n187\nSecurity Implications of Internet \nConnectivity \n188\nSecurity Implications of PSTN \n Connectivity \n188\n" }, { "page_number": 11, "text": "Contents\nx\n4. Cellular Network Attack Taxonomy \n189\nAbstract Model \n189\nAbstract Model Findings \n189\nThree-Dimensional Attack \n Taxonomy \n192\n5. Cellular Network Vulnerability \nAnalysis \n193\nCellular Network Vulnerability \n Assessment Toolkit (CAT) \n195\nAdvanced Cellular Network \n Vulnerability Assessment \n Toolkit (aCAT) \n198\nCellular Network Vulnerability \n Assessment Toolkit for evaluation \n (eCAT) \n199\n6. Discussion \n201\nReferences \n202\n13. RFID Security \n205\n Chunming Rong and Erdal Cayirci \n1. RFID Introduction \n205\nRFID System Architecture \n205\nRFID Standards \n207\nRFID Applications \n208\n2. RFID Challenges \n209\nCounterfeiting \n209\nSniffing \n209\nTracking \n209\nDenial of Service \n210\nOther Issues \n210\nComparison of All Challenges \n212\n3. RFID Protections \n212\nBasic RFID System \n212\nRFID System Using Symmetric-Key \n Cryptography \n215\nRFID System Using Public-key \n Cryptography \n217\nReferences \n219\nPart II\nManaging Information Security\n14. 
Information Security Essentials \nfor IT Managers, Protecting \nMission-Critical Systems \n225\n Albert Caballero \n1. Information Security Essentials \nfor IT Managers, Overview \n225\nScope of Information Security \n Management \n225\nCISSP Ten Domains of Information \n Security \n225\nWhat is a Threat? \n227\nCommon Attacks \n228\nImpact of Security Breaches \n231\n2. Protecting Mission-critical Systems \n231\nInformation Assurance \n231\nInformation Risk Management \n231\nDefense in Depth \n233\nContingency Planning \n233\n3. Information Security from \nthe Ground Up \n236\nPhysical Security \n236\nData Security \n237\nSystems and Network Security \n239\nBusiness Communications Security \n241\nWireless Security \n242\nWeb and Application Security \n246\nSecurity Policies and Procedures \n247\nSecurity Employee Training \n and Awareness \n248\n4. Security Monitoring \nand Effectiveness \n249\nSecurity Monitoring Mechanisms \n250\nIncidence Response and Forensic \nInvestigations \n251\nValidating Security Effectiveness \n251\nReferences \n252\n15. Security Management Systems \n255\n Joe Wright and Jim Harmening \n 1. Security Management \nSystem Standards \n255\n 2. Training Requirements \n256\n 3. Principles of Information Security \n256\n 4. Roles and Responsibilities \nof Personnel \n256\n 5. Security Policies \n256\n 6. Security Controls \n257\n 7. Network Access \n257\n 8. Risk Assessment \n257\n 9. Incident Response \n258\n10. Summary \n258\n16. Information Technology Security \nManagement \n259\n Rahul Bhasker and Bhushan Kapoor \n1. Information Security Management \nStandards \n259\nFederal Information Security \n Management Act \n259\nInternational Standards Organization \n260\nOther Organizations Involved \n in Standards \n260\n2. Information Technology \nsecurity aspects \n260\nSecurity Policies and Procedures \n261\nIT Security Processes \n263\n3. Conclusion \n267\n" }, { "page_number": 12, "text": "Contents\nxi\n17. Identity Management \n269\n Dr. Jean-Marc Seigneur and Dr. Tewfiq El \nMalika \n1. Introduction \n269\n2. Evolution of Identity Management \nRequirements \n269\nDigital Identity Definition \n270\nIdentity Management Overview \n270\nPrivacy Requirement \n272\nUser-Centricity \n272\nUsability Requirement \n273\n3. The Requirements Fulfilled \nby Current Identity Management \nTechnologies \n274\nEvolution of Identity Management \n274\nIdentity 2.0 \n278\n4. Identity 2.0 for Mobile Users \n286\nMobile Web 2.0 \n286\nMobility \n287\nEvolution of Mobile Identity \n287\nThe Future of Mobile User-Centric \n Identity Management in an Ambient \n Intelligence World \n290\nResearch Directions \n292\n5. Conclusion \n292\n18. Intrusion Prevention and \nDetection Systems \n293\n Christopher Day \n 1. What is an “Intrusion,” Anyway? \n293\nPhysical Theft \n293\nAbuse of Privileges (The Insider Threat) \n293\n 2. Unauthorized Access by an \nOutsider \n294\n 3. Malware Infection \n294\n 4. The Role of the “0-day” \n295\n 5. The Rogue’s Gallery: \nAttackers and Motives \n296\n 6. A Brief Introduction to TCP/IP \n297\n 7. The TCP/IP data Architecture and \nData Encapsulation \n298\n 8. Survey of Intrusion Detection \nand Prevention Technologies \n300\n 9. Anti-Malware Software \n301\n10. Network-based Intrusion \nDetection Systems \n302\n11. Network-based Intrusion \nPrevention Systems \n303\n12. Host-based Intrusion \nPrevention Systems \n304\n13. Security Information \nManagement Systems \n304\n14. Network Session Analysis \n304\n15. Digital Forensics \n305\n16. 
System Integrity Validation \n306\n17. Putting it all Together \n306\n19. Computer Forensics \n307\n Scott R. Ellis \n1. What is Computer Forensics? \n307\n2. Analysis of Data \n308\nComputer Forensics and Ethics, \n Green Home Plate Gallery View \n309\nDatabase Reconstruction \n310\n3. Computer Forensics in the Court \nSystem \n310\n4. Understanding Internet History \n312\n5. Temporary Restraining Orders \nand Labor Disputes \n312\nDivorce \n313\nPatent Infringement \n313\nWhen to Acquire, When to \n Capture Acquisition \n313\nCreating Forensic Images Using \n Software and Hardware \n Write Blockers \n313\nLive Capture of Relevant Files \n314\nRedundant Array of Independent \n (or Inexpensive) Disks (RAID) \n314\nFile System Analyses \n314\nNTFS \n315\nThe Role of the Forensic Examiner \n in Investigations and File \n Recovery \n315\nPassword Recovery \n317\nFile Carving \n318\nThings to Know: How Time stamps \n Work \n320\nExperimental Evidence \n321\nEmail Headers and Time stamps, \n Email Receipts, and Bounced \n Messages \n322\nSteganography “Covered Writing” \n324\n5. First Principles \n325\n6. Hacking a Windows XP Password \n325\nNet User Password Hack \n325\nLanman Hashes and Rainbow \n Tables \n325\nPassword Reset Disk \n326\nMemory Analysis and the Trojan \n Defense \n326\nUser Artifact Analysis \n326\nRecovering Lost and Deleted Files \n327\nEmail \n327\nInternet History \n327\n7. Network Analysis \n328\nProtocols \n328\nAnalysis \n328\n8. Computer Forensics Applied \n329\nTracking. Inventory, Location \n of Files, Paperwork, Backups, \n and So On \n329\nTestimonial \n329\nExperience Needed \n329\nJob Description, Technologist \n329\n" }, { "page_number": 13, "text": "Contents\nxii\nJob Description Management \n330\nCommercial Uses \n330\nSolid Background \n330\nEducation/Certification \n330\nProgramming and Experience \n331\nPublications \n331\n 9. Testifying as an Expert \n332\nDegrees of Certainty \n332\nCertainty Without Doubt \n334\n10. Beginning to End in Court \n334\nDefendants, Plaintiffs, \n and Prosecutors \n334\nPretrial Motions \n335\nTrial: Direct and Cross-Examination \n335\nRebuttal \n335\nSurrebuttal \n335\nTestifying: Rule 702. Testimony \n by Experts \n335\nCorrecting Mistakes: Putting Your \n Head in the Sand \n336\n20. Network Forensics \n339\n Yong Guan \n1. Scientific Overview \n339\n2. The Principles of Network Forensics \n340\n3. Attack Traceback and Attribution \n341\nIP Traceback \n341\nStepping-Stone Attack Attribution \n344\n4. Critical Needs Analysis \n346\n5. Research Directions \n346\nVoIP Attribution \n346\n21. Firewalls \n349\n Dr . Errin W. Fulp \n 1. Network Firewalls \n349\n 2. Firewall Security Policies \n350\nRule-Match Policies \n351\n 3. A Simple Mathematical Model \nfor Policies, Rules, and Packets \n351\n 4. First-match Firewall Policy \nAnomalies \n352\n 5. Policy Optimization \n352\nPolicy Reordering \n352\nCombining Rules \n353\nDefault Accept or Deny? \n353\n 6. Firewall Types \n353\nPacket Filter \n354\nStateful Packet Firewalls \n354\nApplication Layer Firewalls \n354\n 7. Host and Network Firewalls \n355\n 8. Software and Hardware Firewall \nImplementations \n355\n 9. Choosing the Correct Firewall \n355\n10. Firewall Placement and \nNetwork Topology \n356\nDemilitarized Zones \n357\nPerimeter Networks \n357\nTwo-Router Configuration \n357\nDual-Homed Host \n358\nNetwork Configuration Summary \n358\n11. Firewall Installation and \nConfiguration \n358\n12. 
Supporting Outgoing Services \nThrough Firewall Configuration \n359\nForms of State \n359\nPayload Inspection \n360\n13. Secure External Services \nProvisioning \n360\n14. Network Firewalls for Voice and \nVideo Applications \n360\nPacket Filtering H.323 \n361\n15. Firewalls and Important \nAdministrative Service Protocols \n361\nRouting Protocols \n361\nInternet Control Message \n Protocol \n362\nNetwork Time Protocol \n362\nCentral Log File Management \n362\nDynamic Host Configuration \n Protocol \n363\n16. Internal IP Services Protection \n363\n17. Firewall Remote Access \nConfiguration \n364\n18. Load Balancing and \nFirewall Arrays \n365\nLoad Balancing in Real Life \n365\nHow to Balance the Load \n365\nAdvantages and Disadvantages \n of Load Balancing \n366\n19. Highly Available Firewalls \n366\nLoad Balancer Operation \n366\nInterconnection of Load Balancers \n and Firewalls \n366\n20. Firewall Management \n367\n21. Conclusion \n367\n22. Penetration Testing \n369\n Sanjay Bavisi \n 1. What is Penetration Testing? \n369\n 2. How does Penetration Testing \nDiffer from an Actual “Hack?” \n370\n 3. Types of Penetration Testing \n371\n 4. Phases of Penetration Testing \n373\nThe Pre-Attack Phase \n373\nThe Attack Phase \n373\nThe Post-Attack Phase \n373\n 5. Defining What’s Expected \n374\n 6. The Need for a Methodology \n375\n 7. Penetration Testing \nMethodologies \n375\n 8. Methodology in Action \n376\nEC-Council LPT Methodology \n376\n 9. Penetration Testing Risks \n378\n10. Liability Issues \n378\n11. Legal Consequences \n379\n" }, { "page_number": 14, "text": "Contents\nxiii\n12. “Get out of jail free” Card \n379\n13. Penetration Testing Consultants \n379\n14. Required Skill Sets \n380\n15. Accomplishments \n380\n16. Hiring a Penetration Tester \n380\n17. Why Should a Company \nHire You? \n381\nQualifications \n381\nWork Experience \n381\nCutting-Edge Technical Skills \n381\nCommunication Skills \n381\nAttitude \n381\nTeam Skills \n381\nCompany Concerns \n381\n18. All’s Well that Ends Well \n382\n23. What Is Vulnerability \nAssessment? \n383\n Almantas Kakareka \n 1. Reporting \n383\n 2. The “It Won’t Happen to Us” Factor \n383\n 3. Why Vulnerability Assessment? \n384\n 4. Penetration Testing Versus \nVulnerability Assessment \n384\n 5. Vulnerability Assessment Goal \n385\n 6. Mapping the Network \n385\n 7. Selecting the Right Scanners \n386\n 8. Central Scans Versus Local Scans \n387\n 9. Defense in Depth Strategy \n388\n10. Vulnerability Assessment Tools \n388\nNessus \n388\nGFI LANguard \n389\nRetina \n389\nCore Impact \n389\nISS Internet Scanner \n389\nX-Scan \n389\nSara \n389\nQualysGuard \n389\nSAINT \n389\nMBSA \n389\n11. Scanner Performance \n390\n12. Scan Verification \n390\n13. Scanning Cornerstones \n390\n14. Network Scanning \nCountermeasures \n390\n15. Vulnerability Disclosure Date \n391\nFind Security Holes Before \n They Become Problems \n391\n16. Proactive Security Versus Reactive \nSecurity \n392\n17. Vulnerability Causes \n392\nPassword Management Flaws \n392\nFundamental Operating \n System Design Flaws \n392\nSoftware Bugs \n392\nUnchecked User Input \n392\n18. DIY Vulnerability Assessment \n393\n19. Conclusion \n393\n Part III\nEncryption Technology \n24. Data Encryption \n397\n Dr. Bhushan Kapoor and Dr. Pramod \n Pandya \n1. Need for Cryptography \n398\nAuthentication \n398\nConfidentiality \n398\nIntegrity \n398\nNonrepudiation \n398\n2. Mathematical Prelude to Cryptography 398\nMapping or Function \n398\nProbability \n398\nComplexity \n398\n3. 
Classical Cryptography \n399\nThe Euclidean Algorithm \n399\nThe Extended Euclidean Algorithm \n399\nModular Arithmetic \n399\nCongruence \n400\nResidue Class \n400\nInverses \n400\nFundamental Theorem \n of Arithmetic \n400\nCongruence Relation Defined \n401\nSubstitution Cipher \n401\nTransposition Cipher \n402\n4. Modern Symmetric Ciphers \n402\nS-Box \n403\nP-Boxes \n403\nProduct Ciphers \n404\n5. Algebraic Structure \n404\nDefinition Group \n404\nDefinitions of Finite and Infinite \n Groups (Order of a Group) \n404\nDefinition Abelian Group \n404\nExamples of a Group \n404\nDefinition: Subgroup \n405\nDefinition: Cyclic Group \n405\nRings \n405\nDefinition: Field \n405\nFinite Fields GF(2n) \n405\nModular Polynomial Arithmetic \n Over GF(2) \n406\nUsing a Generator to Represent \n the Elements of GF(2n) \n406\nGF(23) Is a Finite Field \n407\n6. The Internal Functions of Rijndael \nin AES Implementation \n407\nMathematical Preliminaries \n408\nState \n408\n7. Use of Modern Block Ciphers \n412\nThe Electronic Code Book (ECB) \n412\nCipher-Block Chaining (CBC) \n412\n8. Public-key Cryptography \n412\nReview: Number Theory \n412\n9. Cryptanalysis of RSA \n416\nFactorization Attack \n416\n" }, { "page_number": 15, "text": "Contents\nxiv\n10. Diffie-Hellman Algorithm \n417\n11. Elliptic Curve Cryptosystems \n417\nAn Example \n418\nExample of Elliptic Curve Addition \n418\nEC Security \n419\n12. Message Integrity and \nAuthentication \n419\nCryptographic Hash Functions \n419\nMessage Authentication \n420\nDigital Signature \n420\nMessage Integrity Uses a Hash \n Function in Signing the Message \n420\nRSA Digital Signature Scheme \n420\nRSA Digital Signature and \n the Message Digest \n420\n13. Summary \n421\nReferences \n421\n25. Satellite Encryption \n423\n Daniel S. Soper \n1. The Need for Satellite Encryption \n423\n2. Satellite Encryption Policy \n425\n3. Implementing Satellite Encryption \n426\nGeneral Satellite Encryption Issues \n426\nUplink Encryption \n428\nExtraplanetary Link Encryption \n428\nDownlink Encryption \n429\n4. The Future of Satellite Encryption \n430\n26. Public Key Infrastructure \n433\n Terence Spies \n1. Cryptographic Background \n433\nDigital Signatures \n433\nPublic Key Encryption \n434\n2. Overview of PKI \n435\n3. The X.509 Model \n436\nThe History of X.509 \n436\nThe X.509 Certificate Model \n436\n4. X.509 Implementation Architectures \n437\n5. X.509 Certificate Validation \n439\nValidation Step 1: Construct the \n Chain and Validate Signatures \n439\nValidation Step 2: Check Validity \n Dates, Policy and Key Usage \n439\nValidation Step 3: Consult \n Revocation Authorities \n440\n6. X.509 Certificate Revocation \n440\nOnline Certificate Status Protocol \n441\n7. Server-based Certificate \nValidity Protocol \n442\n8. X.509 Bridge Certification \nSystems \n443\nMesh PKIs and Bridge CAs \n443\n9. X.509 Certificate Format \n444\nX.509 V1 and V2 Format \n445\nX.509 V3 Format \n445\nX.509 Certificate Extensions \n445\nPolicy Extensions \n446\nCertificate Policy \n446\n10. PKI Policy Description \n447\n11. PKI Standards Organizations \n448\nIETF PKIX \n448\nSDSI/SPKI \n448\nIETF OpenPGP \n448\n12. PGP Certificate Formats \n449\n13. PGP PKI Implementations \n449\n14. W3C \n449\n15. Alternative PKI Architectures \n450\n16. Modified X.509 Architectures \n450\nPerlman and Kaufman’s User-Centric \n PKI \n450\nGutmann’s Plug and Play PKI \n450\nCallas’s Self-Assembling PKI \n450\n17. Alternative Key Management Models 450\n27. Instant-Messaging Security \n453\n Samuel J. J. 
Curry \n1. Why Should I Care About \nInstant Messaging? \n453\n2. What is Instant Messaging? \n453\n3. The Evolution of Networking \nTechnologies \n454\n4. Game Theory and Instant Messaging \n455\nYour Workforce \n455\nGenerational Gaps \n456\nTransactions \n457\n5. The Nature of the Threat \n457\nMalicious Threat \n458\nVulnerabilities \n459\nMan-in-the-Middle Attacks \n459\nPhishing and Social Engineering \n459\nKnowledge Is the Commodity \n459\nData and Traffic Analysis \n460\nUnintentional Threats \n460\nRegulatory Concerns \n461\n6. Common IM Applications \n461\nConsumer Instant Messaging \n461\nEnterprise Instant Messaging \n461\nInstant-Messaging Aggregators \n462\nBackdoors: Instant Messaging \n Via Other Means (HTML) \n462\nMobile Dimension \n462\n7. Defensive Strategies \n462\n8. Instant-messaging Security Maturity \nand Solutions \n463\nAsset Management \n463\nBuilt-In Security \n463\nContent Filtering \n463\nClassic Security \n463\nCompliance \n464\nData Loss Prevention \n464\nLogging \n464\nArchival \n464\n" }, { "page_number": 16, "text": "Contents\nxv\n 9. Processes \n464\nInstant-Messaging Activation \n and Provisioning \n464\nApplication Review \n464\nPeople \n464\nRevise \n464\nAudit \n464\n10. Conclusion \n465\nExample Answers to Key Factors \n466\n Part IV\n Privacy and Access Management \n28. NET Privacy \n469\n Marco Cremonini , Chiara Braghin and Claudio \nAgostino Ardagna \n1. Privacy in the Digital Society \n469\nThe Origins, The Debate \n469\nPrivacy Threats \n471\n2. The Economics of Privacy \n474\nThe Value of Privacy \n474\nPrivacy and Business \n475\n3. Privacy-Enhancing Technologies \n476\nLanguages for Access Control \n and Privacy Preferences \n476\nData Privacy Protection \n478\nPrivacy for Mobile Environments \n480\n4. Network Anonymity \n482\nOnion Routing \n483\nAnonymity Services \n484\n5. Conclusion \n485\n29. Personal Privacy Policies \n487\n Dr. George Yee and Larry Korba \n1. Introduction \n487\n2. Content of Personal Privacy Policies \n488\nPrivacy Legislation and Directives \n488\nRequirements from Privacy Principles \n488\nPrivacy Policy Specification \n490\n3. Semiautomated Derivation \nof Personal Privacy Policies \n490\nAn Example \n492\nRetrieval from a Community of Peers \n493\n4. Specifying Well-formed Personal \nPrivacy Policies \n494\nUnexpected Outcomes \n494\nOutcomes From the Way the \n Matching Policy Was Obtained \n494\n5. Preventing Unexpected Negative \nOutcomes \n496\nDefinition 1 \n496\nDefinition 2 \n496\nRules for Specifying Near \n Well-Formed Privacy Policies \n496\nApproach for Obtaining Near \n Well-Formed Privacy Policies \n497\n6. The Privacy Management Model \n497\nHow Privacy Policies Are Used \n497\nPersonal Privacy Policy Negotiation \n499\nPersonal Privacy Policy Compliance \n502\n7. Discussion and Related Work \n502\n8. Conclusions and Future Work \n505\n30. Virtual Private Networks \n507\n Jim Harmening and Joe Wright \n1. History \n508\n2. Who is in Charge? \n511\n3. VPN Types \n512\nIPsec \n512\nL2TP \n512\nL2TPv3 \n513\nL2F \n513\nPPTP VPN \n513\nMPLS \n514\nMPVPN™ \n514\nSSH \n514\nSSL-VPN \n514\nTLS \n514\n4. Authentication Methods \n515\nHashing \n515\nHMAC \n515\nMD5 \n515\nSHA-1 \n515\n5. Symmetric Encryption \n516\n6. Asymmetric Cryptography \n516\n7. Edge Devices \n516\n8. Passwords \n516\n9. Hackers and Crackers \n517\n31. Identity Theft \n519\nMarkus Jacobsson and Alex Tsow\n1. 
Experimental Design \n520\nAuthentic Payment Notification: \n Plain Versus Fancy Layout \n522\nStrong Phishing Message: Plain \n Versus Fancy Layout \n525\nAuthentic Promotion: Effect of \n Small Footers \n525\nWeak Phishing Message \n527\nAuthentic Message \n528\nLogin Page \n528\nLogin Page: Strong and Weak \n Content Alignment \n529\nLogin Page: Authentic and Bogus \n (But Plausible) URLs \n532\nLogin Page: Hard and Soft \n Emphasis on Security \n532\nBad URL, with and without SSL \n and Endorsement Logo \n535\nHigh-Profile Recall Notice \n535\n" }, { "page_number": 17, "text": "Contents\nxvi\nLow-Profile Class-Action Lawsuit \n535\n2. Results and Analysis \n535\n3. Implications for Crimeware \n546\nExample: Vulnerability of Web-Based \nUpdate Mechanisms \n547\nExample: The Unsubscribe \nSpam Attack \n547\nThe Strong Narrative Attack \n548\n4. Conclusion \n548\n32. VoIP Security \n551\nDan Wing and Harsh Kupwade Patil\n1. Introduction \n551\nVoIP Basics \n551\n2. Overview of Threats \n553\nTaxonomy of Threats \n553\nReconnaissance of VoIP Networks \n553\nDenial of Service \n554\nLoss of Privacy \n555\nExploits \n557\n3. Security in VoIP \n558\nPreventative Measures \n558\nReactive \n559\n4. Future Trends \n560\nForking Problem in SIP \n560\nSecurity in Peer-to-Peer SIP \n561\nEnd-to-End Identity with SBCs \n563\n5. Conclusion \n564\nPart V\n Storage Security\n33. SAN Security \n567\n John McGowan, Jeffrey Bardin and \nJohn McDonald \n 1. Organizational Structure \n567\nAAA \n568\nRestricting Access to Storage \n569\n 2. Access Control Lists (ACL) \nand Policies \n570\nData Integrity Field (DIF) \n570\n 3. Physical Access \n571\n 4. Change Management \n571\n 5. Password Policies \n571\n 6. Defense in Depth \n571\n 7. Vendor Security Review \n571\n 8. Data Classification \n571\n 9. Security Management \n572\nSecurity Setup \n572\nUnused Capabilities \n572\n10. Auditing \n572\nUpdates \n572\nMonitoring \n572\nSecurity Maintenance \n572\n11. Management Access: Separation of \nFunctions \n573\nLimit Tool Access \n573\nSecure Management Interfaces \n573\n12. Host Access: Partitioning \n573\nS_ID Checking \n574\n13. Data Protection: Replicas \n574\nErasure \n574\nPotential Vulnerabilities and Threats \n575\nPhysical Attacks \n575\nManagement Control Attacks \n575\nHost Attacks \n575\nWorld Wide Name Spoofing \n576\nMan-in-the-Middle Attacks \n576\nE-Port Replication Attack \n576\nDenial-of-Service Attacks \n577\nSession Hijacking Attacks \n577\n15. Encryption in Storage \n577\nThe Process \n577\nEncryption Algorithms \n578\nKey Management \n579\nConfiguration Management \n580\n16. Application of Encryption \n580\nRisk Assessment and Management \n580\nModeling Threats \n580\nUse Cases for Protecting Data \n at Rest \n581\nUse Considerations \n582\nDeployment Options \n582\n17. Conclusion \n588\nReferences \n589\n34. Storage Area Networking \nDevices Security \n591\n Robert Rounsavall \n1. What is a SAN? \n591\n2. SAN Deployment Justifications \n591\n3. The Critical Reasons for SAN Security \n592\nWhy Is SAN Security Important? \n592\n4. SAN Architecture and Components \n593\nSAN Switches \n593\n5. SAN General Threats and Issues \n594\nSAN Cost: A Deterrent to Attackers \n594\nPhysical Level Threats, Issues, \n and Risk Mitigation \n594\nLogical Level Threats, Vulnerabilities, \n and Risk Mitigation \n596\n6. Conclusion \n603\n35. Risk Management \n605\n Sokratis K. Katsikas \n1. The Concept of Risk \n606\n2. Expressing and Measuring Risk \n606\n3. 
The Risk Management Methodology \n609\nContext Establishment \n609\n" }, { "page_number": 18, "text": "Contents\nxvii\nRisk Assessment \n610\nRisk Treatment \n612\nRisk Communication \n614\nRisk Monitoring and Review \n614\nIntegrating Risk Management into the \n System Development Life Cycle \n614\nCritique of Risk Management \n as a Methodology \n615\nRisk Management Methods \n616\n4. Risk Management Laws and \nRegulations \n620\n5. Risk Management Standards \n623\n6. Summary \n625\nPart VI\nPhysical Security\n36. Physical Security Essentials \n629\n William Stallings \n1. Overview \n629\n2. Physical Security Threats \n630\nNatural Disasters \n630\nEnvironmental Threats \n631\nTechnical Threats \n633\nHuman-Caused Physical Threats \n634\n3. Physical Security Prevention \nand Mitigation Measures \n634\nEnvironmental Threats \n634\nTechnical Threats \n635\nHuman-Caused Physical Threats \n635\n4. Recovery from Physical Security \nBreaches \n636\n5. Threat Assessment, Planning, \nand Plan Implementation \n636\nThreat Assessment \n636\nPlanning and Implementation \n637\n6. Example: A Corporate Physical \nSecurity Policy \n637\n7. Integration of Physical and \nLogical Security \n639\nReferences \n643\n37. Biometrics \n645\n Luther Martin \n1. Relevant Standards \n646\n2. Biometric System Architecture \n647\nData Capture \n648\nSignal Processing \n648\nMatching \n649\nData Storage \n649\nDecision \n649\nAdaptation \n652\n3. Using Biometric Systems \n652\nEnrollment \n652\nAuthentication \n653\nIdentification \n654\n4. Security Considerations \n655\nError Rates \n655\nDoddington’s Zoo \n656\nBirthday Attacks \n656\nComparing Technologies \n657\nStorage of Templates \n658\n5. Conclusion \n659\n38. Homeland Security \n661\n Rahul Bhaskar Ph.D. and Bhushan Kapoor \n1. Statutory Authorities \n661\nThe USA PATRIOT Act of 2001 \n (PL 107-56) \n661\nThe Aviation and Transportation \n Security Act of 2001 (PL 107-71) \n663\nEnhanced Border Security and \n Visa Entry Reform Act of 2002 \n (PL 107-173) \n663\nPublic Health Security, Bioterrorism \n Preparedness & Response Act \n of 2002 (PL 107-188) \n664\nHomeland Security Act of 2002 \n (PL 107-296) \n665\nE-Government Act of 2002 \n (PL 107-347) \n666\n2. Homeland Security Presidential \nDirectives \n667\n3. Organizational Actions \n669\nDepartment of Homeland \n Security Subcomponents \n669\nState and Federal Organizations \n669\nThe Governor’s Office of Homeland \n Security \n670\nCalifornia Office of Information \n Security and Privacy Protection \n670\nPrivate Sector Organizations \n for Information Sharing \n670\n4. Conclusion \n674\n39. Information Warfare \n677\n Jan Eloff and Anna Granova \n1. Information Warfare Model \n677\n2. Information Warfare Defined \n678\n3. IW: Myth or Reality? \n678\n4. Information Warfare: Making \nIW Possible \n680\nOffensive Strategies \n680\n5. Preventative Strategies \n685\n6. Legal Aspects of IW \n686\nTerrorism and Sovereignty \n686\nLiability Under International Law \n686\nRemedies Under International Law \n687\nDeveloping Countries Response \n689\n" }, { "page_number": 19, "text": "Contents\nxviii\n7. Holistic View of Information \nWarfare \n689\n8. Conclusion \n690\nPart VII\n Advanced Security \n40. Security Through Diversity \n693\n Kevin Noble \n 1. Ubiquity \n693\n 2. Example Attacks Against Uniformity \n694\n 3. Attacking Ubiquity With Antivirus Tools \n694\n 4. The Threat of Worms \n695\n 5. Automated Network Defense \n697\n 6. Diversity and the Browser \n698\n 7. 
Sandboxing and Virtualization \n698\n 8. DNS Example of Diversity \nthrough Security \n699\n 9. Recovery from Disaster is Survival \n699\n10. Conclusion \n700\n41. Reputation Management \n701\n Dr. Jean-Marc Seigneur \n1. The Human Notion of Reputation \n702\n2. Reputation Applied to the \nComputing World \n704\n3. State of the Art of Attack-resistant \nReputation Computation \n708\n4. Overview of Current Online \nReputation Service \n711\neBay \n711\nOpinity \n713\nRapleaf \n714\nVenyo \n715\nTrustPlus + Xing + ZoomInfo + \n SageFire \n716\nNaymz + Trufina \n717\nThe GORB \n719\nReputationDefender \n720\nSummarizing Table \n720\n5. Conclusion \n720\n42. Content Filtering \n723\n Peter Nicoletti \n1. The Problem with Content \nFiltering \n723\n2. User Categories, Motivations, \nand Justifications \n724\nSchools \n725\nCommercial Business \n725\nFinancial Organizations \n725\nHealthcare Organizations \n725\nInternet Service Providers \n725\nU.S. Government \n725\nOther Governments \n725\nLibraries \n725\nParents \n726\n3. Content Blocking Methods \n726\nBanned Word Lists \n726\nURL Block \n726\nCategory Block \n726\nBayesian Filters \n727\nSafe Search Integration to Search \n Engines with Content Labeling \n727\nContent-Based Image Filtering \n (CBIF) \n727\n4. Technology and Techniques for \nContent-Filtering Control \n728\nInternet Gateway-Based Products/\n Unified Threat Appliances \n728\n5. Categories \n732\n6. Legal Issues \n735\nFederal Law: ECPA \n735\nCIPA: The Children’s Internet \n Protection Act \n735\nThe Trump Card of Content \n Filtering: The “National Security \n Letter” \n736\nISP Content Filtering Might Be \n a “Five-Year Felony” \n736\n7. Issues and Problems with Content \nFiltering \n737\nBypass and Circumvention \n737\nClient-Based Proxies \n737\nOpen Proxies \n739\nHTTP Web-Based Proxies \n (Public and Private) \n739\nSecure Public Web-Based Proxies \n739\nProcess Killing \n739\nRemote PC Control Applications \n739\nOverblocking and Underblocking \n740\nBlacklist and Whitelist \n Determination \n740\nCasual Surfing Mistake \n740\nGetting the List Updated \n740\nTime-of-Day Policy Changing \n740\nOverride Authorization Methods \n740\nHide Content in “Noise” or Use \n Steganography \n740\nNonrepudiation: Smart Cards, \n ID Cards for Access \n740\nWarn and Allow Methods \n740\nIntegration with Spam Filtering tools \n740\nDetect Spyware and Malware \n in the HTTP Payload \n740\nIntegration with Directory Servers \n740\nLanguage Support \n741\nFinancial Considerations Are \n Important \n741\nScalability and Usability \n741\nPerformance Issues \n742\nReporting Is a Critical Requirement \n742\nBandwidth Usage \n742\n" }, { "page_number": 20, "text": "Contents\nxix\nPrecision Percentage and Recall \n742\n 9. Related Products \n743\n10. Conclusion \n743\n43. Data Loss Protection \n745\n Ken Perkins \n 1. Precursors of DLP \n747\n 2. What is DLP? \n748\n 3. Where to Begin? \n753\n 4. Data is Like Water \n754\n 5. You Don’t Know What You \nDon’t Know \n755\nPrecision versus Recall \n756\n 6. How Do DLP Applications Work? \n756\n 7. Eat Your Vegetables \n757\nData in Motion \n757\nData at Rest \n758\nData in Use \n758\n 8. It’s a Family Affair, Not Just \nIT Security’s Problem \n760\n 9. Vendors, Vendors Everywhere! \nWho Do You Believe? \n762\n10. Conclusion \n762\nPart VIII\nAppendices\nAppendix A Configuring Authentication \nService on Microsoft \nWindows Vista \n765\n John R. Vacca \n1. 
Backup and Restore of Stored \nUsernames and Passwords \n765\nAutomation and Scripting \n765\nSecurity Considerations \n765\n2. Credential Security Service Provider \nand SSO for Terminal Services Logon \n765\nRequirements \n766\nConfiguration \n766\nSecurity Considerations \n766\n3. TLS/SSL Cryptographic \nEnhancements \n766\nAES Cipher Suites \n766\nECC Cipher Suites \n767\nSchannel CNG Provider Model \n768\nDefault Cipher Suite Preference \n769\nPrevious Cipher Suites \n769\n4. Kerberos Enhancements \n769\nAES \n769\nRead-Only Domain Controller \n and Kerberos Authentication \n770\n5. Smart Card Authentication Changes \n770\nAdditional Changes to Common \n Smart Card Logon Scenarios \n771\n6. Previous Logon Information \n773\nConfiguration \n774\nSecurity Considerations \n774\nAppendix B Security Management \nand Resiliency \n775\n John R. Vacca \nAppendix C List of Top Security \nImplementation and \nDeployment Companies 777\nList of SAN Implementation \n and Deployment Companies \n778\nSAN Security Implementation \n and Deployment Companies: \n778\nAppendix D List of Security \nProducts \n781\nSecurity Software \n781\nAppendix E List of Security \nStandards \n783\nAppendix F List of Miscellaneous \nSecurity Resources \n785\nConferences \n785\nConsumer Information \n785\nDirectories \n786\nHelp and Tutorials \n786\nMailing Lists \n786\nNews and Media \n787\nOrganizations \n787\nProducts and Tools \n788\nResearch \n790\nContent Filtering Links \n791\nOther Logging Resources \n791\nAppendix G Ensuring Built-in \nFrequency Hopping \nSpread Spectrum \nWireless Network \nSecurity \n793\nAccomplishment \n793\nBackground \n793\nAdditional Information \n793\nAppendix H Configuring Wireless \nInternet Security \nRemote Access \n795\nAdding the Access Points as RADIUS \n Clients to IAS \n795\nAdding Access Points to the first \n IAS Server \n795\n" }, { "page_number": 21, "text": "Contents\nxx\nScripting the Addition of Access Points to \n IAS Server (Alternative Procedure) \n795\nConfiguring the Wireless Access Points \n796\nEnabling Secure WLAN Authentication \n on Access Points \n796\nAdditional Settings to Secure \n Wireless Access Points \n797\nReplicating RADIUS Client Configuration \n to Other IAS Servers \n798\nAppendix I Frequently Asked \nQuestions \n799\nAppendix J Glossary \n801\nIndex \n817\n \n" }, { "page_number": 22, "text": " Foreword \n The Computer and Information Security Handbook is an \nessential reference guide for professionals in all realms \nof computer security. Researchers in academia, industry, \nand government as well as students of security will find \nthe Handbook helpful in expediting security research \nefforts. The Handbook should become a part of every \ncorporate, government, and university library around the \nworld. \n Dozens of experts from virtually every industry have \ncontributed to this book. The contributors are the leading \nexperts in computer security, privacy protection and man-\nagement, and information assurance. They are individu-\nals who will help others in their communities to address \nthe immediate as well as long-term challenges faced in \ntheir respective computer security realms. \n These important contributions make the Handbook \nstand out among all other security reference guides. I \nknow and have worked with many of the contributors \nand can testify to their experience, accomplishments, and \ndedication to their fields of work. 
\n John Vacca, the lead security consultant and managing \neditor of the Handbook , has worked diligently to see that \nthis book is as comprehensive as possible. His knowl-\nedge, experience, and dedication have combined to create \na book of more than 1400 pages covering every important \naspect of computer security and the assurance of the con-\nfidentiality, integrity, and availability of information. \n The depth of knowledge brought to the project by all \nthe contributors assures that this comprehensive hand-\nbook will serve as a professional reference and provide a \ncomplete and concise view of computer security and pri-\nvacy. The Handbook provides in-depth coverage of com-\nputer security theory, technology, and practice as it relates \nto established technologies as well as recent advance-\nments in technology. Above all, the Handbook explores \npractical solutions to a wide range of security issues. \n Another important characteristic of the Handbook is \nthat it is a vendor-neutral edited volume with chapters written by \nleading experts in industry and academia who do not sup-\nport any specific vendor’s products or services. Although \nthere are many excellent computer security product and \nservice companies, these companies often focus on pro-\nmoting their offerings as one-and-only, best-on-the-\nmarket solutions. Such bias can lead to narrow decision \nmaking and product selection and thus was excluded \nfrom the Handbook . \n Michael Erbschloe \n Michael Erbschloe teaches information security courses \nat Webster University in St. Louis, Missouri. \n" }, { "page_number": 23, "text": "This page intentionally left blank\n" }, { "page_number": 24, "text": " Preface \n This comprehensive handbook serves as a professional \nreference to provide today’s most complete and concise \nview of computer security and privacy available in one \nvolume. It offers in-depth coverage of computer security \ntheory, technology, and practice as they relate to estab-\nlished technologies as well as recent advancements. It \nexplores practical solutions to a wide range of security \nissues. Individual chapters are authored by leading experts \nin the field and address the immediate and long-term chal-\nlenges in the authors ’ respective areas of expertise. \n The primary audience for this handbook consists of \nresearchers and practitioners in industry and academia as \nwell as security technologists and engineers working with \nor interested in computer security. This comprehensive \nreference will also be of value to students in upper-divi-\nsion undergraduate and graduate-level courses in compu-\nter security. \n ORGANIZATION OF THIS BOOK \n The book is organized into eight parts composed of 43 \ncontributed chapters by leading experts in their fields, as \nwell as 10 appendices, including an extensive glossary \nof computer security terms and acronyms. \n Part 1: Overview of System and Network \nSecurity: A Comprehensive Introduction \n Part 1 discusses how to build a secure organization; gen-\nerating cryptography; how to prevent system intrusions; \nUNIX and Linux security; Internet and intranet security; \nLAN security; wireless network security; cellular net-\nwork security; and RFID security. For instance: \n Chapter 1, “ Building a Secure Organization, ” sets the \nstage for the rest of the book by presenting insight \ninto where to start building a secure organization. \n Chapter 2, “ A Cryptography Primer, ” provides an over-\nview of cryptography. 
It shows how communications \nmay be encrypted and transmitted. \n Chapter 3, “ Preventing System Intrusions, ” discusses how \nto prevent system intrusions and where an \nunauthorized penetration of a computer in your enter-\nprise or an address in your assigned domain can occur. \n Chapter 4, “ Guarding Against Network Intrusions, ” \nshows how to guard against network intrusions by \nunderstanding the variety of attacks, from exploits to \nmalware and social engineering. \n Chapter 5, “ UNIX and Linux Security, ” discusses how \nto scan for vulnerabilities; reduce denial-of-service \n(DoS) attacks; deploy firewalls to control network \ntraffic; and build network firewalls. \n Chapter 6, “ Eliminating the Security Weakness of Linux \nand UNIX Operating Systems, ” presents an intro-\nduction to securing UNIX in general and Linux in \nparticular, providing some historical context and \ndescribing some fundamental aspects of the secure \noperating system architecture. \n Chapter 7, “ Internet Security, ” shows you how cryptog-\nraphy can be used to address some of the security \nissues besetting communications protocols. \n Chapter 8, “ The Botnet Problem, ” describes the botnet \nthreat and the countermeasures available to network \nsecurity professionals. \n Chapter 9, “ Intranet Security, ” covers internal security \nstrategies and tactics; external security strategies and \ntactics; network access security; and Kerberos. \n Chapter 10, “ Local Area Network Security, ” discusses \nnetwork design and security deployment as well as \nongoing management and auditing. \n Chapter 11, “ Wireless Network Security, ” presents an \noverview of wireless network security technology; \nhow to design wireless network security and plan for \nwireless network security; how to install, deploy, and \nmaintain wireless network security; information war-\nfare countermeasures: the wireless network security \nsolution; and wireless network security solutions and \nfuture directions. \n Chapter 12, “ Cellular Network Security, ” addresses \nthe security of the cellular network; educates read-\ners on the current state of security of the network \nand its vulnerabilities; outlines the cellular network \nspecific attack taxonomy, also called three-dimen-\nsional attack taxonomy ; discusses the vulnerability \nassessment tools for cellular networks; and provides \n" }, { "page_number": 25, "text": "Preface\nxxiv\ninsights into why the network is so vulnerable and \nwhy securing it can prevent communication outages \nduring emergencies. \n Chapter 13, “ RFID Security, ” describes the RFID tags \nand RFID reader and back-end database in detail. \n Part 2: Managing Information Security \n Part 2 discusses how to protect mission-critical systems; \ndeploy security management systems, IT security, ID \nmanagement, intrusion detection and prevention systems, \ncomputer forensics, network forensics, firewalls, and pen-\netration testing; and conduct vulnerability assessments. \nFor instance: \n Chapter 14, “ Information Security Essentials for IT \nManagers: Protecting Mission-Critical Systems, ” \ndiscusses how security goes beyond technical \ncontrols and encompasses people, technology, policy, \nand operations in a way that few other business \nobjectives do. \n Chapter 15, “ Security Management Systems, ” exam-\nines documentation requirements and maintaining \nan effective security system as well as conducting \nassessments. 
\n Chapter 16, “ Information Technology Security \nManagement, ” discusses the processes that are sup-\nported with enabling organizational structure and \ntechnology to protect an organization’s information \ntechnology operations and IT assets against internal \nand external threats, intentional or otherwise. \n Chapter 17, “ Identity Management, ” presents the evolu-\ntion of identity management requirements. It also \nsurveys how the most advanced identity management \ntechnologies fulfill present-day requirements. It dis-\ncusses how mobility can be achieved in the field of \nidentity management in an ambient intelligent/\nubiquitous computing world. \n Chapter 18, “ Intrusion Prevention and Detection \nSystems, ” discusses the nature of computer system \nintrusions, the people who commit these attacks, and \nthe various technologies that can be utilized to detect \nand prevent them. \n Chapter 19, “ Computer Forensics, ” is intended to pro-\nvide an in-depth familiarization with computer foren-\nsics as a career, a job, and a science. It will help you \navoid mistakes and find your way through the many \naspects of this diverse and rewarding field. \n Chapter 20, “ Network Forensics, ” helps you \ndetermine the path from a victimized network or \nsystem through any intermediate systems and \ncommunication pathways, back to the point of \nattack origination or the person who should be \nheld accountable. \n Chapter 21, “ Firewalls, ” provides an overview of \nfirewalls: policies, designs, features, and configura-\ntions. Of course, technology is always changing, and \nnetwork firewalls are no exception. However, the \nintent of this chapter is to describe aspects of \nnetwork firewalls that tend to endure over time. \n Chapter 22, “ Penetration Testing, ” describes how \ntesting differs from an actual “ hacker attack ” as well \nas some of the ways penetration tests are conducted, \nhow they’re controlled, and what organizations might \nlook for when choosing a company to conduct a \npenetration test for them. \n Chapter 23, “ What Is Vulnerability Assessment? ” \ncovers the fundamentals: defining vulnerability, \nexploit, threat, and risk; analyzing vulnerabilities and \nexploits; and configuring scanners. It also shows you \nhow to generate reports, assess risks in a changing \nenvironment, and manage vulnerabilities. \n Part 3: Encryption Technology \n Part 3 discusses how to implement data encryption, sat-\nellite encryption, public key infrastructure, and instant-\nmessaging security. For instance: \n Chapter 24, “ Data Encryption, ” is about the role played \nby cryptographic technology in data security. \n Chapter 25, “ Satellite Encryption, ” proposes a method \nthat enhances and complements satellite encryp-\ntion’s role in securing the information society. It \nalso covers satellite encryption policy instruments; \nimplementing satellite encryption; misuse of satel-\nlite encryption technology; and results and future \ndirections. \n Chapter 26, “ Public Key Infrastructure, ” explains the \ncryptographic background that forms the foundation \nof PKI systems; the mechanics of the X.509 PKI \nsystem (as elaborated by the Internet Engineering \nTask Force); the practical issues surrounding the \nimplementation of PKI systems; a number of alter-\nnative PKI standards; and alternative cryptographic \nstrategies for solving the problem of secure public \nkey distribution. 
\n Chapter 27, “ Instant-Messaging Security, ” helps you \ndevelop an IM security plan, keep it current, and \nmake sure it makes a difference. \n" }, { "page_number": 26, "text": "Preface\nxxv\n Part 4: Privacy and Access Management \n Part 4 discusses Internet privacy, personal privacy policies, \nvirtual private networks, identity theft, and VoIP security. \nFor instance: \n Chapter 28, “ Net Privacy, ” addresses the privacy issues \nin the digital society from various points of view, \ninvestigating the different aspects related to the \nnotion of privacy and the debate that the intricate \nessence of privacy has stimulated; the most common \nprivacy threats and the possible economic aspects \nthat may influence the way privacy is (and especially \nis not currently) managed in most firms; the efforts \nin the computer science community to face privacy \nthreats, especially in the context of mobile and data-\nbase systems; and the network-based technologies \navailable to date to provide anonymity when \ncommunicating over a private network. \n Chapter 29, “ Personal Privacy Policies, ” begins with the \nderivation of policy content based on privacy legisla-\ntion, followed by a description of how a \npersonal privacy policy may be constructed \nsemiautomatically. It then shows how to addition-\nally specify policies so that negative unexpected \noutcomes can be avoided. Finally, it describes the \nauthor’s Privacy Management Model, which explains \nhow to use personal privacy policies to protect pri-\nvacy, including what is meant by a “ match ” of con-\nsumer and service provider policies and how \nnonmatches can be resolved through negotiation. \n Chapter 30, “ Virtual Private Networks, ” covers VPN \nscenarios, VPN comparisons, and information \nassurance requirements. It also covers building VPN \ntunnels; applying cryptographic protection; \nimplementing IP security; and deploying virtual \nprivate networks. \n Chapter 31, “ Identity Theft, ” describes the importance of \nunderstanding the human factor of ID theft security \nand details the findings from a study on deceit. \n Chapter 32, “ VoIP Security, ” deals with the attacks \ntargeted toward a specific host and issues related to \nsocial engineering. \n Part 5: Storage Security \n Part 5 covers storage area network (SAN) security and \nrisk management. For instance: \n Chapter 33, “ SAN Security, ” describes the following \ncomponents: protection rings; security and \nprotection; restricting access to storage; access \ncontrol lists (ACLs) and policies; port blocks and \nport prohibits; and zoning and isolating resources. \n Chapter 34, “ Storage Area Networking Security \nDevices, ” covers all the issues and security concerns \nrelated to SAN security. \n Chapter 35, “ Risk Management, ” discusses physical \nsecurity threats, environmental threats, and incident \nresponse. \n Part 6: Physical Security \n Part 6 discusses physical security essentials, biometrics, \nhomeland security, and information warfare. For instance: \n Chapter 36, “ Physical Security Essentials, ” is concerned \nwith physical security and some overlapping areas of \npremises security. It also looks at physical security \nthreats and then considers physical security \nprevention measures. 
Chapter 37, “Biometrics,” discusses the different types of biometrics technology and verification systems and how the following work: biometrics eye analysis technology; biometrics facial recognition technology; facial thermal imaging; biometrics finger-scanning analysis technology; biometrics geometry analysis technology; biometrics verification technology; and privacy-enhanced, biometrics-based verification/authentication, as well as biometrics solutions and future directions.

Chapter 38, “Homeland Security,” describes some principal provisions of U.S. homeland security-related laws and Presidential directives. It outlines the organizational changes that were initiated to support homeland security in the United States. The chapter highlights the 9/11 Commission that Congress chartered to provide a full account of the circumstances surrounding the 2001 terrorist attacks and to develop recommendations for corrective measures that could be taken to prevent future acts of terrorism. It also details the Intelligence Reform and Terrorism Prevention Act of 2004 and the Implementation of the 9/11 Commission Recommendations Act of 2007.

Chapter 39, “Information Warfare,” defines information warfare (IW) and discusses its most common tactics, weapons, and tools, as well as comparing IW terrorism with conventional warfare and addressing the issues of liability and the available legal remedies under international law.

Part 7: Advanced Security

Part 7 discusses security through diversity, online reputation, content filtering, and data loss protection. For instance:

Chapter 40, “Security Through Diversity,” covers some of the industry trends in adopting diversity in hardware, software, and application deployments. This chapter also covers the risks of uniformity, conformity, and the ubiquitous impact of adopting standard organizational principles without consideration of security.

Chapter 41, “Reputation Management,” discusses the general understanding of the human notion of reputation. It explains how this concept of reputation fits into computer security. The chapter presents the state of the art of attack-resistant reputation computation. It also gives an overview of the current market of online reputation services. The chapter concludes by underlining the need to standardize online reputation for increased adoption and robustness.

Chapter 42, “Content Filtering,” examines the many benefits and justifications of Web-based content filtering, such as legal liability risk reduction, productivity gains, and bandwidth usage. It also explores the downside and unintended consequences and risks that improperly deployed or misconfigured systems create. The chapter also looks into methods to subvert and bypass these systems and the reasons behind them.

Chapter 43, “Data Loss Protection,” introduces the reader to a baseline understanding of how to investigate and evaluate DLP applications in the market today.

John R. Vacca
Editor-in-Chief
jvacca@frognet.net
www.johnvacca.com

Acknowledgments

There are many people whose efforts on this book have contributed to its successful completion. I owe each a debt of gratitude and want to take this opportunity to offer my sincere thanks.
\n A very special thanks to my senior acquisitions \neditor, Rick Adams, without whose continued inter-\nest and support this book would not have been possi-\nble. Assistant editor Heather Scherer provided staunch \nsupport and encouragement when it was most needed. \nThanks to my production editor, A. B. McGee_and \ncopyeditor, Darlene Bordwell, whose fine editorial \nwork has been invaluable. Thanks also to my marketing \nmanager, Marissa Hederson, whose efforts on this book \nhave been greatly appreciated. Finally, thanks to all the \nother people at Computer Networking and Computer \nand Information Systems Security, Morgan Kaufmann \nPublishers/Elsevier Science & Technology Books, whose \nmany talents and skills are essential to a finished book. \n Thanks to my wife, Bee Vacca, for her love, her help, \nand her understanding of my long work hours. Also, a \nvery, very special thanks to Michael Erbschloe for writ-\ning the Foreword. Finally, I wish to thank all the follow-\ning authors who contributed chapters that were necessary \nfor the completion of this book: John Mallery, Scott R. \nEllis, Michael West, Tom Chen, Patrick Walsh, Gerald \nBeuchelt, Mario Santana, Jesse Walker, Xinyuan Wang, \nDaniel Ramsbrock, Bill Mansoor, Dr. Pramod Pandya, \nChunming Rong, Prof. Erdal Cayirci, Prof. Gansen Zhao, \nLiang Yan, Peng Liu, Thomas F La Porta, Kameswari \nKotapati, Albert Caballero, Joe Wright, Jim Harmening, \nRahul Bhaskar, Prof. Bhushan Kapoor, Dr. Jean-Marc \nSeigneur, Christopher W. Day, Yong Guan, Dr. Errin W. \nFulp, Sanjay Bavisi, Almantas Kakareka, Daniel S. Soper, \nTerence Spies, Samuel JJ Curry, Marco Cremonini, \nChiara Braghin, Claudio Agostino Ardagna, Dr. George \nYee, Markus Jacobsson, Alex Tsow, Sid Stamm, Chris \nSoghoian, Harsh Kupwade Patil, Dan Wing, Jeffrey S. \nBardin, Robert Rounsavall, Sokratis K. Katsikas, William \nStallings, Luther Martin, Jan Eloff, Anna Granova, Kevin \nNoble, Peter Nicoletti, and Ken Perkins. \n" }, { "page_number": 29, "text": "This page intentionally left blank\n" }, { "page_number": 30, "text": " About the Editor \n \n John Vacca is an information technology consultant and \nbestselling author based in Pomeroy, Ohio. Since 1982 \nJohn has authored 60 books. Some of his most recent \nworks include Biometric Technologies and Verification \nSystems (Elsevier, 2007); Practical Internet Security \n(Springer, 2006); Optical Networking Best Practices \nHandbook (Wiley-Interscience, 2006); Guide to Wireless \nNetwork Security (Springer, 2006); Computer Forensics: \nComputer Crime Scene Investigation , 2nd Edition \n(Charles River Media, 2005); Firewalls: Jumpstart for \nNetwork and Systems Administrators (Elsevier, 2004); \n Public Key Infrastructure: Building Trusted Applications \nand Web Services ( Auerbach, 2004); Identity Theft \n(Prentice Hall/PTR, 2002); The World’s 20 Greatest \nUnsolved Problems (Pearson Education, 2004); and \nmore than 600 articles in the areas of advanced storage, \n computer security, and aerospace technology. John was \nalso a configuration management specialist, computer \nspecialist, and the computer security official (CSO) \nfor NASA’s space station program (Freedom) and the \nInternational Space Station Program from 1988 until his \nearly retirement from NASA in 1995. \n" }, { "page_number": 31, "text": "This page intentionally left blank\n" }, { "page_number": 32, "text": " Contributors \n Claudio Agostino Ardagna (Chapter 28), Dept. 
of \nInformation Technology, University of Milan, Crema, \nItaly \n Jeffrey S. Bardin (Chapter 33), Independent Security \nConsultant, Barre, Massachusetts 01005 \n Jay Bavisi (Chapter 22), President, EC-Council, \nAlbuquerque, New Mexico 87109 \n Gerald Beuchelt (Chapter 5), Independent Security \nConsultant, Burlington, Massachusetts 01803 \n Rahul Bhaskar (Chapter 38), Department of Information \nSystems and Decision Sciences, California State \nUniversity, Fullerton, California 92834 \n Rahul Bhaskar (Chapter 16), Department of Information \nSystems and Decision Sciences, California State \nUniversity, Fullerton, California 92834 \n Chiara Braghin (Chapter 28), Dept. of Information \nTechnology, University of Milan, Crema, Italy \n Albert Caballero CISSP, GSEC (Chapter 14), \nSecurity Operations Center Manager, Terremark \nWorldwide, Inc., Bay Harbor Islands, Florida 33154 \n Professor Erdal Cayirci (Chapters 11, 13), University \nof Stavanger, N-4036 Stavanger, Norway \n Tom Chen (Chapter 4), Swansea University, Singleton \nPark, SA2 8PP, Wales, United Kingdom \n Marco Cremonini (Chapter 28), Dept. of Information \nTechnology, University of Milan, Crema, Italy \n Sam Curry (Chapter 27), VP Product Management, \nRSA, the Security Division of EMC, Bedford, \nMassachusetts 01730 \n Christopher Day, CISSP, NSA:IEM (Chapter 18), \nSenior Vice President, Secure Information Systems, \nTerremark Worldwide, Inc., Miami, Florida 33131 \n Scott R. Ellis, EnCE (Chapters 2, 19), RGL – Forensic \nAccountants & Consultants, Forensics and Litigation \nTechnology, Chicago, Illinois 60602 \n Jan H. P. Eloff (Chapter 39), Extraordinary Professor, \nInformation & Computer Security Architectures \nResearch Group, Department of Computer Science, \nUniversity of Pretoria, and Research Director SAP \nMeraka UTD/SAP Research CEC, Hillcrest, Pretoria, \nSouth Africa, 0002 \n Michael Erbschloe (Foreword), Teaches Information \nSecurity courses at Webster University, St. Louis, \nMissouri 63119 \n Errin W. Fulp (Chapter 21), Department of Computer \nScience, Wake Forest University, Winston-Salem, \nNorth Carolina 27109 \n Anna Granova (Chapter 39), Advocate of the High \nCourt of South Africa, Member of the Pretoria Society \nof Advocates, University of Pretoria, Computer Science \nDepartment, Hillcrest, Pretoria, South Africa, 0002 \n Yong Guan (Chapter 20), Litton Assistant Professor, \nDepartment of Electrical and Computer Engineering, \nIowa State University, Ames, Iowa 50011 \n James T. Harmening (Chapters 15, 30), Computer \nBits, Inc., Chicago, Illinois 60602 \n Markus Jakobsson (Chapter 31), Principal Scientist, \nCSL, Palo Alto Research Center, Palo Alto, California \n94304 \n Almantas Kakareka (Chapter 23), Terremark World \nWide Inc., Security Operations Center, Miami, Florida \n33132 \n Bhushan Kapoor (Chapters 16, 24, 38), Department of \nInformation Systems and Decision Sciences, California \nState University, Fullerton, California 92834 \n Sokratis K. Katsikas (Chapter 35), Department of \nTechnology Education & Digital Systems, University \nof Piraeus, Piraeus 18532, Greece \n Larry Korba (Chapter 29), Ottawa, Ontario, Canada \nK1G 5N7. \n Kameswari Kotapati (Chapter 12), Department of \nComputer Science and Engineering, The Pennsylvania \nState University, University Park, Pennsylvania 16802 \n Thomas F. 
LaPorta (Chapter 12), Department of \nComputer Science and Engineering, The Pennsylvania \nState University, University Park, Pennsylvania 16802 \n Peng Liu (Chapter 12), College of Information Sciences \nand Technology, The Pennsylvania State University, \nUniversity Park, Pennsylvania 16802 \n Tewfiq El Maliki (Chapter 17), Telecommunications \nlabs, University of Applied Sciences of Geneva, \nGeneva, Switzerland \n" }, { "page_number": 33, "text": "Contributors\nxxxii\n John R. Mallery (Chapter 1), BKD, LLP, Kansas City, \nMissouri 64105-1936 \n Bill Mansoor (Chapter 9), Information Systems Audit \nand Control Association (ISACA), Rancho Santa \nMargarita, California 92688-8741 \n Luther Martin (Chapter 37), Voltage Security, Palo \nAlto, California 94304 \n John McDonald (Chapter 33), EMC Corporation, \nHopkinton, Massachusetts 01748 \n John McGowan (Chapter 33), EMC Corporation, \nHopkinton, Massachusetts 01748 \n Peter F. Nicoletti (Chapter 42), Secure Information \nSystems, Terremark Worldwide, Miami, Florida \n Kevin Noble, CISSP GSEC (Chapter 40), Director, \nSecure Information Services, Terremark Worldwide \nInc., Miami, Florida 33132 \n Pramod Pandya (Chapters 10, 24), Department of \nInformation Systems and Decision Sciences, California \nState University, Fullerton, California 92834 \n Harsh Kupwade Patil (Chapter 32), Department \nof Electrical Engineering, Southern Methodist \nUniversity, Dallas, Texas 75205 \n Ken Perkins (Chapter 43), CIPP (Certified Information \nPrivacy Professional), Sr. Systems Engineer, Blazent \nIncorporated, Denver, Colorado 80206 \n Daniel Ramsbrock (Chapter 8), Department of \nComputer Science, George Mason University, Fairfax, \nVirginia 22030 \n Chunming Rong (Chapters 11, 13), Professor, Ph.D., \nChair of Computer Science Section, Faculty of Science \nand Technology, University of Stavanger, N-4036 \nStavanger, Norway \n Robert Rounsavall (Chapter 34), GCIA, GCWN , \nDirector, SIS – SOC, Terremark Worldwide, Inc., \nMiami, Florida 33131 \n Mario Santana (Chapter 6), Terremark, Dallas, Texas \n75226 \n Jean-Marc Seigneur (Chapters 17, 41), Department of \nSocial and Economic Sciences, University of Geneva, \nSwitzerland \n Daniel S. Soper (Chapter 25), Information and \nDecision Sciences Department, Mihaylo College of \nBusiness and Economics, California State University, \nFullerton, California 92834-6848 \n Terence Spies (Chapter 26), Voltage Security, Inc., Palo \nAlto, California 94304 \n William Stallings (Chapter 36), Independent consult-\nant, Brewster Massachusetts 02631 \n Alex Tsow (Chapter 31), The MITRE Corporation, \nMclean, Virginia 22102 \n Jesse Walker (Chapter 7), Intel Corporation, Hillboro, \nOregon 97124 \n Patrick J. Walsh (Chapter 4), eSoft Inc., Broomfield, \nColorado 80021 \n Xinyuan Wang (Chapter 8), Department of Computer \nScience, George Mason University, Fairfax, Virginia \n22030 \n Michael A. West (Chapter 3), Independent Technical \nWriter, Martinez, California 94553 \n Dan Wing (Chapter 32), Security Technology Group, \nCisco Systems, San Jose, California 95123 \n Joe Wright (Chapters 15, 30), Computer Bits, Inc., \nChicago, Illinois 60602 \n George O.M. 
Yee (Chapter 29), Information Security \nGroup, Institute for Information Technology, National \nResearch Council Canada, Ottawa, Canada K1A 0R6 \n" }, { "page_number": 34, "text": " Overview of System \nand Network Security: \nA Comprehensive \nIntroduction \nPart I\n CHAPTER 1 Building a Secure Organization \n John Mallery \n CHAPTER 2 A Cryptography Primer \n Scott R. Ellis \n CHAPTER 3 Preventing System Intrusions \n Michael West \n CHAPTER 4 Guarding Against Network Intrusions \n Tom Chen and Patrick Walsh \n CHAPTER 5 Unix and Linux Security \n Gerald Beuchelt \n CHAPTER 6 Eliminating the Security Weakness of Linux and UNIX Operating Systems \n Mario Santana \n CHAPTER 7 Internet Security \n Jesse Walker \n CHAPTER 8 The Botnet Problem \n Xinyuan Wang and Daniel Ramsbrock \n CHAPTER 9 Intranet Security \n Bill Mansoor \n CHAPTER 10 Local Area Network Security \n Dr. Pramod Pandya \n" }, { "page_number": 35, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n2\n CHAPTER 11 Wireless Network Security \n Chunming Rong and Erdal Cayirci \n CHAPTER 12 Cellular Network Security \n Peng Liu, Thomas F. LaPorta and Kameswari Kotapati \n CHAPTER 13 RFID Security \n Chunming Rong and Erdal Cayirci \n" }, { "page_number": 36, "text": "3\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Building a Secure Organization \n John Mallery \n BKD, LLP \n Chapter 1 \n It seems logical that any business, whether a commercial \nenterprise or a not-for-profit business, would understand \nthat building a secure organization is important to long-\nterm success. When a business implements and main-\ntains a strong security posture, it can take advantage \nof numerous benefits. An organization that can dem-\nonstrate an infrastructure protected by robust security \nmechanisms can potentially see a reduction in insurance \npremiums being paid. A secure organization can use its \nsecurity program as a marketing tool, demonstrating to \nclients that it values their business so much that it takes \na very aggressive stance on protecting their information. \nBut most important, a secure organization will not have \nto spend time and money identifying security breaches \nand responding to the results of those breaches. \n As of September 2008, according to the National \nConference of State Legislatures, 44 states, the District of \nColumbia, and Puerto Rico had enacted legislation re quiring \nnotification of security breaches involving personal infor-\nmation. 1 Security breaches can cost an organization sig-\nnificantly through a tarnished reputation, lost business, and \nlegal fees. And numerous regulations, such as the Health \nInsurance Portability and Accountability Act (HIPAA), the \nGramm-Leach-Bliley Act (GLBA), and the Sarbanes-Oxley \nAct, require businesses to maintain the security of informa-\ntion. Despite the benefits of maintaining a secure organi-\nzation and the potentially devastating consequences of not \ndoing so, many organizations have poor security mecha-\nnisms, implementations, policies, and culture. \n 1. OBSTACLES TO SECURITY \n In attempting to build a secure organization, we should \ntake a close look at the obstacles that make it challeng-\ning to build a totally secure organization. \n Security Is Inconvenient \n Security, by its very nature, is inconvenient, and the \nmore robust the security mechanisms, the more incon-\nvenient the process becomes. 
Employees in an organi-\nzation have a job to do; they want to get to work right \naway. Most security mechanisms, from passwords to \nmultifactor authentication, are seen as roadblocks to pro-\nductivity. One of the current trends in security is to add \nwhole disk encryption to laptop computers. Although \nthis is a highly recommended security process, it adds \na second login step before a computer user can actually \nstart working. Even if the step adds only one minute to \nthe login process, over the course of a year this adds up to \nfour hours of lost productivity. Some would argue that this \nlost productivity is balanced by the added level of security. \nBut across a large organization, this lost productivity \ncould prove significant. \n To gain a full appreciation of the frustration caused by \nsecurity measures, we have only to watch the Transportation \nSecurity Administration (TSA) security lines at any airport. \nSimply watch the frustration build as a particular item is \nrun through the scanner for a third time while a passenger \nis running late to board his flight. Security implementations \nare based on a sliding scale; one end of the scale is total \nsecurity and total inconvenience, the other is total insecurity \nand complete ease of use. When we implement any secu-\nrity mechanism, it should be placed on the scale where the \nlevel of security and ease of use match the acceptable level \nof risk for the organization. \n Computers Are Powerful and Complex \n Home computers have become storehouses of personal \nmaterials. Our computers now contain wedding videos, \nscanned family photos, music libraries, movie collec-\ntions, and financial and medical records. Because com-\nputers contain such familiar objects, we have forgotten \n 1 www.ncsl.org/programs/lis/cip/priv/breachlaws.htm (October 2, 2008). \n" }, { "page_number": 37, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n4\n that computers are very powerful and complex devices. \nIt wasn’t that long ago that computers as powerful as our \ndesktop and laptop computers would have filled one or \nmore very large rooms. In addition, today’s computers \npresent a “ user-friendly ” face to the world. Most people \nare unfamiliar with the way computers truly function and \nwhat goes on “ behind the scenes. ” Things such as the \nWindows Registry, ports, and services are completely \nunknown to most users and poorly understood by many \ncomputer industry professionals. For example, many indi-\nviduals still believe that a Windows login password pro-\ntects data on a computer. On the contrary — someone can \nsimply take the hard drive out of the computer, install it \nas a slave drive in another computer, or place it in a USB \ndrive enclosure, and all the data will be readily accessible. \n Computer Users Are Unsophisticated \n Many computer users believe that because they are skilled \nat generating spreadsheets, word processing documents, \nand presentations, they “ know everything about comput-\ners. ” These “ power users ” have moved beyond application \nbasics, but many still do not understand even basic security \nconcepts. Many users will indiscriminately install software \nand visit questionable Web sites despite the fact that these \nactions could violate company policies. The “ bad guys ” —\n people who want to steal information from or wreak havoc \non computers systems — have also identified that the aver-\nage user is a weak link in the security chain. 
As compa-\nnies began investing more money in perimeter defenses, \nattackers look to the path of least resistance. They send \nmalware as attachments to email, asking recipients to open \nthe attachment. Despite being told not to open attachments \nfrom unknown senders or simply not to open attachments \nat all, employees consistently violate this policy, wreaking \nhavoc on their networks. The “ I Love You Virus ” spread \nvery rapidly in this manner. More recently, phishing scams \nhave been very effective in convincing individuals to pro-\nvide their personal online banking and credit-card infor-\nmation. Why would an attacker struggle to break through \nan organization’s defenses when end users are more than \nwilling to provide the keys to bank accounts? Addressing \nthe threat caused by untrained and unwary end users is a \nsignificant part of any security program. \n Computers Created Without a Thought \nto Security \n During the development of personal computers (PCs), \nno thought was put into security. Early PCs were very \nsimple affairs that had limited computing power and no \nkeyboards and were programmed by flipping a series \nof switches. They were developed almost as curiosities. \nEven as they became more advanced and complex, all \neffort was focused on developing greater sophistication \nand capabilities; no one thought they would have secu-\nrity issues. We only have to look at some of the early \ncomputers, such as the Berkeley Enterprises Geniac, the \nHeathkit EC-1, or the MITS Altair 8800, to understand \nwhy security was not an issue back then. 2 The develop-\nment of computers was focused on what they could do, \nnot how they could be attacked. \n As computers began to be interconnected, the driving \nforce was providing the ability to share information, cer-\ntainly not to protect it. Initially the Internet was designed \nfor military applications, but eventually it migrated to \ncolleges and universities, the principal tenet of which is \nthe sharing of knowledge. \n Current Trend Is to Share, Not Protect \n Even now, despite the stories of compromised data, \npeople still want to share their data with everyone. And \nWeb-based applications are making this easier to do than \nsimply attaching a file to an email. Social networking \nsites such as SixApart provide the ability to share mate-\nrial: “ Send messages, files, links, and events to your \nfriends. Create a network of friends and share stuff. It’s \nfree and easy . . . ” 3 In addition, many online data stor-\nage sites such as DropSend 4 and FilesAnywhere 5 pro-\nvide the ability to share files. Although currently in the \nbeta state of development, Swivel 6 provides the ability \nto upload data sets for analysis and comparison. These \nsites can allow proprietary data to leave an organization \nby bypassing security mechanisms. \n Data Accessible from Anywhere \n As though employees ’ desire to share data is not enough \nof a threat to proprietary information, many business \nprofessionals want access to data from anywhere they \nwork, on a variety of devices. To be productive, employ-\nees now request access to data and contact information \non their laptops, desktops, home computers, and mobile \ndevices. Therefore, IT departments must now provide \n 2 “ Pop quiz: What was the fi rst personal computer? ” www.blinkenlights.\ncom/pc.shtml (October 26, 2008). \n 3 http://www.sixapart.com (March 24, 2009). \n 4 www.dropsend.com (October 26, 2008). \n 5 www.fi lesanywhere.com (October 26, 2008). 
\n 6 www.swivel.com (October 26, 2008). \n" }, { "page_number": 38, "text": "Chapter | 1 Building a Secure Organization\n5\n the ability to sync data with numerous devices. And if \nthe IT department can’t or won’t provide this capability, \nemployees now have the power to take matters into their \nown hands. \n Previously mentioned online storage sites can be \naccessed from both the home and office or anywhere \nthere is an Internet connection. Though it might be pos-\nsible to block access to some of these sites, it is not possi-\nble to block access to them all. And some can appear \nrather innocuous. For many, Google’s free email serv-\nice Gmail is a great tool that provides a very robust service \nfor free. What few people realize is that Gmail provides \nmore than 7 GB of storage that can also be used to store \nfiles, not just email. The Gspace plug-in 7 for the Firefox \nbrowser provides an FTP-like interface within Firefox \nthat gives users the ability to transfer files from a compu-\nter to their Gmail accounts. This ability to easily transfer \ndata outside the control of a company makes securing an \norganization’s data that much more difficult. \n Security Isn’t About Hardware and Software \n Many businesses believe that if they purchase enough \nequipment, they can create a secure infrastructure. \nFirewalls, intrusion detection systems, antivirus programs, \nand two-factor authentication products are just some of \nthe tools available to assist in protecting a network and \nits data. It is important to keep in mind that no product \nor combination of products will create a secure organiza-\ntion by itself. Security is a process; there is no tool that \nyou can “ set and forget. ” All security products are only \nas secure as the people who configure and maintain them. \nThe purchasing and implementation of security products \nshould be only a percentage of the security budget. The \nemployees tasked with maintaining the security devices \nshould be provided with enough time, training, and equip-\nment to properly support the products. Unfortunately, in \nmany organizations security activities take a back seat to \nsupport activities. Highly skilled security professionals \nare often tasked with help-desk projects such as resetting \nforgotten passwords, fixing jammed printers, and setting \nup new employee workstations. \n The Bad Guys Are Very Sophisticated \n At one time the computer hacker was portrayed as a lone \nteenager with poor social skills who would break into \nsystems, often for nothing more than bragging rights. As \necommerce has evolved, however, so has the profile of \nthe hacker. \n Now that there are vast collections of credit-card \nnumbers and intellectual property that can be harvested, \norganized hacker groups have been formed to oper-\nate as businesses. A document released in 2008 spells \nit out clearly: “ Cybercrime companies that work much \nlike real-world companies are starting to appear and are \nsteadily growing, thanks to the profits they turn. Forget \nindividual hackers or groups of hackers with common \ngoals. Hierarchical cybercrime organizations where each \ncybercriminal has his or her own role and reward sys-\ntem is what you and your company should be worried \nabout. ” 8 \n Now that organizations are being attacked by highly \nmotivated and skilled groups of hackers, creating a \nsecure infrastructure is mandatory. 
\n Management Sees Security as a Drain on \nthe Bottom Line \n For most organizations, the cost of creating a strong secu-\nrity posture is seen as a necessary evil, similar to pur-\nchasing insurance. Organizations don’t want to spend the \nmoney on it, but the risks of not making the purchase out-\nweigh the costs. Because of this attitude, it is extremely \nchallenging to create a secure organization. The attitude is \nenforced because requests for security tools are often sup-\nported by documents providing the average cost of a secu-\nrity incident instead of showing more concrete benefits of \na strong security posture. The problem is exacerbated by \nthe fact that IT professionals speak a different language \nthan management. IT professionals are generally focused \non technology, period. Management is focused on rev-\nenue. Concepts such as profitability, asset depreciation, \nreturn on investment, realization, and total cost of own-\nership are the mainstays of management. These are alien \nconcepts to most IT professionals. \n Realistically speaking, though it would be helpful if \nmanagement would take steps to learn some fundamentals \nof information technology, IT professionals should take the \ninitiative and learn some fundamental business concepts. \nLearning these concepts is beneficial to the organization \nbecause the technical infrastructure can be implemented \nin a cost-effective manner, and they are beneficial from a \ncareer development perspective for IT professionals. \n 7 www.getgspace.com (October 27, 2008). \n 8 “ Report: Cybercrime groups starting to operate like the Mafi a, ” pub-\nlished July 16, 2008, http://arstechnica.com/news.ars/post/20080716-\nreport-cybercrime-groups-starting-to-operate-like-the-mafia.html \n(October 27, 2008). \n" }, { "page_number": 39, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n6\n A Google search on “ business skills for IT profession-\nals ” will identify numerous educational programs that \nmight prove helpful. For those who do not have the time \nor the inclination to attend a class, some very useful mate-\nrials can be found online. One such document provided by \nthe Government Chief Information Office of New South \nWales is A Guide for Government Agencies Calculating \nReturn on Security Investment . 9 Though extremely techni-\ncal, another often cited document is Cost-Benefit Analysis \nfor Network Intrusion Detection Systems, by Huaqiang \nWei, Deb Frinke, Olivia Carter, and Chris Ritter. 10 \n Regardless of the approach that is taken, it is impor-\ntant to remember that any tangible cost savings or rev-\nenue generation should be utilized when requesting new \nsecurity products, tools, or policies. Security profession-\nals often overlook the value of keeping Web portals open \nfor employees. A database that is used by a sales staff to \nenter contracts or purchases or check inventory will help \ngenerate more revenue if it has no downtime. A database \nthat is not accessible or has been hacked is useless for \ngenerating revenue. \n Strong security can be used to gain a competitive \nadvantage in the marketplace. Having secured systems \nthat are accessible 24 hours a day, seven days a week \nmeans that an organization can reach and communicate \nwith its clients and prospective clients more efficiently. \nAn organization that becomes recognized as a good cus-\ntodian of client records and information can incorporate \nits security record as part of its branding. 
This is no dif-\nferent than a car company being recognized for its safety \nrecord. In discussions of cars and safety, for example, \nVolvo is always the first manufacturer mentioned. 11 \n What must be avoided is the “ sky is falling ” mental-\nity. There are indeed numerous threats to a network, but \nwe need to be realistic in allocating resources to protect \nagainst these threats. As of this writing, the National \nVulnerability Database sponsored by the National \nInstitute of Standards and Technology (NIST) lists \n33,428 common vulnerabilities and exposures and pub-\nlishes 18 new vulnerabilities per day. 12 In addition, the \nmedia is filled with stories of stolen laptops, credit-card \nnumbers, and identities. The volume of threats to a net-\nwork can be mind numbing. It is important to approach \nmanagement with “ probable threats ” as opposed to \n “ describable threats. ” Probable threats are those that are \nmost likely to have an impact on your business and the \nones most likely to get the attention of management. \n Perhaps the best approach is to recognize that manage-\nment, including the board of directors, is required to exhibit \na duty of care in protecting their assets that is comparable \nto other organizations in their industry. When a security \nbreach or incident occurs, being able to demonstrate the \nhigh level of security within the organization can signifi-\ncantly reduce exposure to lawsuits, fines, and bad press. \n The goal of any discussion with management is to \nconvince them that in the highly technical and intercon-\nnected world we live in, having a secure network and \ninfrastructure is a “ nonnegotiable requirement of doing \nbusiness. ” 13 An excellent resource for both IT profes-\nsionals and executives that can provide insight into \nthese issues is CERT’s technical report, Governing for \nEnterprise Security . 14 \n 2. TEN STEPS TO BUILDING A SECURE \nORGANIZATION \n Having identified some of the challenges to building a \nsecure organization, let’s now look at 10 ways to suc-\ncessfully build a secure organization. The following \nsteps will put a business in a robust security posture. \n A. Evaluate the Risks and Threats \n In attempting to build a secure organization, where should \nyou start? One commonly held belief is that you should \ninitially identify your assets and allocate security resources \nbased on the value of each asset. Though this approach \nmight prove effective, it can lead to some significant vul-\nnerabilities. An infrastructure asset might not hold a high \nvalue, for example, but it should be protected with the same \neffort as a high-value asset. If not, it could be an entry point \ninto your network and provide access to valuable data. \n Another approach is to begin by evaluating the \nthreats posed to your organization and your data. \n Threats Based on the Infrastructure Model \n The first place to start is to identify risks based on an \norganization’s infrastructure model. What infrastructure \nis in place that is necessary to support the operational \n 9 www.gcio.nsw.gov.au/library/guidelines/resolveuid/87c81d4c6af\nbc1ae163024bd38aac9bd (October 29, 2008). \n 10 www.csds.uidaho.edu/deb/costbenefi t.pdf (October 29, 2008). \n 11 “ Why leaders should care about security ” podcast, October 17, \n2006, Julia Allen and William Pollak, www.cert.org/podcast/show/\n20061017allena.html (November 2, 2008). \n 12 http://nvd.nist.gov/home.cfm (October 29, 2008). 
\n 13 “ Why leaders should care about security ” podcast, October 17, \n2006, Julia Allen and William Pollak, www.cert.org/podcast/show/\n20061017allena.html (November 2, 2008). \n 14 www.cert.org/archive/pdf/05tn023.pdf . \n" }, { "page_number": 40, "text": "Chapter | 1 Building a Secure Organization\n7\n needs of the business? A small business that operates out \nof one office has reduced risks as opposed to an organi-\nzation that operates out of numerous facilities, includes a \nmobile workforce utilizing a variety of handheld devices, \nand offers products or services through a Web-based \ninterface. An organization that has a large number of \ntelecommuters must take steps to protect its proprietary \ninformation that could potentially reside on personally \nowned computers outside company control. An organi-\nzation that has widely dispersed and disparate systems \nwill have more risk potential than a centrally located one \nthat utilizes uniform systems. \n Threats Based on the Business Itself \n Are there any specific threats for your particular busi-\nness? Have high-level executives been accused of inap-\npropriate activities whereby stockholders or employees \nwould have incentive to attack the business? Are there \nany individuals who have a vendetta against the company \nfor real or imagined slights or accidents? Does the com-\nmunity have a history of antagonism against the organi-\nzation? A risk management or security team should be \nasking these questions on a regular basis to evaluate the \nrisks in real time. This part of the security process is \noften overlooked due to the focus on daily workload. \n Threats Based on Industry \n Businesses belonging to particular industries are targeted \nmore frequently and with more dedication than those in \nother industries. Financial institutions and online retail-\ners are targeted because “ that’s where the money is. ” \nPharmaceutical manufacturers could be targeted to steal \nintellectual property, but they also could be targeted by \nspecial interest groups, such as those that do not believe \nin testing drugs on live animals. \n Identifying some of these threats requires active \ninvolvement in industry-specific trade groups in which \nbusinesses share information regarding recent attacks or \nthreats they have identified. \n Global Threats \n Businesses are often so narrowly focused on their local \nsphere of influence that they forget that by having a net-\nwork connected to the Internet, they are now connected to \nthe rest of the world. If a piece of malware identified on \nthe other side of the globe targets the identical software \nused in your organization, you can be sure that you will \neventually be impacted by this malware. Additionally, \nif extremist groups in other countries are targeting your \nspecific industry, you will also be targeted. \n Once threats and risks are identified, you can take \none of four steps: \n ● Ignore the risk. This is never an acceptable response. \nThis is simply burying your head in the sand and \nhoping the problem will go away — the business \nequivalent of not wearing a helmet when riding a \nmotorcycle. \n ● Accept the risk. When the cost to remove the risk is \ngreater than the risk itself, an organization will often \ndecide to simply accept the risk. This is a viable \noption as long as the organization has spent the time \nrequired to evaluate the risk. \n ● Transfer the risk . Organizations with limited staff \nor other resources could decide to transfer the risk. 
\nOne method of transferring the risk is to purchase \nspecialized insurance targeted at a specific risk. \n ● Mitigate the risk . Most organizations mitigate risk by \napplying the appropriate resources to minimize the \nrisks posed to their network. \n For organizations that would like to identify and \nquantify the risks to their network and information \nassets, CERT provides a free suite of tools to assist with \nthe project. Operationally Critical Threat, Asset, and \nVulnerability Evaluation (OCTAVE) provides risk-based \nassessment for security assessments and planning. 15 There \nare three versions of OCTAVE: the original OCTAVE, \ndesigned for large organizations (more than 300 employ-\nees); OCTAVE-S (100 people or fewer); and OCTAVE-\nAllegro, which is a streamlined version of the tools and is \nfocused specifically on information assets. \n Another risk assessment tool that might prove helpful is \nthe Risk Management Framework developed by Educause/\nInternet 2. 16 Targeted at institutions of higher learning, the \napproach could be applied to other industries. \n Tracking specific threats to specific operating sys-\ntems, products, and applications can be time consuming. \nVisiting the National Vulnerability Database and manu-\nally searching for specific issues would not necessarily \nbe an effective use of time. Fortunately, the Center for \nEducation and Research in Information Assurance and \nSecurity (CERIAS) at Purdue University has a tool called \nCassandra that can be configured to notify you of specific \nthreats to your particular products and applications. 17 \n 15 OCTAVE, www.cert.org/octave/ (November 2, 2008). \n 16 Risk Management Framework, https://wiki.internet2.edu/confl uence/\ndisplay/secguide/Risk \u0002 Management \u0002 Framework . \n 17 Cassandra, https://cassandra.cerias.purdue.edu/main/index.html . \n" }, { "page_number": 41, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n8\n B. Beware of Common Misconceptions \n In addressing the security needs of an organization, it \nis common for professionals to succumb to some very \ncommon misconceptions. Perhaps the most common \nmisconception is that the business is obscure, unsophisti-\ncated, or boring — simply not a target for malicious activ-\nity. Businesses must understand that any network that is \nconnected to the Internet is a potential target, regardless \nof the type of business. \n Attackers will attempt to gain access to a network \nand its systems for several reasons. The first is to look \naround to see what they can find. Regardless of the type \nof business, personnel information will more than likely \nbe stored on one of the systems. This includes Social \nSecurity numbers and other personal information. This \ntype of information is a target — always. \n Another possibility is that the attacker will modify \nthe information he or she finds or simply reconfigure the \nsystems to behave abnormally. This type of attacker is not \ninterested in financial gain; he is simply the technology \nversion of teenagers who soap windows, egg cars, and \ncover property with toilet paper. He attacks because he \nfinds it entertaining to do so. Additionally, these attackers \ncould use the systems to store stolen “ property ” such as \nchild pornography or credit-card numbers. If a system is \nnot secure, attackers can store these types of materials on \nyour system and gain access to them at their leisure. 
\n The final possibility is that an attacker will use the \nhacked systems to mount attacks on other unprotected \nnetworks and systems. Computers can be used to mount \ndenial-of-service (DoS) attacks, relay spam, or spread \nmalicious software. To put it simply, no computer or net-\nwork is immune from attack. \n Another common misconception is that an organi-\nzation is immune from problems caused by employees, \nessentially saying, “ We trust all our employees, so we \ndon’t have to focus our energies on protecting our assets \nfrom them. ” Though this is common for small businesses \nin which the owners know everyone, it also occurs in \nlarger organizations where companies believe that they \nonly hire “ professionals. ” It is important to remember \nthat no matter how well job candidates present them-\nselves, a business can never know everything about an \nemployee’s past. For this reason it is important for busi-\nnesses to conduct preemployment background checks of \nall employees. Furthermore, it is important to conduct \nthese background checks properly and completely. \n Many employers trust this task to an online solution \nthat promises to conduct a complete background check \non an individual for a minimal fee. Many of these sites \nplay on individuals ’ lack of understanding of how some \nof these online databases are generated. These sites \nmight not have access to the records of all jurisdictions, \nsince many jurisdictions either do not make their records \navailable online or do not provide them to these data-\nbases. In addition, many of the records are entered by \nminimum wage data-entry clerks whose accuracy is not \nalways 100 percent. \n Background checks should be conducted by organi-\nzations that have the resources at their disposal to get \ncourt records directly from the courthouses where the \nrecords are generated and stored. Some firms have a \nteam of “ runners ” who visit the courthouses daily to pull \nrecords; others have a network of contacts who can visit \nthe courts for them. Look for organizations that are active \nmembers of the National Association of Professional \nBackground Screeners. 18 Members of this organization \nare committed to providing accurate and professional \nresults. And perhaps more important, they can provide \ncounseling regarding the proper approach to take as well \nas interpreting the results of a background check. \n If your organization does not conduct background \nchecks, there are several firms that might be of assistance: \nAccurate Background, Inc., of Lake Forest, California 19 ; \nCredential Check, Inc., of Troy, Michigan 20 ; and Validity \nScreening Solutions in Overland Park, Kansas. 21 The \nWeb sites of these companies all provide informational \nresources to guide you in the process. ( Note: For busi-\nnesses outside the United States or for U.S. businesses \nwith locations overseas, the process might be more dif-\nficult because privacy laws could prevent conducting a \ncomplete background check. The firms we’ve mentioned \nshould be able to provide guidance regarding international \nprivacy laws.) \n Another misconception is that a preemployment \nbackground check is all that is needed. Some errone-\nously believe that once a person is employed, he or she \nis “ safe ” and can no longer pose a threat. However, peo-\nple’s lives and fortunes can change during the course of \nemployment. Financial pressures can cause otherwise \nlaw-abiding citizens to take risks they never would have \nthought possible. 
Drug and alcohol dependency can alter \npeople’s behavior as well. For these and other reasons \nit is a good idea to do an additional background check \nwhen an employee is promoted to a position of higher \nresponsibility and trust. If this new position involves \n 18 National Association of Professional Background Screeners, \n www.napbs.com . \n 19 www.accuratebackground.com . \n 20 www.credentialcheck.com . \n 21 www.validityscreening.com . \n" }, { "page_number": 42, "text": "Chapter | 1 Building a Secure Organization\n9\n handling financial responsibilities, the background check \nshould also include a credit check. \n Though these steps might sound intrusive, which is \nsometimes a reason cited not to conduct these types of \nchecks, they can also be very beneficial to the employee \nas well as the employer. If a problem is identified dur-\ning the check, the employer can often offer assistance to \nhelp the employee get through a tough time. Financial \ncounseling and substance abuse counseling can often \nturn a potentially problematic employee into a very loyal \nand dedicated one. \n Yet another common misconception involves infor-\nmation technology (IT) professionals. Many businesses \npay their IT staff fairly high salaries because they \nunderstand that having a properly functioning techni-\ncal infrastructure is important for the continued success \nof the company. Since the staff is adept at setting up \nand maintaining systems and networks, there is a gen-\neral assumption that they know everything there is to \nknow about computers. It is important to recognize that \nalthough an individual might be very knowledgeable and \ntechnologically sophisticated, no one knows everything \nabout computers. Because management does not under-\nstand technology, they are not in a very good position to \njudge a person’s depth of knowledge and experience in \nthe field. Decisions are often based on the certifications \na person has achieved during his or her career. Though \ncertifications can be used to determine a person’s level \nof competency, too much weight is given to them. Many \ncertifications require nothing more than some time and \ndedication to study and pass a certification test. Some \ntraining companies also offer boot camps that guaran-\ntee a person will pass the certification test. It is possible \nfor people to become certified without having any real-\nworld experience with the operating systems, applica-\ntions, or hardware addressed by the certification. When \njudging a person’s competency, look at his or her expe-\nrience level and background first, and if the person has \nachieved certifications in addition to having significant \nreal-world experience, the certification is probably a \nreflection of the employee’s true capabilities. \n The IT staff does a great deal to perpetuate the image \nthat they know everything about computers. One of the \nreasons people get involved with the IT field in the first \nplace is because they have an opportunity to try new \nthings and overcome new challenges. This is why when \nan IT professional is asked if she knows how to do some-\nthing, she will always respond “ Yes. ” But in reality the \nreal answer should be, “ No, but I’ll figure it out. ” Though \nthey frequently can figure things out, when it comes to \nsecurity we must keep in mind that it is a specialized area, \nand implementing a strong security posture requires sig-\nnificant training and experience. \n C. 
Provide Security Training for IT Staff — Now and Forever

Just as implementing a robust, secure environment is a dynamic process, creating a highly skilled staff of security professionals is also a dynamic process. It is important to keep in mind that even though an organization’s technical infrastructure might not change that frequently, new vulnerabilities are being discovered and new attacks are being launched on a regular basis. In addition, very few organizations have a stagnant infrastructure; employees are constantly requesting new software, and more technologies are added in an effort to improve efficiencies. Each new addition likely adds additional security vulnerabilities.

It is important for the IT staff to be prepared to identify and respond to new threats and vulnerabilities. It is recommended that those interested in gaining a deep security understanding start with a vendor-neutral program. A vendor-neutral program is one that focuses on concepts rather than specific products. The SANS (SysAdmin, Audit, Network, Security) Institute offers two introductory programs: Intro to Information Security (Security 301), 22 a five-day class designed for people just starting out in the security field, and the SANS Security Essentials Bootcamp (Security 401), 23 a six-day class designed for people with some security experience. Each class is also available as a self-study program, and each can be used to prepare for a specific certification. Another option is to start with a program that follows the CompTIA Security+ certification requirements, such as the Global Knowledge Essentials of Information Security. 24 Some colleges offer similar programs.

22 SANS Intro to Computer Security, www.sans.org.
23 SANS Security Essentials Bootcamp, www.sans.org.
24 www.globalknowledge.com/training/course.asp?pageid=9&courseid=10242&catid=191&country=United+States.

Once a person has a good fundamental background in security, he should then undergo vendor-specific training to apply the concepts learned to specific applications and security devices.

A great resource for keeping up with current trends in security is to become actively involved in a security-related trade organization. The key concept here is actively involved. Many professionals join organizations so that they can add an item to the “professional affiliations” section of their résumé. Becoming actively involved means attending meetings on a regular basis and serving on a committee or in a position on the executive board. Though this seems like a daunting time commitment, the benefit is that the professional develops a network of resources that can be available to provide insight, serve as a sounding board, or provide assistance when a problem arises. Participating in these associations is a very cost-effective way to get up to speed with current security trends and issues. Here are some organizations 25 that can prove helpful:

● ASIS International, the largest security-related organization in the world, focuses primarily on physical security but has more recently started addressing computer security as well.
● ISACA, formerly the Information Systems Audit and Control Association.
● High Technology Crime Investigation Association (HTCIA).
● Information Systems Security Association (ISSA).
● InfraGard, a joint public and private organization sponsored by the Federal Bureau of Investigation (FBI).

In addition to monthly meetings, many local chapters of these organizations sponsor regional conferences that are usually very reasonably priced and attract nationally recognized experts.

Arguably one of the best ways to determine whether an employee has a strong grasp of information security concepts is whether she can achieve the Certified Information Systems Security Professional (CISSP) certification. Candidates for this certification are tested on their understanding of the following 10 knowledge domains:

● Access control
● Application security
● Business continuity and disaster recovery planning
● Cryptography
● Information security and risk management
● Legal, regulations, compliance, and investigations
● Operations security
● Physical (environmental) security
● Security architecture and design
● Telecommunications and network security

What makes this certification so valuable is that the candidate must have a minimum of five years of professional experience in the information security field or four years of experience and a college degree. To maintain certification, a certified individual is required to attend 120 hours of continuing professional education during the three-year certification cycle. This ensures that those holding the CISSP credential are staying up to date with current trends in security. The CISSP certification is maintained by (ISC)². 26

D. Think “Outside the Box”

For most businesses, the threat to their intellectual assets and technical infrastructure comes from the “bad guys” sitting outside their organizations, trying to break in. These organizations establish strong perimeter defenses, essentially “boxing in” their assets. However, internal employees have access to proprietary information to do their jobs, and they often disseminate this information to areas where it is no longer under the control of the employer. This dissemination of data is generally not performed with any malicious intent; employees simply want access to data so that they can perform their job responsibilities more efficiently. It also becomes a problem when an employee leaves (or when a still-employed person loses something like a laptop with proprietary information stored on it) and the organization takes no steps to collect or control the proprietary information in the possession of the now ex-employee.

One of the most overlooked threats to intellectual property is the innocuous and now ubiquitous USB Flash drive. These devices, the size of a tube of lipstick, are the modern-day floppy disk in terms of portable data storage. They are a very convenient way to transfer data between computers. But the difference between these devices and a floppy disk is that USB Flash drives can store a very large amount of data. A 16 GB USB Flash drive has the same storage capacity as more than 10,000 floppy disks! As of this writing, a 16 GB USB Flash drive can be purchased for as little as $30. Businesses should keep in mind that as time goes by, the capacity of these devices will increase and the price will decrease, making them very attractive to employees.

These devices are not the only threat to data.
Because \nother devices can be connected to the computer through \nthe USB port, digital cameras, MP3 players, and exter-\nnal hard drives can now be used to remove data from a \ncomputer and the network to which it is connected. Most \npeople would recognize that external hard drives pose a \nthreat, but they would not recognize other devices as a \nthreat. Cameras and music players are designed to store \nimages and music, but to a computer they are simply \n 25 ASIS International, www.asisonline.org ; ISACA, www.isaca.org ; \nHTCIA, www.htcia.org ; ISSA, www.issa.org ; InfraGard, www.infragard.\nnet . \n 26 (ISC) 2 , www.isc2.org . \n" }, { "page_number": 44, "text": "Chapter | 1 Building a Secure Organization\n11\n additional mass storage devices. It is difficult for people to \nunderstand that an iPod can carry word processing docu-\nments, databases, and spreadsheets as well as music. \nFortunately, Microsoft Windows tracks the devices that \nare connected to a system in a Registry key, HKEY_\nLocal_Machine\\System\\ControlSet00x\\Enum\\USBStor. \nIt might prove interesting to look in this key on your own \ncomputer to see what types of devices have been connected. \n Figure 1.1 shows a wide array of devices that have been \nconnected to a system that includes USB Flash drives, a \ndigital camera, and several external hard drives. \n Windows Vista has an additional key that tracks \nconnected devices: HKEY_Local_Machine\\Software\\\nMicrosoft\\Windows Portable Devices\\Devices. 27 ( Note: \nAnalyzing the Registry is a great way to investigate the \nactivities of computer users. For many, however, the \nRegistry is tough to navigate and interpret. If you are \ninterested in understanding more about the Registry, you \nmight want to download and play with Harlan Carvey’s \nRegRipper. 28 ) \n Another threat to information that carries data outside \nthe walls of the organization is the plethora of handheld \ndevices currently in use. Many of these devices have the \nability to send and receive email as well as create, store, \nand transmit word processing, spreadsheet, and PDF \nfiles. Though most employers will not purchase these \ndevices for their employees, they are more than happy \nto allow their employees to sync their personally owned \n FIGURE 1.1 Identifying connected USB devices in the USBStor \nRegistry key. \ndevices with their corporate computers. Client contact \ninformation, business plans, and other materials can eas-\nily be copied from a system. Some businesses feel that \nthey have this threat under control because they provide \ntheir employees with corporate-owned devices and they \ncan collect these devices when employees leave their \nemployment. The only problem with this attitude is that \nemployees can easily copy data from the devices to their \nhome computers before the devices are returned. \n Because of the threat of portable data storage devices \nand handheld devices, it is important for an organization \nto establish policies outlining the acceptable use of these \ndevices as well as implementing an enterprise-grade \nsolution to control how, when, or if data can be copied \nto them. Filling all USB ports with epoxy is a cheap \nsolution, but it is not really effective. Fortunately there \nare several products that can protect against this type \nof data leak. DeviceWall from Centennial Software 29 \nand Mobile Security Enterprise Edition from Bluefire \nSecurity Technologies 30 are two popular ones. 
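To see what this Registry key records on your own machine, a few lines of Python are enough. The sketch below is offered only as an illustration of the point made above, not as a tool from this chapter: it assumes a Windows system with Python installed, it substitutes CurrentControlSet for the ControlSet00x notation used in the text, and it only reads the Registry.

```python
# Illustrative sketch (not from the chapter): list the USB storage devices
# that Windows has recorded under the USBStor Registry key. Read-only;
# assumes Windows and Python's standard winreg module. CurrentControlSet
# stands in for the ControlSet00x key discussed in the text.
import winreg

USBSTOR = r"SYSTEM\CurrentControlSet\Enum\USBStor"

def recorded_usb_storage():
    """Yield (device_class, instance_id, friendly_name) for each recorded device."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR) as root:
        for i in range(winreg.QueryInfoKey(root)[0]):          # number of subkeys
            device_class = winreg.EnumKey(root, i)             # e.g. Disk&Ven_...&Prod_...
            with winreg.OpenKey(root, device_class) as cls:
                for j in range(winreg.QueryInfoKey(cls)[0]):
                    instance_id = winreg.EnumKey(cls, j)        # often derived from the serial number
                    with winreg.OpenKey(cls, instance_id) as inst:
                        try:
                            name = winreg.QueryValueEx(inst, "FriendlyName")[0]
                        except OSError:
                            name = "(no friendly name recorded)"
                    yield device_class, instance_id, name

if __name__ == "__main__":
    for device_class, instance_id, name in recorded_usb_storage():
        print(f"{name}\n    class: {device_class}\n    id:    {instance_id}")
```

Run against a typical workstation, the output tends to resemble Figure 1.1: personal Flash drives, cameras, and music players listed alongside whatever hardware the IT department actually issued. Tools such as RegRipper automate this kind of extraction across many keys at once; the point of the sketch is simply that evidence of device use is already sitting on every system.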
\n Another way that data leaves control of an organiza-\ntion is through the use of online data storage sites. These \nsites provide the ability to transfer data from a compu-\nter to an Internet-accessible location. Many of these sites \nprovide 5 GB or more of free storage. Though it is cer-\ntainly possible to blacklist these sites, there are so many, \nand more are being developed on a regular basis, that it is \ndifficult if not impossible to block access to all of them. \nOne such popular storage location is the storage space \nprovided with a Gmail account. Gmail provides a large \namount of storage space with its free accounts (7260 \nMB as of this writing, and growing). To access this stor-\nage space, users must use the Firefox browser with the \nGspace plugin installed. 31 Once logged in, users can \ntransfer files simply by highlighting the file and clicking \nan arrow. Figure 1.2 shows the Gspace interface. \n Another tool that will allow users to access the stor-\nage space in their Gmail account is the Gmail Drive \nshell extension. 32 This shell extension places a drive \nicon in Windows Explorer, allowing users to copy files \nto the online storage location as though it were a normal \nmapped drive. Figure 1.3 shows the Gmail Drive icon in \nWindows Explorer. \n Apple has a similar capability for those users with \na MobileMe account. This drive is called iDisk and \n 27 http://windowsir.blogspot.com/2008/06/portable-devices-on-vista.\nhtml (November 8, 2008). \n 28 RegRipper, www.regripper.net . \n 29 DeviceWall, www.devicewall.com . \n 30 Bluefi re Security Technologies, 1010 Hull St., Ste. 210, Baltimore, \nMd. 21230. \n 31 Gspace, www.getgspace.com . \n 32 Gmail Drive, www.viksoe.dk/code/gmail.htm . \n" }, { "page_number": 45, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n12\n appears in the Finder. People who utilize iDisk can \naccess the files from anywhere using a Web browser, \nbut they can also upload files using the browser. Once \nuploaded, the files are available right on the user’s desk-\ntop, and they can be accessed like any other file. Figures \n1.4 and 1.5 show iDisk features. \n In addition, numerous sites provide online storage. \nA partial list is included here: \n ● ElephantDrive: www.elephantdrive.com \n ● Mozy: www.mozy.com \n ● Box: www.box.net \n ● Carbonite: www.carbonite.com \n ● Windows Live SkyDrive: www.skydrive.live.com \n ● FilesAnywhere: www.filesanywhere.com \n ● Savefile: www.savefile.com \n ● Spare Backup: www.sparebackup.com \n ● Digitalbucket.net: www.digitalbucket.net \n ● Memeo: www.memeo.com \n ● Biscu.com: www.biscu.com \n Note: Though individuals might find these sites \nconvenient and easy to use for file storage and backup \npurposes, businesses should think twice about storing \ndata on them. The longevity of these sites is not guar-\nanteed. For example, Xdrive, a popular online storage \nservice created in 1999 and purchased by AOL in 2005 \n(allegedly for US$30 million), shut down on January 12, \n2009. \n E. Train Employees: Develop a Culture \nof Security \n One of the greatest security assets is a business’s own \nemployees, but only if they have been properly trained \nto comply with security policies and to identify potential \nsecurity problems. \n Many employees don’t understand the significance of \nvarious security policies and implementations. As men-\ntioned previously, they consider these policies nothing \n FIGURE 1.2 Accessing Gspace using the Firefox browser. 
\n FIGURE 1.3 Gmail Drive in Windows Explorer. \n" }, { "page_number": 46, "text": "Chapter | 1 Building a Secure Organization\n13\n more than an inconvenience. Gaining the support and \nallegiance of employees takes time, but it is time well \nspent. Begin by carefully explaining the reasons behind \nany security implementation. One of the reasons could \nbe ensuring employee productivity, but focus primarily \non the security issues. File sharing using LimeWire and \neMule might keep employees away from work, but they \ncan also open up holes in a firewall. Downloading and \ninstalling unapproved software can install malicious \nsoftware that can infect user systems, causing their com-\nputers to function slowly or not at all. \n Perhaps the most direct way to gain employee sup-\nport is to let employees know that the money needed to \nrespond to attacks and fix problems initiated by users is \nmoney that is then not available for raises and promo-\ntions. Letting employees know that they now have some \n “ skin in the game ” is one way to get them involved in \nsecurity efforts. If a budget is set aside for responding to \nsecurity problems and employees help stay well within \nthe budget, the difference between the money spent and \nthe actual budget could be divided among employees as \na bonus. Not only would employees be more likely to \nspeak up if they notice network or system slowdowns, \nthey would probably be more likely to confront strangers \nwandering through the facility. \n Another mechanism that can be used to gain security \nallies is to provide advice regarding the proper secu-\nrity mechanisms for securing home computers. Though \nsome might not see this as directly benefiting the com-\npany, keep in mind that many employees have corporate \ndata on their home computers. This advice can come \nfrom periodic, live presentations (offer refreshments and \nattendance will be higher) or from a periodic newsletter \nthat is either mailed or emailed to employees’ personal \naddresses. \n The goal of these activities is to encourage employ-\nees to approach management or the security team vol-\nuntarily. When this begins to happen on a regular basis, \nyou will have expanded the capabilities of your security \nteam and created a much more secure organization. \n The security expert Roberta Bragg used to tell a story \nof one of her clients who took this concept to a high \nlevel. The client provided the company mail clerk with a \nWiFi hotspot detector and promised him a free steak din-\nner for every unauthorized wireless access point he could \n FIGURE 1.4 Accessing files in iDisk. \n FIGURE 1.5 iDisk Upload window in Firefox. \n" }, { "page_number": 47, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n14\n find on the premises. The mail clerk was very happy to \nhave the opportunity to earn three free steak dinners. \n F. Identify and Utilize Built-In Security \nFeatures of the Operating System and \nApplications \n Many organizations and systems administrators state \nthat they cannot create a secure organization because \nthey have limited resources and simply do not have the \nfunds to purchase robust security tools. This is a ridicu-\nlous approach to security because all operating systems \nand many applications include security mechanisms \nthat require no organizational resources other than time \nto identify and configure these tools. 
For Microsoft \nWindows operating systems, a terrific resource is the \nonline Microsoft TechNet Library. 33 Under the Solutions \nAccelerators link you can find security guides for all \nrecent Microsoft Windows operating systems. Figure 1.6 \nshows the table of contents for Windows 2008 Server. \n TechNet is a great resource and can provide insight \ninto managing numerous security issues, from Microsoft \nOffice 2007 to security risk management. These docu-\nments can assist in implementing the built-in security fea-\ntures of Microsoft Windows products. Assistance is needed \nin identifying many of these capabilities because they are \noften hidden from view and turned off by default. \n One of the biggest concerns in an organization today \nis data leaks, which are ways that confidential information \ncan leave an organization despite robust perimeter security. \nAs mentioned previously, USB Flash drives are one cause \nof data leaks; another is the recovery of data found in the \nunallocated clusters of a computer’s hard drive. Unallocated \nclusters, or free space , as it is commonly called, is the area \nof a hard drive where the operating system and applications \ndump their artifacts or residual data. Though this data is not \nviewable through a user interface, the data can easily be \nidentified (and sometimes recovered) using a hex editor such \nas WinHex. 34 Figure 1.7 shows the contents of a deleted file \nstored on a floppy disk being displayed by WinHex. \n Should a computer be stolen or donated, it is very \npossible that someone could access the data located in \nunallocated clusters. For this reason, many people strug-\ngle to find an appropriate “ disk-scrubbing ” utility. Many \nsuch commercial utilities exist, but there is one built \ninto Microsoft Windows operating systems. The com-\nmand-line program cipher.exe is designed to display or \nalter the encryption of directories (files) stored on NTFS \npartitions. Few people even know about this command; \neven fewer are familiar with the /w switch. Here is a \ndescription of the switch from the program’s Help file: \n Removes data from available unused disk space on the \nentire volume. If this option is chosen, all other options are \nignored. The directory specified can be anywhere in a local \nvolume. If it is a mount point or points to a directory in \nanother volume, the data on that volume will be removed. \n To use Cipher, click Start | Run and type cmd . \nWhen the cmd.exe window opens, type cipher /w: folder , \nwhere folder is any folder in the volume that you want \nto clean, and then press Enter . Figure 1.8 shows Cipher \nwiping a folder. \n For more on secure file deletion issues, see the \nauthor’s white paper in the SANS reading room, “Secure \nfile deletion: Fact or fiction?” 35 \n Another source of data leaks is the personal and editing \ninformation that can be associated with Microsoft Office \nfiles. In Microsoft Word 2003 you can configure the appli-\ncation to remove personal information on save and to warn \nyou when you are about to print, share, or send a docu-\nment containing tracked changes or comments. \n To access this feature, within Word click Tools | \nOptions and then click the Security tab. Toward the \n FIGURE 1.6 Windows Server 2008 Security Guide Table of Contents. \n 33 Microsoft TechNet Library, http://technet.microsoft.com/en-us/\nlibrary/default.aspx . \n 34 WinHex, www.x-ways.net/winhex/index-m.html . \n 35 “ Secure fi le deletion: Fact or fi ction? 
” www.sans.org/reading_room/\nwhitepapers/incident/631.php (November 8, 2008). \n" }, { "page_number": 48, "text": "Chapter | 1 Building a Secure Organization\n15\n bottom of the security window you will notice the two \noptions described previously. Simply select the options \nyou want to use. Figure 1.9 shows these options. \n Microsoft Office 2007 made this tool more robust and \nmore accessible. A separate tool called Document Inspector \ncan be accessed by clicking the Microsoft Office button, \npointing to Prepare Document , then clicking Inspect \nDocument . Then select the items you want to remove. \n Implementing a strong security posture often begins \nby making the login process more robust. This includes \nincreasing the complexity of the login password. All \npasswords can be cracked, given enough time and \nresources, but the more difficult you make cracking a \npassword, the greater the possibility the asset the pass-\nword protects will stay protected. \n All operating systems have some mechanism to \nincrease the complexity of passwords. In Microsoft \nWindows XP Professional, this can be accomplished by \nclicking Start | Control Panel | Administrative Tools | \nLocal Security Policy . Under Security Settings , expand \n Account Policies and then highlight Password Policy . \nIn the right-hand panel you can enable password com-\nplexity. Once this is enabled, passwords must contain at \nleast three of the four following password groups 36 : \n ● English uppercase characters (A through Z) \n ● English lowercase characters (a through z) \n FIGURE 1.7 WinHex displaying the contents of a deleted Word document. \n FIGURE 1.8 Cipher wiping a folder called Secretstuff. \n FIGURE 1.9 Security options for Microsoft Word 2003. \n 36 “ Users receive a password complexity requirements message that \ndoes not specify character group requirements for a password, ” http://\nsupport.microsoft.com/kb/821425 (November 8, 2008). \n" }, { "page_number": 49, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n16\n ● Numerals (0 through 9) \n ● Nonalphabetic characters (such as !, $, #, %) \n It is important to recognize that all operating systems \nhave embedded tools to assist with security. They often \nrequire a little research to find, but the time spent in \nidentifying them is less than the money spent on pur-\nchasing additional security products or recovering from \na security breach. \n Though not yet used by many corporations, Mac OS \nX has some very robust security features, including File \nVault, which creates an encrypted home folder and the \nability to encrypt virtual memory. Figure 1.10 shows the \nsecurity options for Mac OS X. \n G. Monitor Systems \n Even with the most robust security tools in place, it is \nimportant to monitor your systems. All security prod-\nucts are manmade and can fail or be compromised. As \nwith any other aspect of technology, one should never \nrely on simply one product or tool. Enabling logging on \nyour systems is one way to put your organization in a \nposition to identify problem areas. The problem is, what \nshould be logged? There are some security standards that \ncan help with this determination. One of these standards \nis the Payment Card Industry Data Security Standard \n(PCI DSS). 37 Requirement 10 of the PCI DSS states that \norganizations must “ Track and monitor access to network \nresources and cardholder data. 
” If you simply substitute \n confidential information for the phrase cardholder data, \nthis requirement is an excellent approach to a log man-\nagement program. Requirement 10 is reproduced here: \n Logging mechanisms and the ability to track user activi-\nties are critical. The presence of logs in all environments \nallows thorough tracking and analysis if something does \ngo wrong. Determining the cause of a compromise is very \ndifficult without system activity logs: \n FIGURE 1.10 Security options for Mac OS X. \n 37 PCI DSS, www.pcisecuritystandards.org/ . \n" }, { "page_number": 50, "text": "Chapter | 1 Building a Secure Organization\n17\n 1. Establish a process for linking all access to system \ncomponents (especially access done with administrative \nprivileges such as root) to each individual user. \n 2. Implement automated audit trails for all system \ncomponents to reconstruct the following events: \n ● All individual user accesses to cardholder data \n ● All actions taken by any individual with root or \nadministrative privileges \n ● Access to all audit trails \n ● Invalid logical access attempts \n ● Use of identification and authentication mechanisms \n ● Initialization of the audit logs \n ● Creation and deletion of system-level objects \n 3. Record at least the following audit trail entries for all \nsystem components for each event: \n ● User identification \n ● Type of event \n ● Date and time \n ● Success or failure indication \n ● Origination of event \n ● Identity or name of affected data, system component, \nor resource \n 4. Synchronize all critical system clocks and times. \n 5. Secure audit trails so they cannot be altered: \n ● Limit viewing of audit trails to those with a \njob-related need. \n ● Protect audit trail files from unauthorized \nmodifications. \n ● Promptly back up audit trail files to a centralized log \nserver or media that is difficult to alter. \n ● Copy logs for wireless networks onto a log server on \nthe internal LAN. \n ● Use file integrity monitoring and change detection \nsoftware on logs to ensure that existing log data can-\nnot be changed without generating alerts (although \nnew data being added should not cause an alert). \n 6. Review logs for all system components at least daily. \nLog reviews must include those servers that perform \nsecurity functions like intrusion detection system \n(IDS) and authentication, authorization, and accounting \nprotocol (AAA) servers (for example, RADIUS). \n \n Note: Log harvesting, parsing, and alerting tools may \nbe used to achieve compliance. \n 7. Retain audit trail history for at least one year, with a \nminimum of three months online availability. \n Requirement 6 looks a little overwhelming, since few \norganizations have the time to manually review log files. \nFortunately, there are tools that will collect and parse log \nfiles from a variety of sources. All these tools have the \nability to notify individuals of a particular event. One \nsimple tool is the Kiwi Syslog Daemon 38 for Microsoft \nWindows. Figure 1.11 shows the configuration screen \nfor setting up email alerts in Kiwi. \n Additional log parsing tools include Microsoft’s \nLog Parser 39 and, for Unix, Swatch. 40 Commercial \ntools include Cisco Security Monitoring, Analysis, and \nResponse System (MARS) 41 and GFI EventsManager. 42 \n An even more detailed approach to monitoring your \nsystems is to install a packet-capturing tool on your net-\nwork so you can analyze and capture traffic in real time. 
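The routine log review called for in Requirement 6 also lends itself to simple automation. The following is a toy sketch in Python of the idea; the file name and message format are assumptions for the example, and the free and commercial tools named above do the same job far more thoroughly.

import re
from collections import Counter

# Pattern for a typical failed-authentication line; real log formats vary,
# so treat this expression as a placeholder to adapt.
FAILED_LOGIN = re.compile(r"Failed password for .+ from (\d{1,3}(?:\.\d{1,3}){3})")

def suspicious_sources(log_path="auth.log", threshold=5):
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                counts[match.group(1)] += 1
    # Report only sources with repeated failures; Requirement 10 asks that
    # invalid access attempts be recorded and reviewed
    return {ip: count for ip, count in counts.items() if count >= threshold}

if __name__ == "__main__":
    for ip, count in suspicious_sources().items():
        print(f"ALERT: {count} failed logins from {ip}")

Even a modest script like this, run on a schedule, turns a pile of unread logs into a short list of events worth a human's attention.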
\nOne tool that can be very helpful is Wireshark, which \nis “ an award-winning network protocol analyzer devel-\noped by an international team of networking experts. ” 43 \nWireshark is based on the original packet capture tool, \nEthereal. Analyzing network traffic is not a trivial task \nand requires some training, but it is the perhaps the most \naccurate way to determine what is happening on your \nnetwork. Figure 1.12 shows Wireshark monitoring the \ntraffic on a wireless interface. \n H. Hire a Third Party to Audit Security \n Regardless of how talented your staff is, there is always \nthe possibility that they overlooked something or inad-\nvertently misconfigured a device or setting. For this rea-\nson it is very important to bring in an extra set of “ eyes, \nears, and hands ” to review your organization’s security \nposture. \n Though some IT professionals will become paranoid \nhaving a third party review their work, intelligent staff \nmembers will recognize that a security review by outsid-\ners can be a great learning opportunity. The advantage \nof having a third party review your systems is that the \noutsiders have experience reviewing a wide range of sys-\ntems, applications, and devices in a variety of industries. \nThey will know what works well and what might work \nbut cause problems in the future. They are also more \nlikely to be up to speed on new vulnerabilities and the \nlatest product updates. Why? Because this is all they do. \n 38 Kiwi Syslog Daemon, www.kiwisyslog.com . \n 39 Log Parser 2.2, www.microsoft.com/downloads/details.aspx?Family\nID \u0003 890cd06b-abf8-4c25-91b2-f8d975cf8c07 & displaylang \u0003 en . \n 40 Swatch, http://sourceforge.net/projects/swatch/ . \n 41 Cisco MARS, www.cisco.com/en/US/products/ps6241/ . \n 42 GFI EventsManager, www.gfi .com/eventsmanager/ . \n 43 Wireshark, www.wireshark.org . \n" }, { "page_number": 51, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n18\nto attackers and how secure the system is, should attack-\ners make it past the perimeter defenses. You don’t want to \nhave “ Tootsie Pop security ” — a hard crunchy shell with a \nsoft center. The external review, often called a penetra-\ntion test, can be accomplished in several ways; the first \nis a no knowledge approach, whereby the consultants are \n FIGURE 1.12 The protocol analyzer Wireshark monitoring a wireless interface. \n FIGURE 1.11 Kiwi Syslog Daemon Email Alert Configuration screen. \n They are not encumbered by administrative duties, inter-\nnal politics, and help desk requests. They will be more \nobjective than in-house staff, and they will be in a position \nto make recommendations after their analysis. \n The third-party analysis should involve a two-pronged \napproach: They should identify how the network appears \n" }, { "page_number": 52, "text": "Chapter | 1 Building a Secure Organization\n19\n provided with absolutely no information regarding the \nnetwork and systems prior to their analysis. Though this \nis a very realistic approach, it can be time consuming and \nvery expensive. Using this approach, consultants must \nuse publicly available information to start enumerating \nsystems for testing. This is a realistic approach, but a par-\ntial knowledge analysis is more efficient and less expen-\nsive. 
If provided with a network topology diagram and \na list of registered IP addresses, the third-party review-\ners can complete the review faster and the results can \nbe addressed in a much more timely fashion. Once the \npenetration test is complete, a review of the internal net-\nwork can be initiated. The audit of the internal network \nwill identify open shares, unpatched systems, open ports, \nweak passwords, rogue systems, and many other issues. \n I. Don’t Forget the Basics \n Many organizations spend a great deal of time and \nmoney addressing perimeter defenses and overlook some \nfundamental security mechanisms, as described here. \n Change Default Account Passwords \n Nearly all network devices come preconfigured with \na password/username combination. This combination \nis included with the setup materials and is documented \nin numerous locations. Very often these devices are the \ngateways to the Internet or other internal networks. If \nthese default passwords are not changed upon configu-\nration, it becomes a trivial matter for an attacker to get \ninto these systems. Hackers can find password lists on \nthe Internet, 44 and vendors include default passwords in \ntheir online manuals. For example, Figure 1.13 shows \nthe default username and password for a Netgear router. \n Use Robust Passwords \n With the increased processing power of our computers \nand password-cracking software such as the Passware \nproducts 45 and AccessData’s Password Recovery Toolkit, 46 \ncracking passwords is fairly simple and straightfor-\nward. For this reason it is extremely important to cre-\nate robust passwords. Complex passwords are hard for \nusers to remember, though, so it is a challenge to cre-\nate passwords that can be remembered without writ-\ning them down. One solution is to use the first letter of \neach word in a phrase, such as “ I l ike t o e at i mported \n c heese f rom H olland. ” This becomes IlteicfH , which is \nan eight-character password using upper- and lowercase \nletters. This can be made even more complex by substi-\ntuting an exclamation point for the letter I and substitut-\ning the number 3 for the letter e , so that the password \nbecomes !lt3icfH. This is a fairly robust password that \ncan be remembered easily. \n Close Unnecessary Ports \n Ports on a computer are logical access points for com-\nmunication over a network. Knowing what ports are \nopen on your computers will allow you to understand the \ntypes of access points that exist. The well-known port \nnumbers are 0 through 1023. Some easily recognized \nports and what they are used for are listed here: \n ● Port 21: FTP \n ● Port 23: Telnet \n ● Port 25: SMTP \n ● Port 53: DNS \n ● Port 80: HTTP \n ● Port 110: POP \n ● Port 119: NNTP \n Since open ports that are not needed can be an \nentrance into your systems, and open ports that are open \nunexpectedly could be a sign of malicious software, \nidentifying open ports is an important security pro-\ncess. There are several tools that will allow you to iden-\ntify open ports. The built-in command-line tool netstat \n FIGURE 1.13 Default username and password for Netgear router. \n 44 www.phenoelit-us.org/dpl/dpl.html . \n 45 Passware, www.lostpassword.com . \n 46 Password Recovery Toolkit, www.accessdata.com/decryptionTool.\nhtml . 
\n" }, { "page_number": 53, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n20\n will allow you to identify open ports and process IDs by \nusing the following switches: \n -a Displays all connections and listening ports \n -n Displays addresses and port numbers in numerical \nform \n -o Displays the owning process ID associated with each \nconnection \n ( Note: In Unix, netstat is also available but utilizes the \nfollowing switches: -atvp. ) \n Other tools that can prove helpful are ActivePorts, 47 \na graphical user interface (GUI) tool that allows you \nto export the results in delimited format, and Fport, 48 a \npopular command-line tool. Sample results are shown in \n Figure 1.14 . \n J. Patch, Patch, Patch \n Nearly all operating systems have a mechanism for auto-\nmatically checking for updates. This notification system \nshould be turned on. Though there is some debate as to \nwhether updates should be installed automatically, sys-\ntems administrators should at least be notified of updates. \nThey might not want to have them installed automati-\ncally, since patches and updates have been known to \ncause more problems than they solve. However, adminis-\ntrators should not wait too long before installing updates, \nbecause this can unnecessarily expose systems to attack. \nA simple tool that can help keep track of system updates \nis the Microsoft Baseline Security Analyzer, 49 which also \nwill examine other fundamental security configurations. \n Use Administrator Accounts for \nAdministrative Tasks \n A common security vulnerability is created when sys-\ntems administrators conduct administrative or personal \ntasks while logged into their computers with adminis-\ntrator rights. Tasks such as checking email, surfing the \nInternet, and testing questionable software can expose \nthe computer to malicious software. This means that \nthe malicious software can run with administrator privi-\nleges, which can create serious problems. Administrators \nshould log into their systems using a standard user account \nto prevent malicious software from gaining control of \ntheir computers. \n Restrict Physical Access \n With a focus on technology, it is often easy to overlook \nnontechnical security mechanisms. If an intruder can \ngain physical access to a server or other infrastructure \nasset, the intruder will own the organization. Critical \nsystems should be kept in secure areas. A secure area \nis one that provides the ability to control access to only \nthose who need access to the systems as part of their job \n FIGURE 1.14 Sample output from Fport. \n 47 ActivePorts, www.softpile.com . \n 48 Fport, www.foundstone.com/us/resources/proddesc/fport.htm . \n 49 Microsoft Baseline Security Analyzer, http://technet.microsoft.com/\nen-us/security/cc184923.aspx . \n" }, { "page_number": 54, "text": "Chapter | 1 Building a Secure Organization\n21\n responsibilities. A room that is kept locked using a key \nthat is only provided to the systems administrator, with \nthe only duplicate stored in a safe in the office manag-\ner’s office, is a good start. The room should not have any \nwindows that can open. In addition, the room should \nhave no labels or signs identifying it as a server room or \nnetwork operations center. The equipment should not be \nstored in a closet where other employees, custodians, or \ncontractors can gain access. The validity of your secu-\nrity mechanisms should be reviewed during a third-party \nvulnerability assessment. 
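Returning briefly to the open-ports discussion above, the same well-known ports can also be checked programmatically when netstat or Fport is not at hand. The following is a minimal sketch in Python that simply attempts a TCP connection to each port; the host and the port list are assumptions for the example, and a connect test of this kind sees only TCP listeners.

import socket

# Well-known ports listed above and the services usually associated with them
WELL_KNOWN_PORTS = {21: "FTP", 23: "Telnet", 25: "SMTP", 53: "DNS",
                    80: "HTTP", 110: "POP", 119: "NNTP"}

def open_ports(host="127.0.0.1", timeout=0.5):
    found = []
    for port in WELL_KNOWN_PORTS:
        # connect_ex returns 0 when the TCP connection succeeds
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                found.append(port)
    return found

if __name__ == "__main__":
    for port in open_ports():
        print(f"Port {port} ({WELL_KNOWN_PORTS[port]}) is accepting connections")

Only ports that are actually needed should be left open; anything unexpected in such a listing deserves investigation.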
\n Don’t Forget Paper! \n With the advent of advanced technology, people have for-\ngotten how information was stolen in the past — on paper. \nManaging paper documents is fairly straightforward. \nLocking file cabinets should be used — and locked consist-\nently. Extra copies of proprietary documents, document \ndrafts, and expired internal communications are some of \nthe materials that should be shredded. A policy should \nbe created to tell employees what they should and should \nnot do with printed documents. The following example \nof the theft of trade secrets underscores the importance \nof protecting paper documents: \n A company surveillance camera caught Coca-Cola \nemployee Joya Williams at her desk looking through files \nand “ stuffing documents into bags, ” Nahmias and FBI offi-\ncials said. Then in June, an undercover FBI agent met at the \nAtlanta airport with another of the defendants, handing him \n$30,000 in a yellow Girl Scout Cookie box in exchange for \nan Armani bag containing confidential Coca-Cola docu-\nments and a sample of a product the company was develop-\ning, officials said. 50 \n The steps to achieving security mentioned in this chapter \nare only the beginning. They should provide some insight \ninto where to start building a secure organization. \n 50 3 accused in theft of Coke secrets, ” Washington Post , July 26, \n2006, \n www.washingtonpost.com/wp-dyn/content/article/2006/07/05/\nAR2006070501717.html (November 8, 2008). \n" }, { "page_number": 55, "text": "This page intentionally left blank\n" }, { "page_number": 56, "text": "23\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n A Cryptography Primer \n Scott R. Ellis \n RGL Forensics \n Chapter 2 \n Man is a warrior creature, a species that ritually engages \nin a type of warfare where the combat can range from \nthe subtlety of inflicting economic damage, or achiev-\ning economic superiority and advantage, to moving \nsomeone’s chair a few inches from sitting distance or \nputting rocks in their shoes, to the heinousness of the \noutright killing of our opponents. As such, it is in our \nnature to want to prevent others who would do us harm \nfrom intercepting private communications (which could \nbe about them!). Perhaps nothing so perfectly illustrates \nthis fact as the art of cryptography. It is, in its purpose, \nan art form entirely devoted to the methods whereby we \ncan prevent information from falling into the hands of \nthose who would use it against us — our enemies. \n Since the beginning of sentient language, cryptogra-\nphy has been a part of communication. It is as old as lan-\nguage itself. In fact, one could make the argument that \nthe desire and ability to encrypt communication, to alter \na missive in such a way so that only the intended recipi-\nent may understand it, is an innate ability hardwired into \nthe human genome. Aside from the necessity to commu-\nnicate, it could very well be what led to the development \nof language itself. Over time, languages and dialects \nevolve, as we can see with Spanish, French, Portuguese, \nand Italian — all “ Latin ” languages. People who speak \nFrench have a great deal of trouble understanding people \nwho speak Spanish, and vice versa. The profundity of \nLatin cognates in these languages is undisputed, but gen-\nerally speaking, the two languages are so far removed \nthat they are not dialects, they are separate languages. \nBut why is this? 
Certain abilities, such as walking, are \nhardwired into our nervous systems. Other abilities, such \nas language, are not. \n So why isn’t language hardwired into our nervous \nsystem, as it is with bees, who are born knowing how \nto tell another bee how far away a flower is, as well \nas the quantity of pollen and whether there is danger \npresent? Why don’t we humans all speak the exact same \n language? Perhaps we do, to a degree, but we choose not \nto do so. The reason is undoubtedly because humans, \nunlike bees, understand that knowledge is power, and \nknowledge is communicated via spoken and written \nwords. Plus we weren’t born with giant stingers with \nwhich to simply sting people we don’t like. With the \ndevelopment of evolving languages innate in our genetic \nwiring, the inception of cryptography was inevitable. \n In essence, computer-based cryptography is the art of \ncreating a form of communication that embraces the fol-\nlowing precepts: \n ● Can be readily understood by the intended recipients \n ● Cannot be understood by unintended recipients \n ● Can be adapted and changed easily with relatively \nsmall modifications, such as a changed passphrase or \nword \n Any artificially created lexicon, such as the Pig Latin \nof children, pictograph codes, gang-speak, or corpo-\nrate lingo — and even the names of music albums, such \nas Four Flicks — are all manners of cryptography where \nreal text, sometimes not so ciphered, is hidden in what \nappears to be plain text. They are attempts at hidden \ncommunications. \n 1. WHAT IS CRYPTOGRAPHY? WHAT IS \nENCRYPTION? \n Ask any ancient Egyptian and he’ll undoubtedly define \n cryptography as the practice of burying their dead so \nthat they cannot be found again. They were very good \nat it; thousands of years later, new crypts are still being \ndiscovered. The Greek root krypt literally means “ a hid-\nden place, ” and as such it is an appropriate base for any \nterm involving cryptology. According to the Online \nEtymology Dictionary, crypto - as a prefix, mean-\ning “ concealed, secret, ” has been used since 1760, and \n" }, { "page_number": 57, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n24\n from the Greek graphikos , “ of or for writing, belonging \nto drawing, picturesque. ” 1 Together, crypto \u0002 graphy \nwould then mean “ hiding place for ideas, sounds, pic-\ntures, or words. ” Graph , technically from its Greek root, \nis “ the art of writing. ” Encryption , in contrast, merely \nmeans the act of carrying out some aspect of cryptogra-\nphy. Cryptology , with its - ology ending, is the study of \ncryptography. Encryption is subsumed by cryptography. \n How Is Cryptography Done? \n For most information technology occupations, knowl-\nedge of cryptography is a very small part of a broader \nskill set, and is generally limited to relevant applica-\ntion. The argument could be made that this is why the \nInternet is so extraordinarily plagued with security \nbreaches. The majority of IT administrators, software \nprogrammers, and hardware developers are barely cog-\nnizant of the power of true cryptography. Overburdened \nwith battling the plague that they inherited, they can’t \nafford to devote the time or resources needed to imple-\nment a truly secure strategy. 
And the reason, as we shall come to see, is that, as good as cryptographers can be — well, just as it is said that everyone has an evil twin somewhere in the world, for every cryptographer there is a decryptographer working just as diligently to decipher a new encryption algorithm.
Traditionally, cryptography has consisted of any means possible whereby communications may be encrypted and transmitted. This could be as simple as using a language with which the opposition is not familiar. Who hasn’t been somewhere where everyone around you was speaking a language you didn’t understand? There are thousands of languages in the world; nobody can know them all. As was shown in World War II, when Allied Forces used Navajo as a means of communicating freely, some languages are so obscure that an entire nation may not contain one person who speaks them! All true cryptography is composed of three parts: a cipher, an original message, and the resultant encryption. The cipher is the method of encryption used. Original messages are referred to as plain text or as clear text. A message that is transmitted without encryption is said to be sent “in the clear.” The resultant message is called ciphertext or cryptogram. This section begins with a simple review of cryptography procedures and carries them through, each section building on the last, to illustrate the principles of cryptography.
2. FAMOUS CRYPTOGRAPHIC DEVICES
The past few hundred years of technical development and advances have brought greater and greater means to decrypt, encode, and transmit information. With the advent of the most modern warfare techniques and the increase in communication and ease of reception, the need for encryption has never been greater.
World War II publicized and popularized cryptography in modern culture. The Allied Forces’ ability to capture, decrypt, and intercept Axis communications is said to have hastened WWII’s end by several years. Here we take a quick look at some famous cryptographic devices from that era.
The Lorenz Cipher
The Lorenz cipher machine was an industrial-strength ciphering machine used in teleprinter circuits by the Germans during WWII. Not to be confused with its smaller cousin, the Enigma machine, the Lorenz cipher could possibly be best compared to a virtual private network tunnel for a telegraph line — only it wasn’t sending Morse code; it was using a code not unlike a sort of American Standard Code for Information Interchange (ASCII) format. A granddaddy of sorts, called Baudot code, was used to send alphanumeric communications across telegraph lines. Each character was represented by a series of 5 bits.
The Lorenz cipher is often confused with the famous Enigma, but unlike the Enigma (which was a portable field unit), the Lorenz cipher could receive typed messages, encrypt them, and send them to another distant Lorenz cipher, which would then decrypt the signal. It used a pseudorandom cipher XOR’d with plaintext. The machine would be inserted inline as an attachment to a Lorenz teleprinter. Figure 2.1 is a rendered drawing from a photograph of a Lorenz cipher machine.
Enigma
The Enigma machine was a field unit used in WWII by German field agents to encrypt and decrypt messages and communications. Similar to the Feistel function of the 1970s, the Enigma machine was one of the first mechanized methods of encrypting text using an iterative cipher.
It employed a series of rotors that, with some \nelectricity, a light bulb, and a reflector, allowed the oper-\nator to either encrypt or decrypt a message. The origi-\nnal position of the rotors, set with each encryption and \n 1 www.etymonline.com . \n" }, { "page_number": 58, "text": "Chapter | 2 A Cryptography Primer\n25\n based on a prearranged pattern that in turn was based on \nthe calendar, allowed the machine to be used, even if it \nwas compromised. \n When the Enigma was in use, with each subsequent \nkey press, the rotors would change in alignment from \ntheir set positions in such a way that a different letter \nwas produced each time. The operator, with a message \nin hand, would enter each character into the machine by \npressing a typewriter-like key. The rotors would align, \nand a letter would then illuminate, telling the operator \nwhat the letter really was. Likewise, when enciphering, \nthe operator would press the key and the illuminated \nletter would be the cipher text. The continually chang-\ning internal flow of electricity that caused the rotors to \nchange was not random, but it did create a polyalphabetic \ncipher that could be different each time it was used. \n 3. CIPHERS \n Cryptography is built on one overarching premise: the \nneed for a cipher that can reliably, and portably, be used to \nencrypt text so that, through any means of cryptanalysis —\n differential, deductive, algebraic, or the like — the cipher-\ntext cannot be undone with any available technology. \nThroughout the centuries, there have been many attempts \nto create simple ciphers that can achieve this goal. With \nthe exception of the One Time Pad, which is not particu-\nlarly portable, success has been limited. \n Let’s look at a few of these methods now. \n The Substitution Cipher \n In this method, each letter of the message is replaced \nwith a single character. See Table 2.1 for an example \nof a substitution cipher. Because some letters appear \nmore often and certain words appear more often than \nothers, some ciphers are extremely easy to decrypt, and \nsome can be deciphered at a glance by more practiced \ncryptologists. \n By simply understanding probability and with some \napplied statistics, certain metadata about a language \ncan be derived and used to decrypt any simple, one-\nfor-one substitution cipher. Decryption methods often \nrely on understanding the context of the ciphertext . What \nwas encrypted — business communication? Spreadsheets? \nTechnical data? Coordinates? For example, using a hex edi-\ntor and an access database to conduct some statistics, we \ncan use the information in Table 2.2 to gain highly special-\nized knowledge about the data in Chapter 19, “ Computer \nForensics, ” by Scott R. Ellis, in this book. A long chapter \nat nearly 25,000 words, it provides a sufficiently large sta-\ntistical pool to draw some meaningful analyses. \n FIGURE 2.1 The Lorenz machine was set inline with a teletype to produce encrypted telegraphic signals. \n" }, { "page_number": 59, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n26\n Table 2.3 gives additional data about the occurrence \nof specific words in Chapter 19. Note that because it is a \ntechnical text, words such as computer, files, email, and \n drive emerge as leaders. Analysis of these leaders can \nreveal individual and paired alpha frequencies. Being \narmed with knowledge about the type of communication \ncan be very beneficial in decrypting it. 
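The substitution itself takes only a few lines of code. The following minimal sketch in Python uses the cipher alphabet of Table 2.1; the sample message is an arbitrary choice for illustration.

import string

PLAIN_ALPHABET  = string.ascii_uppercase
CIPHER_ALPHABET = "OCQWBXYEILZADRJSPFGKHNTUMV"  # the substitution row of Table 2.1

TO_CIPHER = str.maketrans(PLAIN_ALPHABET, CIPHER_ALPHABET)
TO_PLAIN  = str.maketrans(CIPHER_ALPHABET, PLAIN_ALPHABET)

def encrypt(message: str) -> str:
    # Characters outside A-Z (spaces, punctuation) pass through unchanged
    return message.upper().translate(TO_CIPHER)

def decrypt(ciphertext: str) -> str:
    return ciphertext.upper().translate(TO_PLAIN)

if __name__ == "__main__":
    secret = encrypt("ATTACK AT DAWN")
    print(secret)           # OKKOQZ OK WOTR
    print(decrypt(secret))  # ATTACK AT DAWN

Precisely because the mapping never changes, the letter and word statistics discussed in this section survive encryption intact, which is what makes frequency analysis so effective against it.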
\n Further information about types of data being \nencrypted include word counts by length of the word. \n Table 2.4 contains such a list for Chapter 19. This \ninformation can be used to begin to piece together use-\nful and meaningful short sentences, which can provide \ncues to longer and more complex structures. It is exactly \nthis sort of activity that good cryptography attempts to \ndefeat. \n Were it encrypted using a simple substitution cipher, \na good start to deciphering Chapter 19 could be made \nusing the information we’ve gathered. As a learning \nexercise, game, or logic puzzle, substitution ciphers are \nquite useful. Some substitution ciphers that are more \nelaborate can be just as difficult to crack. Ultimately, \nthough, the weakness behind a substitution cipher is the \nfact that the ciphertext remains a one-to-one, directly \ncorresponding substitution; ultimately anyone with a pen \n TABLE 2.1 A simple substitution cipher. Letters are numbered by their order in the alphabet, to provide a \nnumeric reference key. To encrypt a message, the letters are replaced, or substituted, by the numbers. This is a \nparticularly easy cipher to reverse. \n A \n B \n C \n D \n E \n F G H \n I \n J \n K \n L \n M \n N \n O \n P \n Q \n R \n S \n T \n U \n V \n W \n X \n Y \n Z \n 1 \n 2 \n 3 \n 4 \n 5 \n 6 \n 7 \n 8 \n 9 \n 10 \n 11 \n 12 \n 13 \n 14 \n 15 \n 16 \n 17 \n 18 \n 19 \n 20 \n 21 \n 22 \n 23 24 25 26 \n O \n C \n Q \n W \n B \n X \n Y \n E \n I \n L \n Z \n A \n D \n R \n J \n S \n P \n F \n G \n K \n H \n N \n T \n U \n M \n V \n 15 3 \n 17 23 \n 2 24 25 5 \n 9 \n 12 \n 26 \n 1 \n 4 \n 18 \n 10 \n 19 \n 16 \n 6 \n 7 \n 11 \n 8 \n 14 \n 20 21 13 22 \nand paper and a large enough sample of the ciphertext \ncan defeat it. Using a computer, deciphering a simple \nsubstitution cipher becomes child’s play. \n The Shift Cipher \n Also known as the Caesar cipher, the shift cipher is \none that anyone can readily understand and remember \nfor decoding. It is a form of the substitution cipher. By \nshifting the alphabet a few positions in either direction, \na simple sentence can become unreadable to casual \ninspection. Example 2.1 is an example of such a shift. \n Interestingly, for cryptogram word games, the spaces \nare always included. Often puzzles use numbers instead of \nletters for the substitution. Removing the spaces in this par-\nticular example can make the ciphertext somewhat more \nsecure. The possibility for multiple solutions becomes an \nissue; any number of words might fit the pattern. \n Today many software tools are available to quickly \nand easily decode most cryptograms (at least, those that \nare not written in a dead language). You can have some \nfun with these tools; for example, the name Scott Ellis, \nwhen decrypted, turns into Still Books. The name of \na friend of the author’s decrypts to “ His Sinless. ” It is \napparent, then, that smaller-sample simple substitution \nciphers can have more than one solution. \n Much has been written and much has been said about \nfrequency analysis; it is considered the “ end-all and be-\nall ” with respect to cipher decryption. This is not to be \nconfused with cipher breaking, which is a modern attack \nagainst the actual cryptographic algorithms themselves. \nHowever, to think of simply plugging in some numbers \ngenerated from a Google search is a bit na ï ve. The fre-\nquency chart in Table 2.5 is commonplace on the Web. 
\n It is beyond the scope of this chapter to delve into the \naccuracy of the table, but suffice it to say that our own \nanalysis of Chapter 19’s 118,000 characters, a technical \ntext, yielded a much different result; see Table 2.6 . \nPerhaps it is the significantly larger sample and the fact \n TABLE 2.2 Statistical data of interest in encryption. \nAn analysis of a selection of a manuscript (in this \ncase, the preedited version of Chapter 19 of this \nbook) can provide insight into the reasons that \ngood ciphers need to be developed. \n Character Analysis \n Count \n Number of distinct alphanumeric \ncombinations \n 1958 \n Distinct characters \n 68 \n Number of four-letter words \n 984 \n Number of five-letter words \n 1375 \n" }, { "page_number": 60, "text": "Chapter | 2 A Cryptography Primer\n27\nmust take into consideration spacing and word lengths \nwhen considering whether or not a string matches a \nword. It stands to reason, then, that the formulation of \nthe cipher, where a substitution that is based partially \non frequency similarities and with a whole lot of obfus-\ncation so that when messages are decrypted they have \nambiguous or multiple meanings, would be desirable \n TABLE 2.4 Leaders by word length in the preedited \nmanuscript for Chapter 19. The context of the clear \ntext can make the cipher less secure. There are, \nafter all, only a finite number of words. Fewer of \nthem are long. \n Words Field \n Number of Dupes Word Length \n XOriginalArrivalTime: \n 2 \n 21 \n interpretations \n 2 \n 15 \n XOriginatingIP: \n 2 \n 15 \n electronically \n 4 \n 14 \n investigations \n 5 \n 14 \n interpretation \n 6 \n 14 \n reconstructing \n 3 \n 14 \n irreproducible \n 2 \n 14 \n professionally \n 2 \n 14 \n inexperienced \n 2 \n 13 \n Demonstrative \n 2 \n 13 \n XAnalysisOut: \n 8 \n 13 \n Steganography \n 7 \n 13 \n Understanding \n 8 \n 13 \n certification \n 2 \n 13 \n circumstances \n 8 \n 13 \n unrecoverable \n 4 \n 13 \n investigation \n 15 \n 13 \n automatically \n 2 \n 13 \n admissibility \n 2 \n 13 \n XProcessedBy: \n 2 \n 13 \n administrator \n 4 \n 13 \n determination \n 3 \n 13 \n investigative \n 3 \n 13 \n practitioners \n 2 \n 13 \n preponderance \n 2 \n 13 \n intentionally \n 2 \n 13 \n consideration \n 2 \n 13 \n Interestingly \n 2 \n 13 \n that it is a technical text that makes the results different \nafter the top two. Additionally, where computers are con-\ncerned, an actual frequency analysis would take into con-\nsideration all ASCII characters, as shown in Table 2.6 . \n Frequency analysis is not difficult; once all the letters \nof a text are pulled into a database program, it is fairly \nstraightforward to do a count of all the duplicate values. \nThe snippet of code in Example 2.2 demonstrates one \nway whereby text can be transformed into a single col-\numn and imported into a database. \n The cryptograms that use formatting (every word \nbecomes the same length) are considerably more diffi-\ncult for basic online decryption programs to crack. They \n TABLE 2.3 Five-letter word recurrences in Chapter \n19: A glimpse of the leading five-letter words \nfound in the preedited manuscript. Once unique \nletter groupings have been identified, substitution, \noften by trial and error, can result in a meaningful \nreconstruction that allows the entire cipher to be \nrevealed. 
\n Words Field \n Number of Recurrences \n files \n 125 \n drive \n 75 \n there \n 67 \n email \n 46 \n these \n 43 \n other \n 42 \n about \n 41 \n where \n 36 \n would \n 33 \n every \n 31 \n court \n 30 \n their \n 30 \n first \n 28 \n Using \n 28 \n which \n 24 \n could \n 22 \n table \n 22 \n After \n 21 \n image \n 21 \n Don’t \n 19 \n tools \n 19 \n being \n 18 \n entry \n 18 \n" }, { "page_number": 61, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n28\n TABLE 2.5 “ In a random sampling of 1000 letters, ” \nthis pattern emerges. \n Letter \n Frequency \n E \n 130 \n T \n 93 \n N \n 78 \n R \n 77 \n I \n 74 \n O \n 74 \n A \n 73 \n S \n 63 \n D \n 44 \n H \n 35 \n L \n 35 \n C \n 30 \n F \n 28 \n P \n 27 \n U \n 27 \n M \n 25 \n Y \n 19 \n G \n 16 \n W \n 16 \n V \n 13 \n B \n 9 \n X \n 5 \n K \n 3 \n Q \n 3 \n J \n 2 \n Z \n 1 \n Total \n 1000 \n TABLE 2.6 Using MS Access to perform some \nfrequency analysis of Chapter 19 in this book. \nCharacters with fewer repetitions than z were \nexcluded from the return. Character frequency \nanalysis of different types of communications yield \nslightly different results. \n Chapter 19 Letters \n Frequency \n e \n 14,467 \n t \n 10,945 \n a \n 9239 \n i \n 8385 \n o \n 7962 \n s \n 7681 \n n \n 7342 \n r \n 6872 \n h \n 4882 \n l \n 4646 \n d \n 4104 \n c \n 4066 \n u \n 2941 \n m \n 2929 \n f \n 2759 \n p \n 2402 \n y \n 2155 \n g \n 1902 \n w \n 1881 \n b \n 1622 \n v \n 1391 \n . \n 1334 \n , \n 1110 \n k \n 698 \n 0 \n 490 \n x \n 490 \n q \n 166 \n 7 \n 160 \n * \n 149 \n 5 \n 147 \n ) \n 147 \n ( \n 146 \n j \n 145 \n 3 \n 142 \n Example 2.1 A sample cryptogram. Try \nthis out: \n Gv Vw, Dtwvg? \n Hint: Caesar said it, and it is Latin. 2 \n 2 Et tu, Brute? \n" }, { "page_number": 62, "text": "Chapter | 2 A Cryptography Primer\n29\n TABLE 2.6 (Continued) \n Chapter 19 Letters \n Frequency \n 6 \n 140 \n Æ \n 134 \n ò \n 134 \n ô \n 129 \n ö \n 129 \n 4 \n 119 \n z \n 116 \n Total \n 116,798 \n for simple ciphers. However, this would only be true \nfor very short and very obscure messages that could be \ncode words to decrypt other messages or could simply \nbe sent to misdirect the opponent. The amount of cipher-\ntext needed to successfully break a cipher is called unic-\nity distance. Ciphers with small unicity distances are \nweaker than those with large ones. \n Ultimately, substitution ciphers are vulnerable to \neither word-pattern analysis, letter-frequency analysis, or \nsome combination of both. Where numerical information \nis encrypted, tools such as Benford’s Law can be used \nto elicit patterns of numbers that should be occurring. \nForensic techniques incorporate such tools to uncover \naccounting fraud. So, though this particular cipher is a \nchild’s game, it is useful in that it is an underlying prin-\nciple of cryptography and should be well understood \nbefore continuing. The primary purpose of discussing it \nhere is as an introduction to ciphers. \n Further topics of interest and places to find informa-\ntion involving substitution ciphers are the chi-square \nstatistic, Edgar Allan Poe, Sherlock Holmes, Benford’s \nLaw, Google, and Wikipedia. \n The Polyalphabetic Cipher \n The previous section clearly demonstrated that though \nthe substitution cipher is fun and easy, it is also vulner-\nable and weak. It is especially susceptible to frequency \nanalysis. 
Given a large enough sample, a cipher can easily be broken by mapping the frequency of the letters in the ciphertext to the frequency of letters in the language or dialect of the ciphertext (if it is known). To make ciphers more difficult to crack, Blaise de Vigenère from the 16th-century court of Henry III of France proposed a polyalphabetic substitution. In this cipher, instead of a one-to-one relationship, there is a one-to-many. A single letter can have multiple substitutes. The Vigenère solution was the first known cipher to use a keyword.
It works like this: First, a tableau is developed, as in Table 2.7. This tableau is a series of shift ciphers. In fact, since there can be only 26 additive shift ciphers, it is all of them.
In Table 2.7, a table in combination with a keyword is used to create the cipher. For example, if we choose the keyword rockerrooks, overlay it over the plaintext, and cross-index it to Table 2.7, we can produce the ciphertext. In this example, the top row is used to look up the plaintext, and the leftmost column is used to reference the keyword.
For example, we lay the word rockerrooks over the sentence, “Ask not what your country can do for you.” Line 1 is the keyword, line 2 is the plain text, and line 3 is the ciphertext.
Keyword:    ROC KER ROOK SROC KERROOK SRO CK ERR OOK
Plaintext:  ASK NOT WHAT YOUR COUNTRY CAN DO FOR YOU
Ciphertext: RGM XSK NVOD QFIT MSLEHFI URB FY JFI MCE
The similarity of this tableau to a mathematical table like the one in Table 2.8 becomes apparent. Just think letters instead of numbers and it becomes clear how this works. The top row is used to “look up” a letter from the plaintext, the leftmost column is used to locate the overlaying keyword letter, and where the column and the row intersect is the ciphertext.
This similarity is, in fact, the weakness of the cipher. Through some creative “factoring,” the length of the keyword can be determined. Since the tableau is, in practice, a series of shift ciphers, the length of the keyword determines how many ciphers are used. The keyword rockerrooks, with only six distinct letters, uses only six ciphers. Regardless, for nearly 300 years many people believed the cipher to be unbreakable.
Example 2.2
Sub Letters2column()
    ' Rewrites the active Word document so that each character appears on its
    ' own line, producing a single column of letters ready for import
    Dim bytText() As Byte
    Dim bytNew() As Byte
    Dim lngCount As Long
    With ActiveDocument.Content
        bytText = .Text
        ReDim bytNew((((UBound(bytText()) + 1) * 2) - 5))
        For lngCount = 0 To (UBound(bytText()) - 2) Step 2
            bytNew((lngCount * 2)) = bytText(lngCount)
            bytNew(((lngCount * 2) + 2)) = 13
        Next lngCount
        .Text = bytNew()
    End With
End Sub
The Kasiski/Kerckhoff Method
Now let’s look at Kerckhoff’s principle — “only secrecy of the key provides security” (not to be confused with Kirchhoff’s law, a totally different man and rule). In the 19th century, Auguste Kerckhoff said that essentially, a system should still be secure, even when everyone knows everything about the system (except the password). Basically, his feeling was that if more than one person knows something, it’s no longer a secret.
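The tableau lookup in Table 2.7 amounts to adding the position of the keyword letter to the position of the plaintext letter, modulo 26. A minimal sketch in Python, reusing the keyword and plaintext of the example above (the function name and structure are simply illustrative):

import string
from itertools import cycle

ALPHABET = string.ascii_uppercase

def vigenere(text: str, keyword: str, decrypt: bool = False) -> str:
    out = []
    key = cycle(keyword.upper())
    for ch in text.upper():
        if ch not in ALPHABET:
            out.append(ch)            # spaces and punctuation pass through
            continue                   # without consuming a keyword letter
        shift = ALPHABET.index(next(key))
        if decrypt:
            shift = -shift
        out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
    return "".join(out)

if __name__ == "__main__":
    cipher = vigenere("ASK NOT WHAT YOUR COUNTRY CAN DO FOR YOU", "ROCKERROOKS")
    print(cipher)   # RGM XSK NVOD QFIT MSLEHFI URB FY JFI MCE
    print(vigenere(cipher, "ROCKERROOKS", decrypt=True))

Because the keyword repeats, the ciphertext is really an interleaving of a handful of shift ciphers, which is exactly the weakness that the method described next exploits.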
\nThroughout modern cryptography, the inner workings \nof cryptographic techniques have been well known and \npublished. Creating a portable, secure, unbreakable code \nis easy if nobody knows how it works. The problem lies \nin the fact that we people just can’t keep a secret! \n TABLE 2.7 Vigen è re’s tableau arranges all of the shift ciphers in a single table. It then implements a keyword to \ncreate a more complex cipher than the simple substitution or shift ciphers. The number of spurious keys , that \nis, bogus decryptions that result from attempting to decrypt a polyalphabetic encryption, is greater than those \ncreated during the decryption of a single shift cipher. \n Letter A \n B \n C \n D \n E \n F \n G \n H \n I \n J \n K \n L \n M \n N \n O P \n Q \n R \n S \n T \n U \n V \n W \n X \n Y Z \n A \n A \n B \n C \n D \n E \n F \n G \n H \n I \n J \n K \n L \n M \n N \n O \n P \n Q \n R \n S \n T \n U \n V \n W \n X \n Y \n Z \n B \n B \n C \n D \n E \n F \n G \n H \n I \n J \n K \n L \n M \n N \n O \n P \n Q \n R \n S \n T \n U \n V \n W X \n Y \n Z \n A \n C \n C \n D \n E \n F \n G \n H \n I \n J \n K \n L \n M N \n O \n P \n Q \n R \n S \n T \n U \n V \n W \n X \n Y \n Z \n A B \n D \n D \n E \n F \n G \n H \n I \n J \n K \n L \n M \n N \n O \n P \n Q \n R \n S \n T \n U \n V \n W X \n Y \n Z \n A \n B \n C \n E \n E \n F \n G \n H \n I \n J \n K \n L \n M N \n O P \n Q \n R \n S \n T \n U \n V \n W \n X \n Y \n Z \n A \n B \n C D \n F \n F \n G \n H \n I \n J \n K \n L \n M \n N \n O \n P \n Q \n R \n S \n T \n U \n V \n W \n X \n Y \n Z \n A \n B \n C \n D E \n G \n G \n H \n I \n J \n K \n L \n M \n N \n O P \n Q R \n S \n T \n U \n V \n W \n X \n Y \n Z \n A \n B \n C \n D \n E \n F \n H \n H \n I \n J \n K \n L \n M \n N \n O \n P \n Q \n R \n S \n T \n U \n V \n W \n X \n Y \n Z \n A \n B \n C \n D \n E \n F \n G \n I \n I \n J \n K \n L \n M \n N \n O \n P \n Q R \n S \n T \n U \n V \n W X \n Y \n Z \n A \n B \n C \n D \n E \n F \n G H \n J \n J \n K \n L \n M \n N \n O \n P \n Q \n R \n S \n T \n U \n V \n W \n X \n Y \n Z \n A \n B \n C \n D \n E \n F \n G \n H I \n K \n K \n L \n M \n N \n O \n P \n Q \n R \n S \n T \n U \n V \n W \n X \n Y \n Z \n A \n B \n C \n D \n E \n F \n G \n H \n I \n J \n L \n L \n M N \n O \n P \n Q \n R \n S \n T \n U \n V \n W \n X \n Y \n Z \n A \n B \n C \n D \n E \n F \n G \n H \n I \n J \n K \n M \n M \n N \n O \n P \n Q \n R \n S \n T \n U \n V \n W X \n Y \n Z \n A \n B \n C \n D \n E \n F \n G \n H \n I \n J \n K \n L \n N \n N \n O P \n Q \n R \n S \n T \n U \n V \n W \n X \n Y \n Z \n A \n B \n C \n D \n E \n F \n G \n H \n I \n J \n K \n L \n M \n O \n O \n P \n Q \n R \n S \n T \n U \n V \n W X \n Y \n Z \n A \n B \n C \n D \n E \n F \n G \n H \n I \n J \n K \n L \n M N \n P \n P \n Q R \n S \n T \n U \n V \n W \n X \n Y \n Z \n A \n B \n C \n D \n E \n F \n G \n H \n I \n J \n K \n L \n M \n N O \n Q \n Q \n R \n S \n T \n U \n V \n W \n X \n Y \n Z \n A \n B \n C \n D \n E \n F \n G \n H \n I \n J \n K \n L \n M \n N \n O P \n R \n R \n S \n T \n U \n V \n W X \n Y \n Z \n A \n B \n C \n D \n E \n F \n G \n H \n I \n J \n K \n L \n M \n N \n O \n P \n Q \n S \n S \n T \n U \n V \n W X \n Y \n Z \n A \n B \n C \n D \n E \n F \n G \n H \n I \n J \n K \n L \n M \n N \n O \n P \n Q R \n T \n T \n U \n V \n W \n X \n Y \n Z \n A \n B \n C \n D \n E \n F \n G \n H \n I \n J \n K \n L \n M N \n O \n P \n Q \n R \n S \n U \n U \n V \n W X \n Y \n Z \n A \n B \n C \n D \n E \n F \n G \n H \n I \n J \n K \n L \n M \n N \n O \n P \n Q 
\n R \n S \n T \n V \n V \n W X \n Y \n Z \n A \n B \n C \n D \n E \n F \n G \n H \n I \n J \n K \n L \n M \n N \n O \n P \n Q \n R \n S \n T \n U \n W \n W \n X \n Y \n Z \n A \n B \n C \n D \n E \n F \n G \n H \n I \n J \n K \n L \n M \n N \n O \n P \n Q \n R \n S \n T \n U V \n X \n X \n Y \n Z \n A \n B \n C \n D \n E \n F \n G \n H \n I \n J \n K \n L \n M \n N \n O \n P \n Q \n R \n S \n T \n U \n V W \n Y \n Y \n Z \n A \n B \n C \n D \n E \n F \n G \n H \n I \n J \n K \n L \n M \n N \n O \n P \n Q \n R \n S \n T \n U \n V \n W X \n Z \n Z \n A \n B \n C \n D \n E \n F \n G \n H \n I \n J \n K \n L \n M \n N \n O \n P \n Q \n R \n S \n T \n U \n V \n W X \n Y \n" }, { "page_number": 64, "text": "Chapter | 2 A Cryptography Primer\n31\n repeat themselves, the highest frequency will be the \nlength of the password. The distance between the two \noccurrences will be the length of the password. In \n Example 2.3 , we see BH and BG repeating, and then we \nsee BG repeating at a very tight interval of 2, which tells \nus the password might be two characters long and based \non two shift ciphers that, when decrypted side by side, \nwill make a real word. Not all bigrams will be indica-\ntors of this, so some care must be taken. As can be seen, \nBH repeats with an interval of 8, but the password is not \neight digits long (but it is a factor of 8!). By locating the \ndistance of all the repeating bigrams and factoring them, \nwe can deduce the length of the keyword. \n 4. MODERN CRYPTOGRAPHY \n Some of cryptography’s greatest stars emerged in WWII. \nFor the first time during modern warfare, vast resources \nwere devoted to enciphering and deciphering commu-\nnications. Both sides made groundbreaking advances in \ncryptography. Understanding the need for massive calcu-\nlations (for the time — more is probably happening in the \nRAM of this author’s PC over a period of five minutes \nthan happened in all of WWII), both sides developed new \nmachinery — predecessors to the modern solid-state com-\nputers — that could be coordinated to perform the calcula-\ntions and procedures needed to crack enemy ciphers. \n The Vernam Cipher (Stream Cipher) \n Gilbert Sandford Vernam (1890 – 1960) was said to have \ninvented the stream cipher in 1917. Vernam worked for \n In 1863 Kasiski, a Prussian major, proposed a \nmethod to crack the Vigen è re cipher. His method, in \nshort, required that the cryptographer deduce the length \nof the keyword used and then dissect the cryptogram into \na corresponding number of ciphers. Each cipher would \nthen be solved independently. The method required \nthat a suitable number of bigrams be located. A bigram \nis a portion of the ciphertext, two characters long, that \nrepeats itself in a discernible pattern. In Example 2.3 , a \nrepetition has been deliberately made simple with a short \nkeyword ( toto ) and engineered by crafting a harmonic \nbetween the keyword and the plaintext. \n This might seem an oversimplification, but it effec-\ntively demonstrates the weakness of the polyalphabetic \ncipher. Similarly, the polyalphanumeric ciphers, such as \nthe Gronsfeld cipher, are even weaker since they use 26 \nletters and 10 digits. This one also happens to decrypt \nto “ On of when on of, ” but a larger sample with such \na weak keyword would easily be cracked by even the \nleast intelligent of Web-based cryptogram solvers. 
The \nharmonic is created by the overlaying keyword with \nthe underlying text; when the bigrams “ line up ” and \n TABLE 2.8 The multiplication table is the inspiration for the Vigen è re tableau \n Multiplier 1 \n 2 \n 3 \n 4 \n 5 \n 6 \n 7 \n 8 \n 9 \n 10 \n 1 \n 1 \n 2 \n 3 \n 4 \n 5 \n 6 \n 7 \n 8 \n 9 \n 10 \n 2 \n 2 \n 4 \n 6 \n 8 \n 10 \n 12 \n 14 \n 16 \n 18 \n 20 \n 3 \n 3 \n 6 \n 9 \n 12 \n 15 \n 18 \n 21 \n 24 \n 27 \n 30 \n 4 \n 4 \n 8 \n 12 \n 16 \n 20 \n 24 \n 28 \n 32 \n 36 \n 40 \n 5 \n 5 \n 10 \n 15 \n 20 \n 25 \n 30 \n 35 \n 40 \n 45 \n 50 \n 6 \n 6 \n 12 \n 18 \n 24 \n 30 \n 36 \n 42 \n 48 \n 54 \n 60 \n 7 \n 7 \n 14 \n 21 \n 28 \n 35 \n 42 \n 49 \n 56 \n 63 \n 70 \n 8 \n 8 \n 16 \n 24 \n 32 \n 40 \n 48 \n 56 \n 64 \n 72 \n 80 \n 9 \n 9 \n 18 \n 27 \n 36 \n 45 \n 54 \n 63 \n 72 \n 81 \n 90 \n 10 \n 10 \n 20 \n 30 \n 40 \n 50 \n 60 \n 70 \n 80 \n 90 \n 100 \n Example 2.3 A repetitious, weak keyword \ncombines with plaintext to produce an \neasily deciphered ciphertext. \n Keyword \n to to toto \nto to toto o to \n Plaintext \n It is what it is, isn’t it? \n Ciphertext \n BH BG PVTH BH BG BGG H BH \n" }, { "page_number": 65, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n32\n Bell Labs, and his patent described a cipher in which a \nprepared key, on a paper tape, combined with plaintext to \nproduce a transmitted ciphertext message. The same tape \nwould then be used to decrypt the ciphertext. In effect, \nthe Vernam and “ one-time pad ” ciphers are very similar. \nThe primary difference is that the “ one-time pad ” cipher \nimplements an XOR for the first time and dictates that a \ntruly random stream cipher be used for the encryption. \nThe stream cipher had no such requirement and used a dif-\nferent method of relay logic to combine a pseudo-random \nstream of bits with the plaintext bits. More about the XOR \nprocess is discussed in the section on XOR ciphering. In \npractice today, the Vernam cipher is any stream cipher in \nwhich pseudo-random or random text is combined with \nplaintext to produce cipher text that is the same length as \nthe cipher. RC4 is a modern example of a Vernam cipher. \n The One-Time Pad \n The “ one-time pad ” cipher, attributed to Joseph \nMauborgne, 3 is perhaps one of the most secure forms of \ncryptography. It is very difficult to break if used properly, \nand if the key stream is perfectly random, the ciphertext \ngives away absolutely no details about the plaintext, \nwhich renders it unbreakable. And, just as the name sug-\ngests, it uses a single random key that is the same length \nas the entire message, and it uses the key only once. The \nword pad is derived from the fact that the key would be \ndistributed in pads of paper, with each sheet torn off and \ndestroyed as it was used. \n There are several weaknesses to this cipher. We begin to \nsee that the more secure the encryption, the more it will rely \non other means of key transmission. The more a key has to \nbe moved around, the more likely it is that someone who \nshouldn’t have it will have it. The following weaknesses are \napparent in this “ bulletproof ” style of cryptography: \n ● Key length has to equal plaintext length. \n ● It is susceptible to key interception; the key must be \ntransmitted to the recipient, and the key is as long as \nthe message! \n ● It’s cumbersome, since it doubles the traffic on the line. \n ● The cipher must be perfectly random. \n ● One-time use is absolutely essential. 
As soon as two \nseparate messages are available, the messages can be \ndecrypted. Example 2.4 demonstrates this. \n Since most people don’t use binary, the author takes \nthe liberty in Example 2.4 of using decimal numbers \n Example 2.4 Using the random cipher, a modulus shift instead of an XOR, and plaintext \nto produce ciphertext. \n Plaintext 1 \n t h i s w i l l b e s o e a s y t o b r e a k i t w i l l b e f u n n y \n 20 8 9 19 23 9 12 12 2 5 19 15 5 1 19 25 20 15 2 18 5 1 11 9 20 23 9 12 12 2 5 6 21 14 14 25 \n Cipher One \n q e r t y u i o p a s d f g h j k l z x c v b n m q a z w s x e r f v t \n 17 5 18 20 25 21 9 15 16 1 19 4 6 7 8 10 11 12 26 24 3 22 2 14 13 17 1 26 23 19 24 5 18 6 22 20 \n CipherText 1 \n 11 13 1 13 22 4 21 1 18 6 12 19 11 8 1 9 5 1 2 16 8 23 13 23 7 14 10 12 9 21 3 11 13 20 10 19 \n k m a m v d u a r f l s k h a i e a b p h w m w g n j l w u c k m t j s \n Plaintext 2 \n T h i s w i l l n o t b e e a s y t o b r e a k o r b e t o o f u n n y \n 20 8 9 19 23 9 12 12 14 15 20 2 5 5 1 19 25 20 15 2 18 5 1 11 15 18 2 5 20 15 15 6 21 14 14 25 \n Ciphertext 2, also using Cipher One. \n 11 13 1 13 22 4 21 1 4 16 13 6 11 12 9 3 10 6 15 0 21 1 3 25 2 9 3 5 17 8 13 11 13 20 10 19 \n k m a m v d u a e p m f k l i f j f o z u a c y b i c e q h m k m t j s \n 3 Wikipedia, Gilbert Vernam entry. \n" }, { "page_number": 66, "text": "Chapter | 2 A Cryptography Primer\n33\nmodulus 26 to represent the XOR that would take place \nin a bitstream encryption (see the section on the XOR \ncipher) that uses the method of the one-time pad. \n A numeric value is assigned to each letter, per Table \n2.9 . By assigning a numeric value to each letter, add-\ning the plaintext value to the ciphertext value, modu-\nlus 26, yields a pseudo-XOR, or a wraparound Caesar \nshift that has a different shift for each letter in the entire \nmessage. \n As this example demonstrates, by using the same \ncipher twice, a dimension is introduced that allows for \nthe introduction of frequency analysis. By placing the \ntwo streams side by side, we can identify letters that are \nthe same. In a large enough sample, where the cipher \ntext is sufficiently randomized, frequency analysis of the \naligned values will begin to crack the cipher wide open \nbecause we know that they are streaming in a logical \norder — the order in which they were written. One of the \nchief advantages of 21st-century cryptography is that the \n “ eggs ” are scrambled and descrambled during decryption \nbased on the key, which you don’t, in fact, want people \nto know. If the same cipher is used repeatedly, multiple \ninferences can be made and eventually the entire key can \nbe deconstructed. Because plaintext 1 and plaintext 2 are \nso similar, this sample yields the following harmonics \n(in bold and boxed) as shown in Example 2.5 . \n Cracking Ciphers \n One method of teasing out the frequency patterns is \nthrough the application of some sort of mathematical \nformula to test a hypothesis against reality. The chi-\nsquare test is perhaps one of the most commonly used; \nit allows someone to use what is called inferential statis-\ntics to draw certain inferences about the data by testing \nit against known statistical distributions. \n Using the chi-square test against an encrypted text \nwould allow certain inference to be made, but only where \nthe contents, or the type of contents (random or of an \nexpected distribution), of the text were known. For exam-\nple, someone may use a program that encrypts files. 
By creating the null hypothesis that the text is completely random, and by reversing the encryption steps, a block cipher may emerge as the null hypothesis is disproved through the chi-square test. This would be done by reversing the encryption method and XORing against the bytes with a block created from the known text. At the point where the nonencrypted text matches the positioning of the encrypted text, chi-square would reveal that the output is not random and the block cipher would be revealed.

 Chi-squared = Σ (observed − expected)² / expected

 Observed would be the actual zero/one ratio produced by XORing the data streams together, and expected would be the randomness of zeroes and ones (50/50) expected in a body of pseudorandom text.

 TABLE 2.9 A simple key is created so that random characters and regular characters may be combined with a modulus function. Without the original cipher, this key is meaningless intelligence. It is used here in a similar capacity as an XOR, which is also a function that everyone knows how to do.

 Key:  a  b  c  d  e  f  g  h  i  j  k  l  m  n  o  p  q  r  s  t  u  v  w  x  y  z
       1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26

 Example 2.5

 k m a m v d u a r f l s k h a i e a b p h w m w g n j l w u c k m t j s (ciphertext 1)
 k m a m v d u a e p m f k l i f j f o z u a c y b i c e q h m k m t j s (ciphertext 2)

 The two ciphertexts, side by side, show a high level of harmonics. This indicates that two different ciphertexts actually have the same cipher. Where letters are different, since XOR is a known process and our encryption technique is also publicly known, it's a simple matter to say that r = 18, e = 5 (see Table 2.9) and thus construct an algorithm that can tease apart the cipher and ciphertext to produce plaintext.

 Independent of having a portion of the text, a large body of encrypted text could be reverse encrypted using a block size of all zeroes; in this manner it may be possible to tease out a block cipher by searching for nonrandom block-sized strings. Modern encryption techniques generate many, many block cipher permutations that are layered against previous iterations (n − 1) of permutated blocks. The feasibility of running such decryption techniques would require a heavy-duty programmer and statistician, an incredible amount of processing power, and in-depth knowledge of the encryption algorithm used. An unpublished algorithm would render such testing worthless.

 Notably, the methods and procedures used in breaking encryption algorithms are used throughout society in many applications where a null hypothesis needs to be tested. Forensic consultants use pattern matching and similar decryption techniques to combat fraud on a daily basis. Adrian Fleissig, a seasoned economist, makes use of many statistical tests to examine corporate data (see the sidebar, "Some Statistical Tests for Cryptographic Applications").

 The XOR Cipher and Logical Operands

 In practice, the XOR cipher is not so much a cipher as it is a mechanism whereby ciphertext is produced. Random binary stream cipher would be a better term.
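 To see why "random binary stream cipher" is an apt name, consider the following minimal sketch, assuming Python and an illustrative xor_stream helper of this sketch's own invention. It also shows the keystream-reuse weakness discussed above: XORing two ciphertexts that share a keystream cancels the keystream and leaves only the XOR of the two plaintexts.

 import os

 def xor_stream(data, keystream):
     # XOR each message byte with the corresponding keystream byte.
     return bytes(d ^ k for d, k in zip(data, keystream))

 keystream = os.urandom(32)   # random keystream at least as long as the message

 ciphertext1 = xor_stream(b"this will be so easy to break", keystream)
 ciphertext2 = xor_stream(b"this will not be easy to brea", keystream)

 # Decryption is the same operation; XORing twice with the same byte restores it.
 assert xor_stream(ciphertext1, keystream) == b"this will be so easy to break"

 # Keystream reuse: ciphertext1 XOR ciphertext2 equals plaintext1 XOR plaintext2.
 # The keystream drops out entirely, which is the aberration exploited above.
 leakage = xor_stream(ciphertext1, ciphertext2)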
The terms \n XOR, logical disjunction , and inclusive disjunction may \nbe used interchangeably. Most people are familiar with \nthe logical functions of speech, which are words such as \n and, or, nor, and not . A girl can tell her brother, “ Mother \nis either upstairs or at the neighbor’s, ” which means she \ncould be in either state, but you have no way of knowing \nwhich one it is. The mother could be in either place, and \nyou can’t infer from the statement the greater likelihood \nof either. The outcome is undecided. \n Alternately, if a salesman offers a customer either \na blue car or a red car, the customer knows that he can \nhave red or he can have blue. Both statements are true. \nBlue cars and red cars exist simultaneously in the world. \nA person can own both a blue car and a red car. But \nMother will never be in more than one place at a time. \nPurportedly, there is widespread belief that no author \nhas produced an example of an English or sentence that \nappears to be false because both of its inputs are true. 5 \nQuantum physics takes considerable exception to this \nstatement (which explains quantum physicists) at the \n 5 Barrett and Stenner, The myth of the exclusive “ Or, ” Mind , 80 (317), \n116 – 121, 1971. \n Some Statistical Tests for Cryptographic Applications \nBy Adrian Fleissig \n In many applications, it is often important to determine if \na sequence is random. For example, a random sequence \nprovides little or no information in cryptographic analy-\nsis. When estimating economic and financial models, it is \nimportant for the residuals from the estimated model to be \nrandom. Various statistical tests can be used to evaluate \nif a sequence is actually a random sequence or not. For \na truly random sequence, it is assumed that each element \nis generated independently of any prior and/or future ele-\nments. A statistical test is used to compute the probability \nthat the observed sequence is random compared to a truly \nrandom sequence. The procedures have test statistics that \nare used to evaluate the null hypothesis which typically \nassumes that the observed sequence is random. The alter-\nnative hypothesis is that the sequence is non random. Thus \nfailing to accept the null hypothesis, at some critical level \nselected by the researcher, suggests that the sequence may \nbe non random. \n There are many statistical tests to evaluate for random-\nness in a sequence such as Frequency Tests, Runs Tests, \nDiscrete Fourier Transforms, Serial Tests and many others. \nThe tests statistics often have chi-square or standard nor-\nmal distributions which are used to evaluate the hypoth-\nesis. While no test is overall superior to the other tests, a \nFrequency or Runs Test is a good starting point to exam-\nine for non-randomness in a sequence. As an example, a \nFrequency or Runs Test typically evaluates if the number of \nzeros and ones in a sequence are about the same, as would \nbe the case if the sequence was truly random. \n It is important to examine the results carefully. For exam-\nple, the researcher may incorrectly fail to accept the null \nhypothesis that the sequence is random and thereby makes \na Type I Error. Incorrectly accepting the null of randomness \nwhen the sequence is actually non random results in com-\nmitting a Type II Error. The reliability of the results depends \non having a sufficiently large number of elements in a \nsequence. In addition, it is important to perform alternative \ntests to evaluate if a sequence is random. 
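 The Frequency and Runs Tests mentioned in the sidebar are simple enough to sketch. The following is a minimal illustration, assuming only Python's standard library; the function names and the 1.96 cutoff (the 5% critical level of a standard normal distribution) are illustrative choices, not prescriptions from the sidebar.

 import math

 def frequency_statistic(bits):
     # Monobit frequency test: in a truly random sequence the count of ones
     # should be close to half the length.  Returns an approximate z-score.
     n = len(bits)
     return (2 * sum(bits) - n) / math.sqrt(n)

 def run_count(bits):
     # A run is a maximal block of identical bits; far too few or too many
     # runs suggests the sequence is not random.
     return 1 + sum(1 for a, b in zip(bits, bits[1:]) if a != b)

 sample = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0]
 z = frequency_statistic(sample)
 print("z =", round(z, 2), "runs =", run_count(sample))
 if abs(z) > 1.96:     # two-sided test at the 5% critical level
     print("Reject the null hypothesis that the sequence is random")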
 4 Adrian Fleissig is the Senior Economist of Counsel for RGL Forensics, 2006–present. He is also a Full Professor, California State University Fullerton (CSUF), since 2003, with a joint Ph.D. in Economics and Statistics, North Carolina State University, 1993.

 quantum mechanical level. In the Schrödinger cat experiment, the sentence "The cat is alive or dead" or the statement "The photon is a particle and a wave until you look at it, then it is a particle or a wave, depending on how you observed it" both create a quandary for logical operations, and there are no Venn diagrams or words that are dependent on time or quantum properties of the physical universe. Regardless of this exception, when speaking of things in the world in a more rigorously descriptive fashion (in the macroscopically nonphenomenological sense), greater accuracy is needed.

 To create a greater sense of accuracy in discussions of logic, the operands listed in Figure 2.2 were created. When attempting to understand this chart, the best thing to do is to assign a word to the A and B values and think of each Venn diagram as a universe of documents, perhaps in a document database or just on a computer being searched. If A stands for the word tree and B for frog, then each letter simply takes on a very significant and distinct meaning.

 In computing, it is traditional that a value of 0 is false and a value of 1 is true. An XOR operation, then, is the determination of whether two possibilities can be combined to produce a value of true or false, based on whether both operations are true, both are false, or only one of the values is true.

 1 XOR 1 = 0
 0 XOR 0 = 0
 1 XOR 0 = 1
 0 XOR 1 = 1

 In an XOR operation, if the two inputs are different, the resultant is TRUE, or 1. If the two inputs are the same, the resultant value is FALSE, or 0.

 In Example 2.6, the first string represents the plaintext and the second line represents the cipher. The third line represents the ciphertext. If, and only if, exactly one of the inputs has a value of TRUE, the result of the XOR operation will be true.

 Without the cipher, and if the cipher is truly random, decoding the string becomes impossible. However, as in the one-time pad, if the same cipher is used, then (1) the cryptography becomes vulnerable to a known-text attack, and (2) it becomes vulnerable to statistical analysis. Example 2.7 demonstrates this by showing exactly where the statistical aberration can be culled in the stream. If we know they both used the same cipher, can anyone solve for Plaintext A and Plaintext B?

 Block Ciphers

 Block ciphers work very similarly to the polyalphabetic cipher with the exception that a block cipher pairs together two algorithms for the creation of ciphertext and its decryption. It is also somewhat similar in that, where the polyalphabetic cipher used a repeating key, the block cipher uses a permutating, yet repeating, cipher block. Each algorithm uses two inputs: a key and a "block" of bits, each of a set size. Each output block is the same size as the input block, the block being transformed by the key. The key, which is algorithm based, is able to select the permutation of its bijective mapping over the 2^n possible input blocks, where n is equal to the number of bits in the input block.
\nOften, when 128-bit encryption is discussed, it is refer-\nring to the size of the input block. Typical encryption \nmethods involve use of XOR chaining or some similar \noperation; see Figure 2.3 . \n Block ciphers have been very widely used since 1976 \nin many encryption standards. As such, cracking these \nciphers became, for a long time, the top priority of cipher \ncrackers everywhere. Block ciphers provide the backbone \nalgorithmic technology behind most modern-era ciphers. \n FIGURE 2.2 In each Venn diagram, the possible outcome of two \ninputs is decided. \n Example 2.6 Line 1 and line 2 are \ncombined with an XOR operand to \nproduce line 3. \n Line 1, plaintext : \n 1 0 0 1 1 1 0 1 0 1 1 0 1 1 1 1 \n Line 2, random cipher \" \" : 1 0 0 0 1 1 0 1 0 1 0 0 1 0 0 1 \n Line 3, XOR ciphertext: 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 \n" }, { "page_number": 69, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n36\n 5. THE COMPUTER AGE \n To many people, January 1, 1970, is considered the dawn \nof the computer age. That’s when Palo Alto Research \nCenter (PARC) in California introduced modern com-\nputing; the graphical user interface (no more command \nline and punch cards), networking on an Ethernet, and \nobject-oriented programming have all been attributed \nto PARC. The 1970s also featured the Unix clock, Alan \nShepherd on the moon, the U.S. Bicentennial, the civil \nrights movement, women’s liberation, Robert Heinlein’s \nsci-fi classic, Stranger in a Strange Land , and, most \nimportant to this chapter, modern cryptography. The \nlate 1960s and early 1970s changed the face of the mod-\nern world at breakneck speed. Modern warfare reached \ntentative heights with radio-guided missiles, and war-\nfare needed a new hero. And then there was the Data \nEncryption Standard, or DES; in a sense DES was the \nturning point for cryptography in that, for the first time, \nit fully leveraged the power of modern computing in its \nalgorithms. The sky appeared to be the limit, but, unfor-\ntunately for those who wanted to keep their information \nsecure, decryption techniques were not far behind. \n Data Encryption Standard \n In the mid-1970s the U.S. government issued a public \nspecification, through its National Bureau of Standards \n(NBS), called the Data Encryption Standard or, most \ncommonly, DES. This could perhaps be considered \n Example 2.7 \nCipher-Block Chaining (CBC)\nPlaintext\nBlock Cipher\nEncryption\nBlock Cipher\nEncryption\nCiphertext\nKey\nKey\nKey\nCiphertext\nCiphertext\nPlaintext\nPlaintext\nBlock Cipher\nEncryption\n FIGURE 2.3 XOR chaining, or cipher-block chaining (CBC), is a method whereby the next block of plaintext to be encrypted is XOR’d with the \nprevious block of ciphertext before being encrypted. \n To reconstruct the cipher if the plaintext is known, \nPlaintextA can be XOR’d to ciphertextB to produce \ncipherA! Clearly, in a situation where plaintext may \nbe captured, using the same cipher key twice could \ncompletely expose the message. By using statistical \nanalysis, the unique possibilities for PlaintextA and \nPlaintextB will emerge; unique possibilities means that \nfor ciphertext \u0003 x , where the cipher is truly random, \nthis should be at about 50% of the sample. 
Additions of \nciphertext n \u0002 1 will increase the possibilities for unique \ncombinations because, after all, these binary streams \nmust be converted to text and the set of binary stream \npossibilities that will combine into ASCII characters is \nrelatively small. Using basic programming skills, you \ncan develop algorithms that will quickly and easily sort \nthrough this data to produce a deciphered result. An \nintelligent person with some time on her hands could \nsort it out on paper or in an Excel spreadsheet. When the \nchoice is “ The red house down the street from the green \nhouse is where we will meet ” or a bunch of garbage, it \nbegins to become apparent how to decode the cipher. \n CipherA and PlaintextA are XOR’d to produce ciphertextA: \n PlaintextA: 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 \n cipherA: 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 \n ciphertextA: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 \n PlaintextB and cipherA are XOR’d to produce ciphertextB: \n ciphertextB: 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 \n cipherA: 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 \n PlaintextB: 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 \n | \u0004 ----- Column 1 --------- \u0005 | | \u0004 --------Column 2 ------ | \n Note: Compare ciphertextA to ciphertextB! \n" }, { "page_number": 70, "text": "Chapter | 2 A Cryptography Primer\n37\n the dawn of modern cryptography because it was very \nlikely the first block cipher, or at least its first wide-\nspread implementation. But the 1970s were a relatively \nuntrusting time. “ Big Brother ” loomed right around the \ncorner (as per George Orwell’s 1984 ), and the majority \nof people didn’t understand or necessarily trust DES. \nIssued under the NBS, now called the National Institute \nof Standards and Technology (NIST), hand in hand with \nthe National Security Agency (NSA), DES led to tre-\nmendous interest in the reliability of the standard among \nacademia’s ivory towers. A shortened key length and \nthe implementation of substitution boxes, or “ S-boxes, ” \nin the algorithm led many to think that the NSA had \ndeliberately weakened the algorithms and left a security \n “ back door ” of sorts. \n The use of S-boxes in the standard was not gener-\nally understood until the design was published in 1994 \nby Don Coppersmith. The S-boxes, it turned out, had \nbeen deliberately designed to prevent a sort of crypta-\nnalysis attack called differential cryptanalysis , as was \ndiscovered by IBM researchers in the early 1970s; the \nNSA had asked IBM to keep quiet about it. In 1990 the \nmethod was “ rediscovered ” independently and, when \nused against DES, the usefulness of the S-boxes became \nreadily apparent. \n Theory of Operation \n DES used a 64-bit block cipher combined with a mode \nof operation based on cipher-block chaining (CBC) \ncalled the Feistel function . This consisted of an initial \nexpansion permutation followed by 16 rounds of XOR \nkey mixing via subkeys and a key schedule, substitution \n(S-boxes), and permutation. 6 In this strategy, a block is \nincreased from 32 bits to 48 bits (expansion permuta-\ntion). Then the 48-bit block is divided in half. The first \nhalf is XORs, with parts of the key according to a key \nschedule. These are called subkeys. Figure 2.4 shows \nthis concept in a simplified format. \n The resulting cipher is then XOR’d with the half of \nthe cipher that was not used in step 1. The two halves \nswitch sides. 
Substitution boxes reduce the 48 bits down \nto 32 bits via a nonlinear function and then a permuta-\ntion, according to a permutation table, takes place. Then \nthe entire process is repeated again, 16 times, except in \nthe last step the two halves are not flipped. Finally, this \ndiffusive strategy produced via substitution, permuta-\ntion, and key schedules creates an effective ciphertext. \nBecause a fixed-length cipher, a block cipher, is used, \n FIGURE 2.4 The Feistel function with a smaller key size. \n 6 A. Sorkin, LUCIFER: A cryptographic algorithm, Cryptologia , 8(1), \n22 – 35, 1984. \n" }, { "page_number": 71, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n38\n the permutations and the S-box introduce enough con-\nfusion that the cipher cannot be deduced through brute-\nforce methods without extensive computing power. \n With the increase in size of hard drives and compu-\nter memory, the need for disk space and bandwidth still \ndemand that a block cipher algorithm be portable. DES, \nTriple DES, and the Advanced Encryption Standard \n(AES) all provide or have provided solutions that are \nsecure and practical. \n Implementation \n Despite the controversy at the time, DES was imple-\nmented. It became the encryption standard of choice \nuntil the late 1990s, when it was broken when Deep \nCrack and distributed.net broke a DES key in 22 hours \nand 15 minutes. Later that year a new form of DES \ncalled Triple DES, which encrypted the plaintext in three \niterations, was published. It remained in effect until \n2002, when it was superseded by AES. \n Rivest, Shamir, and Adleman (RSA) \n The release of DES also included the creation and \nrelease of Ron Rivest, Adi Shamir, and Leonard \nAdleman’s encryption algorithm (RSA). Rivest, Shamir, \nand Adleman, based at the Massachusetts Institute of \nTechnology (MIT), publicly described the algorithm in \n1977. RSA is the first encryption standard to introduce \n(to public knowledge) the new concept of digital signing. \nIn 1997 it was revealed through declassification of papers \nthat Clifford Cocks, a British mathematician working for \nthe U.K. Government Communications Headquarters \n(GCHQ), had, in 1973, written a paper describing this \nprocess. Assigned a status of top secret, the work had pre-\nviously never seen the light of day. Because it was submit-\nted in 1973, the method had been considered unattainable, \nsince computing power at the time could not handle its \nmethods. \n Advanced Encryption Standard (AES or \nRijndael) \n AES represents one of the latest chapters in the history \nof cryptography. It is currently one of the most popular \nof encryption standards and, for people involved in any \nsecurity work, its occurrence on the desktop is frequent. \nIt also enjoys the free marketing and acceptance that it \nreceived when it was awarded the title of official cryp-\ntography standard in 2001. 7 This designation went into \neffect in May of the following year. \n Similarly to DES, AES encrypts plaintext in a series \nof rounds, involves the use of a key and block sizes, and \nleverages substitution and permutation boxes. It differs \nfrom DES in the following respects: \n ● It supports 128-bit block sizes. \n ● The key schedule is based on the S-box. \n ● It expands the key, not the plaintext. \n ● It is not based on a Feistel cipher. \n ● It is extremely complex. 
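 To make the contrast concrete, here is a minimal usage sketch of AES in the cipher-block chaining mode illustrated in Figure 2.3. It assumes the third-party Python cryptography package, chosen here only for illustration; it shows how such a cipher is driven, not how its rounds work internally.

 import os
 from cryptography.hazmat.backends import default_backend
 from cryptography.hazmat.primitives import padding
 from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

 key = os.urandom(32)    # a 256-bit AES key
 iv = os.urandom(16)     # initialization vector that seeds the CBC chain

 # Pad the message to a multiple of AES's 128-bit block size (PKCS#7).
 padder = padding.PKCS7(128).padder()
 padded = padder.update(b"The red house down the street") + padder.finalize()

 encryptor = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend()).encryptor()
 ciphertext = encryptor.update(padded) + encryptor.finalize()

 # Decryption reverses the chain with the same key and initialization vector.
 decryptor = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend()).decryptor()
 unpadder = padding.PKCS7(128).unpadder()
 recovered = unpadder.update(decryptor.update(ciphertext) + decryptor.finalize()) + unpadder.finalize()
 assert recovered == b"The red house down the street"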
\n The AES algorithms are to symmetric ciphers what \na bowl of spaghetti is to the shortest distance between \ntwo points. Through a series of networked XOR opera-\ntions, key substitutions, temporary variable transforma-\ntions, increments, iterations, expansions, value swapping, \nS-boxing, and the like, a very strong encryption is created \nthat, with modern computing, is impossible to break. It is \nconceivable that, with so complex a series of operations, \na computer file and block could be combined in such a \nway as to produce all zeroes. Theoretically, the AES \ncipher could be broken by solving massive quadratic \nequations that take into consideration every possible vec-\ntor and solve 8000 quadratic equations with 1600 binary \nunknowns. This sort of an attack is called an algebraic \nattack and, where traditional methods such as differen-\ntial or differential cryptanalysis fail, it is suggested that \nthe strength in AES lies in the current inability to solve \nsupermultivariate quadratic equations with any sort of \nefficiency. \n Reports that AES is not as strong as it should be \nare likely, at this time, to be overstated and inaccurate, \nbecause anyone can present a paper that is dense and \ndifficult to understand and claims to achieve the incred-\nible. 8 It is unlikely that, any time in the near or maybe \nnot-so-near future (this author hedges his bets), AES \nwill be broken using multivariate quadratic polynomials \nin thousands of dimensions. Mathematica is very likely \none of the most powerful tools that can solve quadratic \nequations, and it is still many years away from being \nable to perform this feat. \n 7 U.S. FIPS PUB 197 (FIPS 197), November 26, 2001. \n 8 Bruce Schneier, Crypto-Gram Newsletter , September 15, 2002. \n" }, { "page_number": 72, "text": "39\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n Preventing System Intrusions \n Michael West \n Independent senior technical writer \n Chapter 3 \n The moment you establish an active Web presence, you \nput a target on your company’s back. And like the hap-\nless insect that lands in the spider’s web, your compa-\nny’s size determines the size of the disturbance you \ncreate on the Web — and how quickly you’re noticed by \nthe bad guys. How attractive you are as prey is usually \ndirectly proportionate to what you have to offer a preda-\ntor. If yours is an ecommerce site whose business thrives \non credit card or other financial information or a com-\npany with valuable secrets to steal, your “ juiciness ” quo-\ntient goes up; you have more of value there to steal. And \nif your business is new and your Web presence is recent, \nthe assumption could be made that perhaps you’re not \nyet a seasoned veteran in the nuances of cyber warfare \nand, thus, are more vulnerable to an intrusion. \n Unfortunately for you, many of those who seek to \npenetrate your network defenses are educated, moti-\nvated, and quite brilliant at developing faster and more \nefficient methods of quietly sneaking around your perim-\neter, checking for the smallest of openings. Most IT \nprofessionals know that an enterprise’s firewall is cease-\nlessly being probed for weaknesses and vulnerabilities \nby crackers from every corner of the globe. Anyone who \nfollows news about software understands that seemingly \nevery few months, word comes out about a new, exploit-\nable opening in an operating system or application. 
It’s \nwidely understood that no one — not the most savvy net-\nwork administrator or the programmer who wrote the \nsoftware — can possibly find and close all the holes in \ntoday’s increasingly complex software. \n Bugs exist in applications, operating systems, server \nprocesses (daemons), and clients. System configurations \ncan also be exploited, such as not changing the default \nadministrator’s password or accepting default system \nsettings, or unintentionally leaving a hole open by con-\nfiguring the machine to run in a nonsecure mode. Even \nTransmission Control Protocol/Internet Protocol (TCP/\nIP), the foundation on which all Internet traffic operates, \ncan be exploited, since the protocol was designed before \nthe threat of hacking was really widespread. Therefore \nit contains design flaws that can allow, for example, a \ncracker to easily alter IP data. \n Once the word gets out that a new and exploitable \nopening exists in an application (and word will get out), \ncrackers around the world start scanning sites on the \nInternet searching for any and all sites that have that par-\nticular opening. \n Making your job even harder is the fact that many \nopenings into your network can be caused by your \nemployees. Casual surfing of porn sites can expose the \nnetwork to all kinds of nasty bugs and malicious code, \nmerely by an employee visiting the site. The problem \nis that, to users, it might not seem like such a big deal. \nThey either don’t realize or don’t care that they’re leav-\ning the network wide open to intrusion. \n 1 . SO, WHAT IS AN INTRUSION? \n A network intrusion is an unauthorized penetration \nof a computer in your enterprise or an address in your \nassigned domain. An intrusion can be passive (in which \npenetration is gained stealthily and without detection) \nor active (in which changes to network resources are \neffected). Intrusions can come from outside your net-\nwork structure or inside (an employee, customer, or busi-\nness partner). Some intrusions are simply meant to let \nyou know the intruder was there, defacing your Web site \nwith various kinds of messages or crude images. Others \nare more malicious, seeking to extract critical informa-\ntion on either a one-time basis or as an ongoing parasitic \nrelationship that will continue to siphon off data until \nit’s discovered. Some intruders will seek to implant care-\nfully crafted code designed to crack passwords, record \nkeystrokes, or mimic your site while directing unaware \nusers to their site. Others will embed themselves into \n" }, { "page_number": 73, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n40\n the network and quietly siphon off data on a continuing \nbasis or to modify public-facing Web pages with various \nkinds of messages. \n An attacker can get into your system physically (by \nhaving physical access to a restricted machine and its \nhard drive and/or BIOS), externally (by attacking your \nWeb servers or finding a way to bypass your firewall), or \ninternally (your own users, customers, or partners). \n 2 . SOBERING NUMBERS \n So how often do these intrusions occur? The estimates \nare staggering: Depending on which reporting agency \nyou listen to, anywhere from 79 million to over 160 \nmillion compromises of electronic data occurred world-\nwide between 2007 and 2008 . U.S. 
government statistics show an estimated 37,000 known and reported incidents against federal systems alone in 2007, and the number is expected to rise as the tools employed by crackers become increasingly sophisticated.

 In one case, credit- and debit-card information for over 45 million users was stolen from a large merchant in 2005, and data for an additional 130,000 were lifted in 2006. Merchants reported that the loss would cost them an estimated $5 million.

 Spam continues to be one of the biggest problems faced by businesses today and has been steadily increasing every year. An Internet threat report published by Secure Computing Corporation in October 2008 states, "The acquisition of innocent machines via email and Web-based infections continued in Q3 with over 5000 new zombies created every hour." 1 And in the election year of 2008, election-related spam messages were estimated to exceed 100 million messages per day.

 According to research done by Secure Computing, malware use is also on a steady rise, "with nearly 60% of all malware-infected URLs" coming from the United States and China. Web-related attacks will also become more widespread, with politically and financially motivated attacks topping the list. With the availability of Web attack toolkits increasing, Secure Computing's research estimates that "about half of all Web-borne attacks will likely be hosted on compromised legitimate Web sites."

 Alarmingly, there is also a rise in teenage involvement in cracking. Chris Boyd, director of malware research at FaceTime Security, was quoted in an October 29, 2008, posting on the BBC's Web site as saying that he's "seeing kids of 11 and 12 sharing credit-card details and asking for hacks." 2 Some of the teens have resorted to posting videos of their work on YouTube, not realizing they've thus made it incredibly easy to track them down. But the fact that they exist and are sharing information via well-known venues is worrisome enough; the fumbling teen crackers of today are tomorrow's network security nightmares in the making.

 Whatever the goal of the intrusion — fun, greed, bragging rights, or theft of data — the end result is the same: a weakness in your network security has been detected and exploited. And unless you discover that weakness — the intrusion entry point — it will continue to be an open door into your environment.

 So, just who's out there looking to break into your network?

 3. KNOW YOUR ENEMY: HACKERS VERSUS CRACKERS

 An entire community of people — experts in programming and computer networking and those who thrive on solving complex problems — has been around since the earliest days of computing. The term hacker originated with the members of this culture, and they are quick to point out that it was hackers who built and make the Internet run, and hackers who created the Unix operating system. Hackers see themselves as members of a community who build things and make them work. And to those in their culture, the term hacker is a badge of honor.

 Ask a traditional hacker about people who sneak into computer systems to steal data or cause havoc, and he'll most likely correct you by telling you those people aren't true hackers. (In the hacker community, the term for those types is cracker, and the two labels aren't synonymous.)
So, to not offend traditional hackers, I’ll use \nthe term crackers and focus on them and their efforts. \n From the lone-wolf cracker seeking peer recognition \nto the disgruntled former employee out for revenge or \nthe deep pockets and seemingly unlimited resources of a \nhostile government bent on taking down wealthy capital-\nists, crackers are out there in force, looking to find the \nchink in your system’s defensive armor. \n A cracker’s specialty — or in some cases, his mission \nin life — is seeking out and exploiting vulnerabilities of \nan individual computer or network for their own pur-\nposes. Crackers ’ intentions are normally malicious and/\n 1 “ Internet Threats Report and Predictions for 2009, ” October 27, \n2008, Secure Computing Corporation. \n 2 http://news.bbc.co.uk , October 29, 2008. \n" }, { "page_number": 74, "text": "Chapter | 3 Preventing System Intrusions\n41\n or criminal in nature. They have, at their disposal, a vast \nlibrary of information designed to help them hone their \ntactics, skills, and knowledge, and they can tap into the \nalmost unlimited experience of other crackers through a \ncommunity of like-minded individuals sharing informa-\ntion across underground networks. \n They usually begin this life learning the most basic of \nskills: software programming. The ability to write code \nthat can make a computer do what they want is seduc-\ntive in and of itself. As they learn more and more about \nprogramming, they also expand their knowledge of oper-\nating systems and, as a natural course of progression, \noperating systems ’ weaknesses. They also quickly learn \nthat, to expand the scope and type of their illicit handi-\nwork, they need to learn HTML — the code that allows \nthem to create phony Web pages that lure unsuspecting \nusers into revealing important financial or personal data. \n There are vast underground organizations to which \nthese new crackers can turn for information. They hold \nmeetings, write papers, and develop tools that they pass \nalong to each other. Each new acquaintance they meet \nfortifies their skill set and gives them the training to \nbranch out to more and more sophisticated techniques. \nOnce they gain a certain level of proficiency, they begin \ntheir trade in earnest. \n They start off simply by researching potential tar-\nget firms on the Internet (an invaluable source for all \nkinds of corporate network related information). Once \na target has been identified, they might quietly tiptoe \naround, probing for old forgotten back doors and oper-\nating system vulnerabilities. They can start off simply \nand innocuously by running basic DNS queries that can \nprovide IP addresses (or ranges of IP addresses) as start-\ning points for launching an attack. They might sit back \nand listen to inbound and/or outbound traffic, record IP \naddresses, and test for weaknesses by pinging various \ndevices or users. \n They can surreptitiously implant password cracking \nor recording applications, keystroke recorders, or other \nmalware designed to keep their unauthorized connection \nalive — and profitable. \n The cracker wants to act like a cyber-ninja, sneak-\ning up to and penetrating your network without leaving \nany trace of the incursion. Some more seasoned crack-\ners can put multiple layers of machines, many hijacked, \nbetween them and your network to hide their activ-\nity. 
Like standing in a room full of mirrors, the attack \nappears to be coming from so many locations you can’t \npick out the real from the ghost. And before you real-\nize what they’ve done, they’ve up and disappeared like \nsmoke in the wind. \n 4 . MOTIVES \n Though the goal is the same — to penetrate your network \ndefenses — crackers ’ motives are often different. In some \ncases, a network intrusion could be done from the inside \nby a disgruntled employee looking to hurt the organiza-\ntion or steal company secrets for profit. \n There are large groups of crackers working dili-\ngently to steal credit-card information that they then turn \naround and make available for sale. They want a quick \ngrab and dash — take what they want and leave. Their \ncousins are the network parasites — those who quietly \nbreach your network, then sit there siphoning off data. \n A new and very disturbing trend is the discovery that \ncertain governments have been funding digital attacks \non network resources of both federal and corporate sys-\ntems. Various agencies from the U.S. Department of \nDefense to the governments of New Zealand, France, \nand Germany have reported attacks originating from \nunidentified Chinese hacking groups. It should be \nnoted that the Chinese government denies any involve-\nment, and there is no evidence that it is or was involved. \nFurthermore, in October 2008, the South Korean Prime \nMinister is reported to have issued a warning to his cabi-\nnet that “ about 130,000 items of government information \nhad been hacked [by North Korean computer crackers] \nover the past four years. ” 3 \n 5 . TOOLS OF THE TRADE \n Crackers today are armed with an increasingly sophisti-\ncated and well-stocked tool kit for doing what they do. \nLike the professional thief with his custom-made lock \npicks, crackers today can obtain a frightening array of \ntools to covertly test your network for weak spots. Their \ntools range from simple password-stealing malware and \nkeystroke recorders (loggers) to methods of implanting \nsophisticated parasitic software strings that copy data \nstreams coming in from customers who want to perform \nan ecommerce transaction with your company. Some of \nthe more widely used tools include these: \n ● Wireless sniffers. Not only can these devices locate \nwireless signals within a certain range, they can \nsiphon off the data being transmitted over the signals. \nWith the rise in popularity and use of remote wireless \ndevices, this practice is increasingly responsible for \nthe loss of critical data and represents a significant \nheadache for IT departments. \n 3 Quoted from http://news.theage.com.au , October 15, 2008. \n" }, { "page_number": 75, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n42\n ● Packet sniffers. Once implanted in a network data \nstream, these passively analyze data packets moving \ninto and out of a network interface, and utilities \ncapture data packets passing through a network \ninterface. \n ● Port scanners. A good analogy for these utilities is a \nthief casing a neighborhood, looking for an open or \nunlocked door. These utilities send out successive, \nsequential connection requests to a target system’s \nports to see which one responds or is open to the \nrequest. Some port scanners allow the cracker to \nslow the rate of port scanning — sending connection \nrequests over a longer period of time — so the \nintrusion attempt is less likely to be noticed. 
These \ndevices ’ usual targets are old, forgotten “ back doors, ” \nor ports inadvertently left unguarded after network \nmodifications. \n ● Port knocking. Sometimes network administrators \ncreate a secret back-door method of getting through \nfirewall-protected ports — a secret knock that enables \nthem to quickly access the network. Port-knocking \ntools find these unprotected entries and implant \na Trojan horse that listens to network traffic for \nevidence of that secret knock. \n ● Keystroke loggers. These are spyware utilities \nplanted on vulnerable systems that record a user’s \nkeystrokes. Obviously, when someone can sit back \nand record every keystroke a user makes, it doesn’t \ntake long to obtain things like usernames, passwords, \nand ID numbers. \n ● Remote administration tools. Programs embedded on \nan unsuspecting user’s system that allow the cracker \nto take control of that system. \n ● Network scanners. Explore networks to see the \nnumber and kind of host systems on a network, the \nservices available, the host’s operating system, and \nthe type of packet filtering or firewalls being used. \n ● Password crackers. These sniff networks for data \nstreams associated with passwords, then employ a \nbrute-force method of peeling away any encryption \nlayers protecting those passwords. \n 6 . BOTS \n A new and particularly virulent threat that has emerged \nover the past few years is one in which a virus is surrep-\ntitiously implanted in large numbers of unprotected com-\nputers (usually those found in homes), hijacking them \n(without the owners ’ knowledge) and turning them into \nslaves to do the cracker’s bidding. These compromised \ncomputers, known as bots , are linked in vast and usually \nuntraceable networks called botnets . Botnets are designed \nto operate in such a way that instructions come from \na central PC and are rapidly shared among other botted \ncomputers in the network. Newer botnets are now using \na “ peer-to-peer ” method that, because they lack a cen-\ntral identifiable point of control, makes it difficult if not \nimpossible for law enforcement agencies to pinpoint. And \nbecause they often cross international boundaries into \ncountries without the means (or will) to investigate and \nshut them down, they can grow with alarming speed. They \ncan be so lucrative that they’ve now become the cracker’s \ntool of choice. \n Botnets exist, in large part, because of the number \nof users who fail to observe basic principles of compu-\nter security — installed and/or up-to-date antivirus soft-\nware, regular scans for suspicious code, and so on — and \nthereby become unwitting accomplices. Once taken over \nand “ botted, ” their machines are turned into channels \nthrough which large volumes of unwanted spam or mali-\ncious code can be quickly distributed. Current estimates \nare that, of the 800 million computers on the Internet, \nup to 40% are bots controlled by cyber thieves who are \nusing them to spread new viruses, send out unwanted \nspam email, overwhelm Web sites in denial-of-service \n(DoS) attacks, or siphon off sensitive user data from \nbanking or shopping Web sites that look and act like \nlegitimate sites with which customers have previously \ndone business. \n It’s such a pervasive problem that, according to \na report published by security firm Damballa, 4 bot-\nnet attacks rose from an estimated 300,000 per day in \nAugust 2006 to over 7 million per day one year later, \nand over 90% of what was sent out was spam email. 
\nEven worse for ecommerce sites is a growing trend in \nwhich a site’s operators are threatened with DoS attacks \nunless they pay protection money to the cyber extortion-\nist. Those who refuse to negotiate with these terrorists \nquickly see their sites succumb to relentless rounds of \ncyber “ carpet bombing. ” \n Bot controllers, also called herders , can also make \nmoney by leasing their networks to others who need a \nlarge and untraceable means of sending out massive \namounts of advertisements but don’t have the financial \nor technical resources to create their own networks. \nMaking matters worse is the fact that botnet technol-\nogy is available on the Internet for less than $100, which \nmakes it relatively easy to get started in what can be a \nvery lucrative business. \n 4 Quoted from USA Today , March 17, 2008. \n" }, { "page_number": 76, "text": "Chapter | 3 Preventing System Intrusions\n43\n 7 . SYMPTOMS OF INTRUSIONS \n As stated earlier, your company’s mere presence on the \nWeb places a target on your back. It’s only a matter of \ntime before you experience your first attack. It could be \nsomething as innocent looking as several failed login \nattempts or as obvious as an attacker having defaced \nyour Web site or crippled your network. It’s important \nthat you go into this knowing you’re vulnerable. \n Crackers are going to first look for known weak-\nnesses in the operating system (OS) or any applications \nyou are using. Next, they would start probing, looking \nfor holes, open ports, or forgotten back doors — faults \nin your security posture that can quickly or easily be \nexploited. \n Arguably one of the most common symptoms of an \nintrusion — either attempted or successful — is repeated \nsigns that someone is trying to take advantage of your \norganization’s own security systems, and the tools you \nuse to keep watch for suspicious network activity may \nactually be used against you quite effectively. Tools such \nas network security and file integrity scanners, which \ncan be invaluable at helping you conduct ongoing assess-\nments of your network’s vulnerability, are also available \nand can be used by crackers looking for a way in. \n Large numbers of unsuccessful login attempts are \nalso a good indicator that your system has been targeted. \nThe best penetration-testing tools can be configured with \nattempt thresholds that, when exceeded, will trigger an \nalert. They can passively distinguish between legitimate \nand suspicious activity of a repetitive nature, monitor \nthe time intervals between activities (alerting when the \nnumber exceeds the threshold you set), and build a data-\nbase of signatures seen multiple times over a given period. \n The “ human element ” (your users) is a constant fac-\ntor in your network operations. Users will frequently \nenter a mistyped response but usually correct the error \non the next try. However, a sequence of mistyped com-\nmands or incorrect login responses (with attempts to \nrecover or reuse them) can be a signs of brute-force \nintrusion attempts. \n Packet inconsistencies — direction (inbound or out-\nbound), originating address or location, and session char-\nacteristics (ingoing sessions vs. outgoing sessions) — can \nalso be good indicators of an attack. If a packet has an \nunusual source or has been addressed to an abnormal \nport — say, an inconsistent service request — it could be a \nsign of random system scanning. 
Packets coming from \nthe outside that have local network addresses that request \nservices on the inside can be a sign that IP spoofing is \nbeing attempted. \n Sometimes odd or unexpected system behavior is \nitself a sign. Though this is sometimes difficult to track, \nyou should be aware of activity such as changes to sys-\ntem clocks, servers going down or server processes inex-\nplicably stopping (with system restart attempts), system \nresource issues (such as unusually high CPU activity or \noverflows in file systems), audit logs behaving in strange \nways (decreasing in size without administrator interven-\ntion), or unexpected user access to resources. If you note \nunusual activity at regular times on given days, heavy \nsystem use (possible DoS attack) or CPU use (brute-\nforce password-cracking attempts) should always be \ninvestigated. \n 8 . WHAT CAN YOU DO? \n It goes without saying that the most secure network — the \none that has the least chance of being compromised — is \none that has no direct connection to the outside world. \nBut that’s hardly a practical solution, since the whole \nreason you have a Web presence is to do business. And \nin the game of Internet commerce, your biggest concern \nisn’t the sheep coming in but the wolves dressed like \nsheep coming in with them. So, how do you strike an \nacceptable balance between keeping your network intru-\nsion free and keeping it accessible at the same time? \n As your company’s network administrator, you walk \na fine line between network security and user needs. You \nhave to have a good defensive posture that still allows for \naccess. Users and customers can be both the lifeblood of \nyour business and its greatest potential source of infec-\ntion. Furthermore, if your business thrives on allowing \nusers access, you have no choice but to let them in. It \nseems like a monumentally difficult task at best. \n Like a castle, imposing but stationary, every defen-\nsive measure you put up will eventually be compro-\nmised by the legions of very motivated thieves looking \nto get in. It’s a game of move/countermove: You adjust, \nthey adapt. So you have to start with defenses that can \nquickly and effectively adapt and change as the outside \nthreats adapt. \n First and foremost, you need to make sure that your \nperimeter defenses are as strong as they can be, and \nthat means keeping up with the rapidly evolving threats \naround you. The days of relying solely on a firewall that \nsimply does firewall functions are gone; today’s crackers \nhave figured out how to bypass the firewall by exploiting \nweaknesses in applications themselves. Simply being \nreactive to hits and intrusions isn’t a very good option, \neither; that’s like standing there waiting for someone to \n" }, { "page_number": 77, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n44\n hit you before deciding what to do rather than seeing the \noncoming punch and moving out of its way or blocking \nit. You need to be flexible in your approach to the new-\nest technologies, constantly auditing your defenses to \nensure that your network’s defensive armor can meet the \nlatest threat. You have to have a very dynamic and effec-\ntive policy of constantly monitoring for suspicious activ-\nities that, when discovered, can be quickly dealt with so \nthat someone doesn’t slip something past without your \nnoticing it. Once that happens, it’s too late. 
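 One concrete, if simplistic, form of that constant monitoring is alerting on the repeated failed logins described earlier as a classic symptom. The following minimal sketch assumes Python, an OpenSSH-style authentication log at a Debian-style path, and an arbitrary threshold of five attempts; a real environment would rely on dedicated intrusion detection and monitoring tools rather than a hand-rolled script.

 import collections
 import re

 THRESHOLD = 5    # arbitrary alert threshold for this illustration
 pattern = re.compile(r"Failed password for .+ from (\d+\.\d+\.\d+\.\d+)")

 failures = collections.Counter()
 with open("/var/log/auth.log") as log:    # location varies by operating system
     for line in log:
         match = pattern.search(line)
         if match:
             failures[match.group(1)] += 1

 for source, count in failures.most_common():
     if count >= THRESHOLD:
         print(f"ALERT: {count} failed logins from {source}")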
\n Next, and this is also a crucial ingredient for net-\nwork administrators: You have to educate your users. \nNo matter how good a job you’ve done at tightening up \nyour network security processes and systems, you still \nhave to deal with the weakest link in your armor — your \nusers. It doesn’t do any good to have bulletproof pro-\ncesses in place if they’re so difficult to manage that users \nwork around them to avoid the difficulty, or if they’re so \nloosely configured that a casually surfing user who visits \nan infected site will pass that infection along to your net-\nwork. The degree of difficulty in securing your network \nincreases dramatically as the number of users goes up. \n User education becomes particularly important where \nmobile computing is concerned. Losing a device, using \nit in a place (or manner) in which prying eyes can see \npasswords or data, awareness of hacking tools specifi-\ncally designed to sniff wireless signals for data, and log-\nging on to unsecured networks are all potential problem \nareas with which users need to be familiar. \n Know Today’s Network Needs \n The traditional approach to network security engineering \nhas been to try to erect preventative measures — firewalls — \nto protect the infrastructure from intrusion. The firewall \nacts like a filter, catching anything that seems suspicious \nand keeping everything behind it as sterile as possible. \nHowever, though firewalls are good, they typically don’t \ndo much in the way of identifying compromised appli-\ncations that use network resources. And with the speed \nof evolution seen in the area of penetration tools, an \napproach designed simply to prevent attacks will be less \nand less effective. \n Today’s computing environment is no longer con-\nfined to the office, as it used to be. Though there are still \nfixed systems inside the firewall, ever more sophisti-\ncated remote and mobile devices are making their way \ninto the workforce. This influx of mobile computing has \nexpanded the traditional boundaries of the network to \nfarther and farther reaches and requires a different way \nof thinking about network security requirements. \n Your network’s endpoint or perimeter is mutating — \nexpanding beyond its historical boundaries. Until \nrecently, that endpoint was the user, either a desktop sys-\ntem or laptop, and it was relatively easy to secure those \ndevices. To use a metaphor: The difference between end-\npoints of early network design and those of today is like \nthe difference between the battles of World War II and the \ncurrent war on terror. In the battles of WWII there were \nvery clearly defined “ front lines ” — one side controlled by \nthe Allied powers, the other by the Axis. Today, however, \nthe war on terror has no such front lines and is fought \nin multiple areas with different techniques and strategies \nthat are customized for each combat theater. \n With today’s explosion of remote users and mobile \ncomputing, your network’s endpoint is no longer as clearly \ndefined as it once was, and it is evolving at a very rapid \npace. For this reason, your network’s physical perimeter \ncan no longer be seen as your best “ last line of defense, ” \neven though having a robust perimeter security system is \nstill a critical part of your overall security policy. \n Any policy you develop should be organized in such \na way as to take advantage of the strength of your uni-\nfied threat management (UTM) system. 
Firewalls, antivi-\nrus, and intrusion detection systems (IDSs), for example, \nwork by trying to block all currently known threats — the \n “ blacklist ” approach. But the threats evolve more quickly \nthan the UTM systems can, so it almost always ends up \nbeing an “ after the fact ” game of catch-up. Perhaps a bet-\nter, and more easily managed, policy is to specifically \nstate which devices are allowed access and which appli-\ncations are allowed to run in your network’s applica-\ntions. This “ whitelist ” approach helps reduce the amount \nof time and energy needed to keep up with the rapidly \nevolving pace of threat sophistication, because you’re \nspecifying what gets in versus what you have to keep out. \n Any UTM system you employ should provide the \nmeans of doing two things: specify which applica-\ntions and devices are allowed and offer a policy-based \napproach to managing those applications and devices. \nIt should allow you to secure your critical resources \nagainst unauthorized data extraction (or data leakage), \noffer protection from the most persistent threats (viruses, \nmalware, and spyware), and evolve with the ever-changing \nspectrum of devices and applications designed to pen-\netrate your outer defenses. \n So, what’s the best strategy for integrating these new \nremote endpoints? First, you have to realize that these \nnew remote, mobile technologies are becoming increas-\ningly ubiquitous and aren’t going away anytime soon. \nIn fact, they most likely represent the future of comput-\ning. As these devices gain in sophistication and function, \n" }, { "page_number": 78, "text": "Chapter | 3 Preventing System Intrusions\n45\n they are unchaining end users from their desks and, \nfor some businesses, are indispensible tools. iPhones, \nBlackberries, Palm Treos, and other smart phones and \ndevices now have the capability to interface with corpo-\nrate email systems, access networks, run enterprise-level \napplications, and do full-featured remote computing. \nAs such, they also now carry an increased risk for net-\nwork administrators due to loss or theft (especially if the \ndevice is unprotected by a robust authentication method) \nand unauthorized interception of their wireless signals \nfrom which data can be siphoned off. \n To cope with the inherent risks, you engage an effec-\ntive security policy for dealing with these devices: under \nwhat conditions can they be used, how many of your users \nneed to employ them, what levels and types of access will \nthey have, and how will they be authenticated? \n Solutions are available for adding strong authentica-\ntion to users seeking access via wireless LANs. Tokens, \neither of the hardware or software variety, are used to \nidentify the user to an authentication server for verifi-\ncation of their credentials. For example, PremierAccess \nby Aladdin Knowledge Systems can handle incoming \naccess requests from a wireless access point and, if the \nuser is authenticated, pass them into the network. \n Key among the steps you take to secure your network \nwhile allowing mobile computing is to fully educate the \nusers of such technology. They need to understand, in \nno uncertain terms, the risks to your network (and ulti-\nmately to the company in general) represented by their \nmobile devices and that their mindfulness of both the \ndevice’s physical and electronic security is an absolute \nnecessity. 
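To illustrate the whitelist approach described above, the fragment below reduces it to its essential decision rule: anything not explicitly approved is denied. The device identifiers and application names are hypothetical, and a real UTM appliance would maintain these lists in its management console rather than in source code.

# Hypothetical whitelist policy: only named devices and applications are admitted.
APPROVED_DEVICES = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}    # e.g., known MAC addresses
APPROVED_APPS = {"corporate-mail", "erp-client", "vpn-agent"}

def admit(device_id, app_name):
    """Allow traffic only when both the device and the application are whitelisted.

    Anything not explicitly approved is denied, the reverse of the blacklist
    approach of enumerating known-bad items.
    """
    return device_id in APPROVED_DEVICES and app_name in APPROVED_APPS

if __name__ == "__main__":
    print(admit("00:1a:2b:3c:4d:5e", "corporate-mail"))     # True: both approved
    print(admit("00:1a:2b:3c:4d:5e", "file-sharing-p2p"))   # False: unknown application
    print(admit("66:77:88:99:aa:bb", "corporate-mail"))     # False: unknown device

The point of the sketch is simply that a whitelist fails closed: when a new threat appears, it is refused by default rather than admitted until a signature has been written for it.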
\n Network Security Best Practices \n So, how do you either “ clean and tighten up ” your exist-\ning network or design a new one that can stand up to \nthe inevitable onslaught of attacks? Let’s look at some \nbasics. Consider the diagram shown in Figure 3.1 . \n The illustration in Figure 3.1 shows what could be a \ntypical network layout. Users outside the DMZ approach \nthe network via a secure (HTTPS) Web or VPN connec-\ntion. They are authenticated by the perimeter firewall \nand handed off to either a Web server or a VPN gateway. \nIf allowed to pass, they can then access resources inside \nthe network. \n If you’re the administrator of an organization that has \nonly, say, a couple dozen users with whom to contend, \nyour task (and the illustration layout) will be relatively \neasy to manage. But if you have to manage several hun-\ndred (or several thousand) users, the complexity of your \ntask increases by an order of magnitude. That makes a \ngood security policy an absolute necessity. \n 9 . SECURITY POLICIES \n Like the tedious prep work before painting a room, organ-\nizations need a good, detailed, and well-written security \npolicy. Not something that should be rushed through \n FIGURE 3.1 Network diagram. \n" }, { "page_number": 79, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n46\n “ just to get it done, ” your security policy should be well \nthought out; in other words, the “ devil is in the details. ” \nYour security policy is designed to get everyone involved \nwith your network “ thinking along the same lines. ” \n The policy is almost always a work in progress. It \nmust evolve with technology, especially those technolo-\ngies aimed at surreptitiously getting into your system. \nThe threats will continue to evolve, as will the systems \ndesigned to hold them at bay. \n A good security policy isn’t always a single document; \nrather, it is a conglomeration of policies that address \nspecific areas, such as computer and network use, forms \nof authentication, email policies, remote/mobile tech-\nnology use, and Web surfing policies. It should be writ-\nten in such a way that, while comprehensive, it can be \neasily understood by those it affects. Along those lines, \nyour policy doesn’t have to be overly complex. If you \nhand new employees something that resembles War \nand Peace in size and tell them they’re responsible for \nknowing its content, you can expect to have continued \nproblems maintaining good network security awareness. \nKeep it simple. \n First, you need to draft some policies that define your \nnetwork and its basic architecture. A good place to start \nis by asking the following questions: \n ● What kinds of resources need to be protected (user \nfinancial or medical data, credit-card information, \netc.)? \n ● How many users will be accessing the network on \nthe inside (employees, contractors, etc.)? \n ● Will there need to be access only at certain times or \non a 24/7 basis (and across multiple time zones and/\nor internationally)? \n ● What kind of budget do I have? \n ● Will remote users be accessing the network, and if \nso, how many? \n ● Will there be remote sites in geographically distant \nlocations (requiring a failsafe mechanism, such \nas replication, to keep data synched across the \nnetwork)? 
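One way to keep the answers to these questions from going stale in a binder is to record them in a machine-readable form that later scripts (audit checks, firewall rule generators, capacity reviews) can consume. The field names and sample values below are purely illustrative; no standard schema is implied.

# A hypothetical, machine-readable summary of the policy-scoping answers.
network_policy = {
    "protected_resources": ["customer_financial_data", "credit_card_numbers"],
    "internal_users": 250,
    "access_hours": "24x7",             # or, e.g., "business_hours_eastern"
    "annual_budget_usd": 150000,
    "remote_users": 40,
    "remote_sites": [
        {"location": "singapore", "replication_required": True},
    ],
}

def requires_replication(policy):
    """Example of a later script consuming the answers: does any remote site
    need a replication/failover mechanism budgeted and tested?"""
    return any(site["replication_required"] for site in policy["remote_sites"])

if __name__ == "__main__":
    print(requires_replication(network_policy))   # True for the sample answers above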
\n Next, you should spell out responsibilities for secu-\nrity requirements, communicate your expectations to \nyour users (one of the weakest links in any security \npolicy), and lay out the role(s) for your network admin-\nistrator. It should list policies for activities such as Web \nsurfing, downloading, local and remote access, and types \nof authentication. You should address issues such as add-\ning users, assigning privileges, dealing with lost tokens \nor compromised passwords, and under what circum-\nstances you will remove users from the access database. \n You should establish a security team (sometimes \nreferred to as a “ tiger team ” ) whose responsibility it will \nbe to create security policies that are practical, workable, \nand sustainable. They should come up with the best plan \nfor implementing these policies in a way that addresses \nboth network resource protection and user friendliness. \nThey should develop plans for responding to threats as \nwell as schedules for updating equipment and software. \nAnd there should be a very clear policy for handling \nchanges to overall network security — the types of con-\nnections through your firewall that will and will not be \nallowed. This is especially important because you don’t \nwant an unauthorized user gaining access, reaching into \nyour network, and simply taking files or data. \n 10 . RISK ANALYSIS \n You should have some kind of risk analysis done to \ndetermine, as near as possible, the risks you face with the \nkind of operations you conduct (ecommerce, classified/\nproprietary information handling, partner access, or the \nlike). Depending on the determined risk, you might need \nto rethink your original network design. Though a simple \nextranet/intranet setup with mid-level firewall protection \nmight be okay for a small business that doesn’t have \nmuch to steal, that obviously won’t work for a com-\npany that deals with user financial data or proprietary/\nclassified information. In that case, what might be \nneeded is a tiered system in which you have a “ corporate \nside ” (on which things such as email, intranet access, \nand regular Internet access are handled) and a separate, \nsecure network not connected to the Internet or corpo-\nrate side. These networks can only be accessed by a user \non a physical machine, and data can only be moved to \nthem by “ sneaker-net ” physical media (scanned for \nviruses before opening). These networks can be used \nfor data systems such as test or lab machines (on which, \nfor example, new software builds are done and must be \nmore tightly controlled, to prevent inadvertent corruption \nof the corporate side), or networks on which the storage \nor processing of proprietary, business-critical, or classi-\nfied information are handled. In Department of Defense \nparlance, these are sometimes referred to as red nets or \n black nets. \n Vulnerability Testing \n Your security policy should include regular vulnerabil-\nity testing. Some very good vulnerability testing tools, \nsuch as WebInspect, Acunetix, GFI LANguard, Nessus, \n" }, { "page_number": 80, "text": "Chapter | 3 Preventing System Intrusions\n47\n HFNetChk, and Tripwire, allow you to conduct your \nown security testing. Furthermore, there are third-party \ncompanies with the most advanced suite of testing tools \navailable that can be contracted to scan your network for \nopen and/or accessible ports, weaknesses in firewalls, \nand Web site vulnerability. 
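Scanners such as Nessus or GFI LANguard do far more than this, but the heart of one of their simplest checks, whether well-known ports answer at all, can be sketched in a few lines. The target address below is a placeholder; run even this kind of trivial probe only against systems you are authorized to test.

import socket

TARGET = "192.0.2.10"    # placeholder address; substitute a host you own and may test
COMMON_PORTS = {21: "FTP", 22: "SSH", 23: "Telnet", 25: "SMTP", 80: "HTTP", 443: "HTTPS"}

def port_is_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port, service in sorted(COMMON_PORTS.items()):
        state = "open" if port_is_open(TARGET, port) else "closed/filtered"
        print("%s:%-5d (%s) %s" % (TARGET, port, service, state))

A dedicated scanner adds a vulnerability database, service fingerprinting, and reporting on top of this; the value of the sketch is only to show that there is no magic in the probing itself.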
\n Audits \n You should also factor in regular, detailed audits of all \nactivities, with emphasis on those that seem to be near or \noutside established norms. For example, audits that reveal \nhigh rates of data exchanges after normal business hours, \nwhen that kind of traffic would not normally be expected, \nis something that should be investigated. Perhaps, after \nchecking, you’ll find that it’s nothing more than an \nemployee downloading music or video files. But the \npoint is that your audit system saw the increase in traffic \nand determined it to be a simple Internet use policy viola-\ntion rather than someone siphoning off more critical data. \n There should be clearly established rules for deal-\ning with security, use, and/or policy violations as well as \nattempted or actual intrusions. Trying to figure out what \nto do after the intrusion is too late. And if an intrusion \ndoes occur, there should be a clear-cut system for deter-\nmining the extent of damage; isolation of the exploited \napplication, port, or machine; and a rapid response to \nclosing the hole against further incursions. \n Recovery \n Your plan should also address the issue of recovery \nafter an attack has occurred. You need to address issues \nsuch as how the network will be reconfigured to close \noff the exploited opening. This might take some time, \nsince the entry point might not be immediately discern-\nable. There has to be an estimate of damage — what was \ntaken or compromised, was malicious code implanted \nsomewhere, and, if so, how to most efficiently extract it \nand clean the affected system. In the case of a virus in a \ncompany’s email system, the ability to send and receive \nemail could be halted for days while infected systems \nare rebuilt. And there will have to be discussions about \nhow to reconstruct the network if the attack decimated \nfiles and systems. \n This will most likely involve more than simply rein-\nstalling machines from archived backups. Because the \ncompromise will most likely affect normal business \noperations, the need to expedite the recovery will ham-\nper efforts to fully analyze just what happened. \n This is the main reason for preemptively writing a \ndisaster recovery plan and making sure that all depart-\nments are represented in its drafting. However, like the \nnetwork security policy itself, the disaster recovery plan \nwill also be a work in progress that should be reviewed \nregularly to ensure that it meets the current needs. \nThings such as new threat notifications, software patches \nand updates, vulnerability assessments, new application \nrollouts, and employee turnover all have to be addressed. \n 11 . TOOLS OF YOUR TRADE \n Though the tools available to people seeking unauthor-\nized entry into your domain are impressive, you also \nhave a wide variety of tools to help keep them out. \nBefore implementing a network security strategy, how-\never, you must be acutely aware of the specific needs of \nthose who will be using your resources. \n Simple antispyware and antispam tools aren’t \nenough. In today’s rapidly changing software environ-\nment, strong security requires penetration shielding, \nthreat signature recognition, autonomous reaction to \nidentified threats, and the ability to upgrade your tools \nas the need arises. \n The following discussion talks about some of the \nmore common tools you should consider adding to your \narsenal. 
\n Firewalls \n Your first line of defense should be a good firewall, or \nbetter yet, a system that effectively incorporates sev-\neral security features in one. Secure Firewall (formerly \nSidewinder) from Secure Computing is one of the \nstrongest and most secure firewall products available, \nand as of this writing it has never been successfully \nhacked. It is trusted and used by government and defense \nagencies. Secure Firewall combines the five most neces-\nsary security systems — firewall, antivirus/spyware/spam, \nvirtual private network (VPN), application filtering, and \nintrusion prevention/detection systems — into a single \nappliance. \n Intrusion Prevention Systems \n A good intrusion prevention system (IPS) is a vast improve-\nment over a basic firewall in that it can, among other things, \nbe configured with policies that allow it to make autono-\nmous decisions as to how to deal with application-level \nthreats as well as simple IP address or port-level attacks. \n" }, { "page_number": 81, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n48\n IPS products respond directly to incoming threats in a \nvariety of ways, from automatically dropping (extracting) \nsuspicious packets (while still allowing legitimate ones to \npass) to, in some cases, placing an intruder into a “ quaran-\ntine ” file. IPS, like an application layer firewall, can be con-\nsidered another form of access control in that it can make \npass/fail decisions on application content. \n For an IPS to be effective, it must also be very good \nat discriminating between a real threat signature and \none that looks like but isn’t one (false positive). Once a \nsignature interpreted to be an intrusion is detected, the \nsystem must quickly notify the administrator so that the \nappropriate evasive action can be taken. The following \nare types of IPS: \n ● Network-based. Network-based IPSs create a series \nof choke points in the enterprise that detect suspected \nintrusion attempt activity. Placed inline at their \nneeded locations, they invisibly monitor network traf-\nfic for known attack signatures that they then block. \n ● Host-based. These systems don’t reside on the \nnetwork per se but rather on servers and individual \nmachines. They quietly monitor activities and requests \nfrom applications, weeding out actions deemed \nprohibited in nature. These systems are often very \ngood at identifying post-decryption entry attempts. \n ● Content-based. These IPSs scan network packets, \nlooking for signatures of content that is unknown \nor unrecognized or that has been explicitly labeled \nthreatening in nature. \n ● Rate-based. These IPSs look for activity that falls \noutside the range of normal levels, such as activity \nthat seems to be related to password cracking and \nbrute-force penetration attempts, for example. 
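To illustrate the content-based category above in miniature, a signature engine is at heart a search for known byte patterns in traffic. The two signatures below are invented for the example; real IPS products ship curated databases of thousands of patterns and match them with protocol-aware parsing rather than plain substring search.

# Invented byte-pattern signatures, for illustration only.
SIGNATURES = {
    "example-nop-sled": b"\x90" * 16,        # long run of x86 NOPs, common in shellcode
    "example-sql-probe": b"' OR '1'='1",     # classic SQL injection probe string
}

def match_signatures(payload):
    """Return the names of all signatures found in a packet payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

if __name__ == "__main__":
    benign = b"GET /index.html HTTP/1.1\r\nHost: www.example.com\r\n\r\n"
    suspicious = b"GET /login.php?user=admin' OR '1'='1 HTTP/1.1\r\n\r\n"
    for packet in (benign, suspicious):
        hits = match_signatures(packet)
        print("ALERT" if hits else "clean", hits)

A rate-based check, by contrast, would ignore packet contents entirely and simply count events per source over a sliding window, flagging any source that exceeds a configured ceiling.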
\n When searching for a good IPS, look for one that \nprovides, at minimum: \n ● Robust protection for your applications, host systems, \nand individual network elements against exploita-\ntion of vulnerability-based threats as “ single-bullet \nattacks, ” Trojan horses, worms, botnets, and surrepti-\ntious creation of “ back doors ” in your network \n ● Protection against threats that exploit vulnerabilities \nin specific applications such as Web services, mail, \nDNS, SQL, and any Voice over IP (VoIP) services \n ● Detection and elimination of spyware, phishing, and \nanonymizers (tools that hide a source computer’s \nidentifying information so that Internet activity can \nbe undertaken surreptitiously) \n ● Protection against brute-force and DoS attacks, \napplication scanning, and flooding \n ● A regular method of updating threat lists and \nsignatures \n Application Firewalls \n Application firewalls (AFs) are sometimes confused with \nIPSs in that they can perform IPS-like functions. But an \nAF is specifically designed to limit or deny an applica-\ntion’s level of access to a system’s OS — in other words, \nclosing any openings into a computer’s OS to deny the \nexecution of harmful code within an OS’s structure. AFs \nwork by looking at applications themselves, monitoring \nthe kind of data flow from an application for suspicious \nor administrator-blocked content from specific Web \nsites, application-specific viruses, and any attempt to \nexploit an identified weakness in an application’s archi-\ntecture. Though AF systems can conduct intrusion pre-\nvention duties, they typically employ proxies to handle \nfirewall access control and focus on traditional firewall-\ntype functions. Application firewalls can detect the sig-\nnatures of recognized threats and block them before they \ncan infect the network. \n Windows ’ version of an application firewall, called \nData Execution Prevention (DEP), prevents the execution \nof any code that uses system services in such a way that \ncould be deemed harmful to data or Virtual Memory (VM) . \n It does this by considering RAM data as nonexecutable — in \nessence, refusing to run new code coming from the data-\nonly area of RAM, since any harmful or malicious code \nseeking to damage existing data would have to run from \nthis area. \n The Macintosh Operating System (MacOS) Version \n10.5.x also includes a built-in application firewall as a \nstandard feature. The user can configure it to employ \ntwo-layer protection in which installing network-aware \napplications will result in an OS-generated warning that \nprompts for user authorization of network access. If \nauthorized, MacOS will digitally sign the application in \nsuch a way that subsequent application activity will not \nprompt for further authorization. Updates invalidate the \noriginal certificate, and the user will have to revalidate \nbefore the application can run again. \n The Linux OS has, for example, an application firewall \ncalled AppArmor that allows the admin to create and \nlink to every application a security policy that restricts \nits access capabilities. \n Access Control Systems \n Access control systems (ACSs) rely on administrator-\ndefined rules that allow or restrict user access to protected \n" }, { "page_number": 82, "text": "Chapter | 3 Preventing System Intrusions\n49\n network resources. These access rules can, for example, \nrequire strong user authentication such as tokens or bio-\nmetric devices to prove the identity of users requesting \naccess. 
They can also restrict access to various network \nservices based on time of day or group need. \n Some ACS products allow for the creation of an \n access control list (ACL), which is a set of rules that \ndefine security policy. These ACLs contain one or more \n access control entries (ACEs), which are the actual \nrule definitions themselves. These rules can restrict \naccess by specific user, time of day, IP address, func-\ntion (department, management level, etc.), or specific \nsystem from which a logon or access attempt is being \nmade. \n A good example of an ACS is SafeWord by Aladdin \nKnowledge Systems. SafeWord is considered a two-factor \nauthentication system in that it uses what the user knows \n(such as a personal identification number, or PIN) and \nwhat the user has (such as a one-time passcode, or OTP, \ntoken) to strongly authenticate users requesting net-\nwork access. SafeWord allows administrators to design \ncustomized access rules and restrictions to network \nresources, applications, and information. \n In this scheme, the tokens are a key component. The \ntoken’s internal cryptographic key algorithm is made \n “ known ” to an authentication server when the token’s \nfile is imported into a central database. \n When the token is assigned to a user, its serial \nnumber is linked to that user in the user’s record. On \nmaking an access request, the authentication server \nprompts the user to enter a username and the OTP gen-\nerated by the token. If a PIN was also assigned to that \nuser, she must either prepend or append that PIN to the \ntoken-generated passcode. As long as the authentication \nserver receives what it expects, the user is granted what-\never access privileges she was assigned. \n Unified Threat Management \n The latest trend to emerge in the network intrusion pre-\nvention arena is referred to as unified threat manage-\nment , or UTM. UTM systems are multilayered and \nincorporate several security technologies into a single \nplatform, often in the form of a plug-in appliance. UTM \nproducts can provide such diverse capabilities as anti-\nvirus, VPN, firewall services, and antispam as well as \nintrusion prevention. \n The biggest advantages of a UTM system are its \nease of operation and configuration and the fact that its \nsecurity features can be quickly updated to meet rapidly \nevolving threats. \n Sidewinder by Secure Computing is a UTM sys-\ntem that was designed to be flexible, easily and quickly \nadaptable, and easy to manage. It incorporates firewall, \nVPN, trusted source, IPS, antispam and antivirus, URL \nfiltering, SSL decryption, and auditing/reporting. \n Other UTM systems include Symantec’s Enterprise \nFirewall and Gateway Security Enterprise Firewall App-\nliance, Fortinet, LokTek’s AIRlok Firewall Appliance, \nand SonicWall’s NSA 240 UTM Appliance, to name \na few. \n 12 . CONTROLLING USER ACCESS \n Traditionally users — also known as employees — have \nbeen the weakest link in a company’s defensive armor. \nThough necessary to the organization, they can be a \nnightmare waiting to happen to your network. How do \nyou let them work within the network while controlling \ntheir access to resources? You have to make sure your \nsystem of user authentication knows who your users are. \n Authentication, Authorization, and \nAccounting \n Authentication is simply proving that a user’s identity \nclaim is valid and authentic. Authentication requires \nsome form of “ proof of identity. 
” In network technolo-\ngies, physical proof (such as a driver’s license or other \nphoto ID) cannot be employed, so you have to get some-\nthing else from a user. That typically means having the \nuser respond to a challenge to provide genuine creden-\ntials at the time he requests access. \n For our purposes, credentials can be something the \nuser knows, something the user has, or something they \nare. Once they provide authentication, there also has \nto be authorization, or permission to enter. Finally, \nyou want to have some record of users ’ entry into your \nnetwork — username, time of entry, and resources. That is \nthe accounting side of the process. \n What the User Knows \n Users know a great many details about their own lives —\n birthdays, anniversaries, first cars, their spouse’s name —\n and many will try to use these nuggets of information as \na simple form of authentication. What they don’t realize \nis just how insecure those pieces of information are. \n In network technologies, these pieces of informa-\ntion are often used as fixed passwords and PINs because \nthey’re easy to remember. Unless some strict guidelines \nare established on what form a password or PIN can take \n" }, { "page_number": 83, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n50\n (for example, a minimum number of characters or a mix-\nture of letters and numbers), a password will offer little \nto no real security. \n Unfortunately, to hold down costs, some organiza-\ntions allow users to set their own passwords and PINs \nas credentials, then rely on a simple challenge-response \nmechanism in which these weak credentials are pro-\nvided to gain access. Adding to the loss of security is the \nfact that not only are the fixed passwords far too easy \nto guess, but because the user already has too much to \nremember, she writes them down somewhere near the \ncomputer she uses (often in some “ cryptic ” scheme to \nmake it more difficult to guess). To increase the effec-\ntiveness of any security system, that system needs to \nrequire a much stronger form of authentication. \n What the User Has \n The most secure means of identifying users is by a \ncombination of (1) hardware device in their possession \nthat is “ known ” to an authentication server in your net-\nwork, coupled with (2) what they know. A whole host \nof devices available today — tokens, smart cards, biomet-\nric devices — are designed to more positively identify a \nuser. Since it’s my opinion that a good token is the most \nsecure of these options, I focus on them here. \n Tokens \n A token is a device that employs an encrypted key for \nwhich the encryption algorithm — the method of gener-\nating an encrypted password — is known to a network’s \nauthentication server. There are both software and hard-\nware tokens. The software tokens can be installed on \na user’s desktop system, in their cellular phone, or on \ntheir smart phone. The hardware tokens come in a vari-\nety of form factors, some with a single button that both \nturns the token on and displays its internally generated \npasscode; others with a more elaborate numerical key-\npad for PIN input. If lost or stolen, tokens can easily \nbe removed from the system, quickly rendering them \ncompletely ineffective. And the passcodes they generate \nare of the “ one-time-passcode, ” or OTP, variety, mean-\ning that a generated passcode expires once it’s been \nused and cannot be used again for a subsequent logon \nattempt. 
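To make the mechanics less abstract, the sketch below derives a six-digit passcode from a shared secret and the current time interval, broadly in the spirit of the time-synchronous operation described next (and of the standard TOTP construction). The secret value and 30-second step are placeholders; commercial tokens use their vendors' own parameters and provisioning, and a real server would also tolerate slight clock drift.

import hashlib
import hmac
import struct
import time

STEP_SECONDS = 30    # token and server both round time down to 30-second intervals

def one_time_code(secret, for_time=None, digits=6):
    """Derive a short numeric passcode from a shared secret and the current time step."""
    for_time = time.time() if for_time is None else for_time
    counter = int(for_time) // STEP_SECONDS
    msg = struct.pack(">Q", counter)                        # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    shared_secret = b"example-secret-provisioned-to-token-and-server"   # placeholder only
    token_code = one_time_code(shared_secret)     # computed on the token
    server_code = one_time_code(shared_secret)    # computed independently on the server
    print(token_code, "accepted" if token_code == server_code else "rejected")

Because both sides compute the code independently from the secret and the clock, the passcode never has to be transmitted in advance, and a captured code becomes useless once its time step has passed.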
\n Tokens are either programmed onsite with token pro-\ngramming software or offsite at the time they are ordered \nfrom their vendor. During programming, functions such \nas a token’s cryptographic key, password length, whether \na PIN is required, and whether it generates passwords \nbased on internal clock timing or user PIN input are \nwritten into the token’s memory. When programming \nis complete, a file containing this information and the \ntoken’s serial number are imported into the authentica-\ntion server so that the token’s characteristics are known. \n A token is assigned to a user by linking its serial \nnumber to the user’s record, stored in the system data-\nbase. When a user logs onto the network and needs \naccess to, say, her email, she is presented with some \nchallenge that she must answer using her assigned token. \n Tokens operate in one of three ways: time synchro-\nnous, event synchronous, or challenge-response (also \nknown as asynchronous). \n Time Synchronous \n In time synchronous operation, the token’s internal \nclock is synched with the network’s clock. Each time \nthe token’s button is pressed, it generates a passcode in \nhash form, based on its internal timekeeping. As long \nas the token’s clock is synched with the network clock, \nthe passcodes are accepted. In some cases (for example, \nwhen the token hasn’t been used for some time or its \nbattery dies), the token gets out of synch with the system \nand needs to be resynched before it can be used again. \n Event Synchronous \n In event synchronous operations, the server maintains \nan ordered passcode sequence and determines which \npasscode is valid based on the current location in that \nsequence. \n Challenge-Response \n In challenge-response, a challenge, prompting for user-\nname, is issued to the user by the authentication server \nat the time of access request. Once the user’s name is \nentered, the authentication server checks to see what \nform of authentication is assigned to that user and issues \na challenge back to the user. The user inputs the chal-\nlenge into the token, then enters the token’s generated \nresponse to the challenge. As long as the authentication \nserver receives what it expected, authentication is suc-\ncessful and access is granted. \n The User Is Authenticated, But Is She \nAuthorized? \n Authorization is independent of authentication. A \nuser can be permitted entry into the network but not \nbe authorized to access a resource. You don’t want an \nemployee having access to HR information or a corporate \n" }, { "page_number": 84, "text": "Chapter | 3 Preventing System Intrusions\n51\n partner getting access to confidential or proprietary \ninformation. \n Authorization requires a set of rules that dictate the \nresources to which a user will have access. These per-\nmissions are established in your security policy. \n Accounting \n Say that our user has been granted access to the \nrequested resource. But you want (or in some cases are \nrequired to have) the ability to call up and view activity \nlogs to see who got into what resource. This information \nis mandated for organizations that deal with user finan-\ncial or medical information or DoD classified informa-\ntion or that go through annual inspections to maintain \ncertification for international operations. \n Accounting refers to the recording, logging, and \narchiving of all server activity, especially activity related \nto access attempts and whether they were successful. 
\nThis information should be written into audit logs that \nare stored and available any time you want or need to \nview them. The audit logs should contain, at minimum, \nthe following information: \n ● The user’s identity \n ● The date and time of the request \n ● Whether the request passed authentication and was \ngranted \n Any network security system you put into place \nshould store, or archive, these logs for a specified period \nof time and allow you to determine for how long these \narchives will be maintained before they start to age out \nof the system. \n Keeping Current \n One of the best ways to stay ahead is to not fall behind \nin the first place. New systems with increasing sophisti-\ncation are being developed all the time. They can incor-\nporate a more intelligent and autonomous process in the \nway the system handles a detected threat, a faster and \nmore easily accomplished method for updating threat \nfiles, and configuration flexibility that allows for very \nprecise customization of access rules, authentication \nrequirements, user role assignment, and how tightly it \ncan protect specific applications. \n Register for newsletters, attend seminars and net-\nwork security shows, read white papers, and, if needed, \ncontract the services of network security specialists. \nThe point is, you shouldn’t go cheap on network security. \nThe price you pay to keep ahead will be far less than the \nprice you pay to recover from a security breach or attack. \n 13 . CONCLUSION \n Preventing network intrusions is no easy task. Like cops on \nthe street — usually outnumbered and underequipped com-\npared to the bad guys — you face an enemy with determina-\ntion, skill, training, and a frightening array of increasingly \nsophisticated tools for hacking their way through your best \ndefenses. And no matter how good your defenses are today, \nit’s only a matter of time before a tool is developed that can \npenetrate them. If you know that ahead of time, you’ll be \nmuch more inclined to keep a watchful eye for what “ they ” \nhave and what you can use to defeat them. \n Your best weapon is a logical, thoughtful, and nimble \napproach to network security. You have to be nimble — to \nevolve and grow with changes in technology, never being \ncontent to keep things as they are because “ Hey, they’re \nworking just fine. ” Today’s “ just fine ” will be tomorrow’s \n “ What the hell happened? ” \n Stay informed. There is no shortage of information \navailable to you in the form of white papers, seminars, \ncontract security specialists, and online resources, all \ndealing with various aspects of network security. \n Have a good, solid, comprehensive, yet easy-to-\nunderstand network security policy in place. The very \nprocess of developing one will get all involved parties \nthinking about how to best secure your network while \naddressing user needs. When it comes to your users, you \nsimply can’t overeducate them where network security \nawareness is concerned. The more they know, the better \nequipped they’ll be to act as allies against, rather than \naccomplices of, the hoards of crackers looking to steal, \ndamage, hobble, or completely cripple your network. \n Do your research and invest in good, multipurpose \nnetwork security systems. 
Select systems that are easy to \ninstall and implement, are adaptable and quickly config-\nurable, can be customized to suit your needs of today as \nwell as tomorrow, and are supported by companies that \nkeep pace with current trends in cracker technology. \n" }, { "page_number": 85, "text": "This page intentionally left blank\n" }, { "page_number": 86, "text": "53\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n Guarding Against Network Intrusions \n Tom Chen \n Swansea University \n Patrick J. Walsh \n eSoft Inc. \n Chapter 4 \n Virtually all computers today are connected to the \nInternet through dialup, broadband, Ethernet, or wire-\nless technologies. The reason for this Internet ubiquity \nis simple: Applications depending on the network, such \nas email, Web, remote login, instant messaging, and \nVoIP, have become essential to the computing experi-\nence. Unfortunately, the Internet exposes computer users \nto risks from a wide variety of possible attacks. Users \nhave much to lose — their privacy, valuable data, control \nof their computers, and possibly theft of their identities. \nThe network enables attacks to be carried out remotely, \nwith relative anonymity and low risk of traceability. \n The nature of network intrusions has evolved over the \nyears. A few years ago, a major concern was fast worms \nsuch as Code Red, Nimda, Slammer, and Sobig. More \nrecently, concerns shifted to spyware, Trojan horses, and \nbotnets. Although these other threats still continue to be \nmajor problems, the Web has become the primary vector \nfor stealthy attacks today. 1 \n 1 . TRADITIONAL RECONNAISSANCE \nAND ATTACKS \n Traditionally, attack methods follow sequential steps \nanalogous to physical attacks, as shown in Figure 4.1 : \nreconnaissance, compromise, and cover-up. 2 Here we \nare only addressing attacks directed at a specific target \nhost. Some other types of attacks, such as worms, are not \ndirected at specific targets. Instead, they attempt to hit as \nmany targets as quickly as possible without caring who \nor what the targets are. \n In the first step of a directed attack, the attacker per-\nforms reconnaissance to learn as much as possible about \nthe chosen target before carrying out an actual attack. \nA thorough reconnaissance can lead to a more effective \nattack because the target’s weaknesses can be discovered. \nOne might expect the reconnaissance phase to possibly \ntip off the target about an impending attack, but scans and \nprobes are going on constantly in the “ background noise ” \nof network traffic, so systems administrators might ignore \nattack probes as too troublesome to investigate. \n Through pings and traceroutes, an attacker can dis-\ncover IP addresses and map the network around the \ntarget. Pings are ICMP echo request and echo reply \nmessages that verify a host’s IP address and availabil-\nity. Traceroute is a network mapping utility that takes \nadvantage of the time to live (TTL) field in IP packets. It \nsends out packets with TTL \u0003 1, then TTL \u0003 2, and so \non. When the packets expire, the routers along the pack-\nets ’ path report that the packets have been discarded, \nreturning ICMP “ time exceeded ” messages and thereby \n 1 Dean Turner, et al., Symantec Global Internet Security Threat \nReport: Trends for July – December 2007, available at www.symantec.\ncom (date of access: July, 1, 2008). 
\n 2 Ed Skoudis, Counter Hack Reloaded: A Step-by-Step Guide to \nComputer Attacks and Effective Defenses, 2nd ed ., Prentice Hall, 2006. \nReconnaissance to\nlearn about target\nCompromise of target\nCover up and maintain\ncovert control\n FIGURE 4.1 Steps in directed attacks. \n" }, { "page_number": 87, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n54\nallowing the traceroute utility to learn the IP addresses \nof routers at a distance of one hop, two hops, and so on. \n Port scans can reveal open ports. Normally, a host \nmight be expected to have certain well-known ports open, \nsuch as TCP port 80 (HTTP), TCP port 21 (FTP), TCP \nport 23 (Telnet), or TCP port 25 (SMTP). A host might \nalso happen to have open ports in the higher range. For \nexample, port 12345 is the default port used by the Netbus \nremote access Trojan horse, or port 31337 is the default \nport used by the Back Orifice remote access Trojan horse. \nDiscovery of ports indicating previous malware infections \ncould obviously help an attacker considerably. \n In addition to discovering open ports, the popular \nNMAP scanner ( www.insecure.org/nmap ) can discover \nthe operating system running on a target. NMAP uses a \nlarge set of heuristic rules to identify an operating system \nbased on a target’s responses to carefully crafted TCP/IP \nprobes. The basic idea is that different operating systems \nwill make different responses to probes to open TCP/\nUDP ports and malformed TCP/IP packets. Knowledge \nof a target’s operating system can help an attacker iden-\ntify vulnerabilities and find effective exploits. \n Vulnerability scanning tests a target for the presence \nof vulnerabilities. Vulnerability scanners such as SATAN, \nSARA, SAINT, and Nessus typically contain a database \nof known vulnerabilities that is used to craft probes to a \nchosen target. The popular Nessus tool ( www.nessus.org ) \nhas an extensible plug-in architecture to add checks for \nbackdoors, misconfiguration errors, default accounts and \npasswords, and other types of vulnerabilities. \n In the second step of a directed attack, the attacker \nattempts to compromise the target through one or more \nmethods. Password attacks are common because pass-\nwords might be based on common words or names and are \nguessable by a dictionary attack, although computer sys-\ntems today have better password policies that forbid easily \nguessable passwords. If an attacker can obtain the password \nfile from the target, numerous password-cracking tools are \navailable to carry out a brute-force password attack. In addi-\ntion, computers and networking equipment often ship with \ndefault accounts and passwords intended to help systems \nadministrators set up the equipment. These default accounts \nand passwords are easy to find on the Web (for example, \n www.phenoelit-us.org/dpl/dpl.html ). Occasionally users \nmight neglect to change or delete the default accounts, \noffering intruders an easy way to access the target. \n Another common attack method is an exploit attack \ncode written to take advantage of a specific vulnerability. 3 \nMany types of software, including operating systems and \napplications, have vulnerabilities. In the second half of \n2007, Symantec observed an average of 11.7 vulnerabilities \nper day. 4 Vulnerabilities are published by several organiza-\ntions such as CERT and MITRE as well as vendors such \nas Microsoft through security bulletins. 
MITRE maintains \na database of publicly known vulnerabilities identified by \ncommon vulnerabilities and exposures (CVE) numbers. The \nseverity of vulnerabilities is reflected in the industry-standard \ncommon vulnerability scoring system (CVSS). In the \nsecond half of 2007, Symantec observed that 3% of vulner-\nabilities were highly severe, 61% were medium-severe, and \n36% were low-severe. 5 Furthermore, 73% of vulnerabili-\nties were easily exploitable. For 2007, Microsoft reported \nthat 32% of known vulnerabilities in Microsoft products \nhad publicly available exploit code. 6 Microsoft released 69 \nsecurity bulletins covering 100 unique vulnerabilities. \n Historically, buffer overflows have been the most \ncommon type of vulnerability. 7 They have been popular \nbecause buffer overflow exploits can often be carried out \nremotely and lead to complete compromise of a target. \nThe problem arises when a program has allocated a fixed \namount of memory space (such as in the stack) for stor-\ning data but receives more data than expected. If the vul-\nnerability exists, the extra data will overwrite adjacent \nparts of memory, which could mess up other variables or \npointers. If the extra data is random, the computer might \ncrash or act unpredictably. However, if an attacker crafts \nthe extra data carefully, the buffer overflow could over-\nwrite adjacent memory with a consequence that benefits \nthe attacker. For instance, an attacker might overwrite \nthe return pointer in a stack, causing the program control \nto jump to malicious code inserted by the attacker. \n An effective buffer overflow exploit requires techni-\ncal knowledge of the computer architecture and operat-\ning system, but once the exploit code is written, it can be \nreused again. Buffer overflows can be prevented by the \nprogrammer or compiler performing bounds checking or \nduring runtime. Although C/C \u0002\u0002 has received a good \ndeal of blame as a programming language for not having \nbuilt-in checking that data written to arrays stays within \n 3 S. McClure, J. Scambray, G. Kutz, Hacking Exposed, third ed., \n McGraw-Hill, 2001. \n 4 Dean Turner, et al., Symantec Global Internet Security Threat \nReport: Trends for July – December 2007, available at www.symantec.\ncom (date of access: July, 1, 2008). \n 5 Dean Turner, et al., Symantec Global Internet Security Threat \nReport: Trends for July – December 2007, available at www.symantec.\ncom (date of access: July, 1, 2008). \n 6 B. Arsenault and V. Gullutto, Microsoft Security Intelligence Report: \nJuly – December 2007, available at www.microsoft.com (date of access: \nJuly 1, 2008). \n 7 J. Foster, V. Osipov, and N. Bhalla, Buffer Overfl ow Attacks: Detect, \nExploit, Prevent, Syngress, 2005. \n" }, { "page_number": 88, "text": "Chapter | 4 Guarding Against Network Intrusions\n55\nbounds, buffer overflow vulnerabilities appear in a wide \nvariety of other programs, too. \n Structured Query Language injection is a type of vul-\nnerability relevant to Web servers with a database back-\nend. 8 SQL is an internationally standardized interactive and \nprogramming language for querying data and managing \ndatabases. Many commercial database products support \nSQL, sometimes with proprietary extensions. Web applica-\ntions often take user input (usually from a Web form) and \npass the input into an SQL statement. 
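As a minimal illustration (the table, column names, and input value are hypothetical, and SQLite stands in for whatever database the application actually uses), compare building the statement by pasting the user's input into the query text with passing it as a bound parameter:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice-password')")
conn.execute("INSERT INTO users VALUES ('bob', 'bob-password')")

user_input = "x' OR '1'='1"    # attacker-supplied value from a Web form

# Vulnerable: the input is pasted into the SQL text, so the quote characters
# close the string literal and the OR clause matches every row in the table.
vulnerable_sql = "SELECT name, secret FROM users WHERE name = '%s'" % user_input
print("concatenated query returns:", conn.execute(vulnerable_sql).fetchall())

# Contrast: the input is passed as a bound parameter, so it is treated purely
# as data, and this particular input matches nothing.
parameterized_sql = "SELECT name, secret FROM users WHERE name = ?"
print("parameterized query returns:", conn.execute(parameterized_sql, (user_input,)).fetchall())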
An SQL injection \nvulnerability can arise if user input is not properly filtered \nfor string literal escape characters, which can allow an \nattacker to craft input that is interpreted as embedded SQL \nstatements and thereby manipulate the application running \non the database. \n Servers have been attacked and compromised by \ntoolkits designed to automate customized attacks. For \nexample, the MPack toolkit emerged in early 2007 and \nis sold commercially in Russia, along with technical \nsupport and regular software updates. It is loaded into \na malicious or compromised Web site. When a visitor \ngoes to the site, a malicious code is launched through \nan iframe (inline frame) within the HTML code. It can \nlaunch various exploits, expandable through modules, \nfor vulnerabilities in Web browsers and client software. \n Metasploit ( www.metasploit.com ) is a popular Perl-\nbased tool for developing and using exploits with an \neasy-to-use Web or command-line interface. Different \nexploits can be written and loaded into Metasploit and \nthen directed at a chosen target. Exploits can be bundled \nwith a payload (the code to run on a compromised tar-\nget) selected from a collection of payloads. The tool also \ncontains utilities to experiment with new vulnerabilities \nand help automate the development of new exploits. \n Although exploits are commonplace, not all attacks \nrequire an exploit. Social engineering refers to types \nof attacks that take advantage of human nature to com-\npromise a target, typically through deceit. A common \nsocial engineering attack is phishing, used in identity \ntheft. 9 Phishing starts with a lure, usually a spam mes-\nsage that appears to be from a legitimate bank or ecom-\nmerce business. The message attempts to provoke the \nreader into visiting a fraudulent Web site pretending to \nbe a legitimate business. These fraudulent sites are often \nset up by automated phishing toolkits that spoof legiti-\nmate sites of various brands, including the graphics of \nthose brands. The fraudulent site might even have links \nto the legitimate Web site, to appear more valid. Victims \nare thus tricked into submitting valuable personal infor-\nmation such as account numbers, passwords, and Social \nSecurity numbers. \n Other common examples of social engineering are \nspam messages that entice the reader into opening an \nemail attachment. Most people know by now that attach-\nments could be dangerous, perhaps containing a virus or \nspyware, even if they appear to be innocent at first glance. \nBut if the message is sufficiently convincing, such as \nappearing to originate from an acquaintance, even wary \nusers might be tricked into opening an attachment. Social \nengineering attacks can be simple but effective because \nthey target people and bypass technological defenses. \n The third step of traditional directed attacks involves \ncover-up of evidence of the compromise and establish-\nment of covert control. After a successful attack, intrud-\ners want to maintain remote control and evade detection. \nRemote control can be maintained if the attacker has \nmanaged to install any of a number types of malicious \nsoftware: a backdoor such as Netcat; a remote access \nTrojan such as BO2K or SubSeven; or a bot, usually lis-\ntening for remote instructions on an Internet relay chat \n(IRC) channel, such as phatbot. 
\n Intruders obviously prefer to evade detection after a \nsuccessful compromise, because detection will lead the \nvictim to take remedial actions to harden or disinfect the \ntarget. Intruders might change the system logs on the tar-\nget, which will likely contain evidence of their attack. In \nWindows, the main event logs are secevent.evt, sysevent.\nevt, and appevent.evt. A systems administrator looking \nfor evidence of intrusions would look in these files with \nthe built-in Windows Event Viewer or a third-party log \nviewer. An intelligent intruder would not delete the logs \nbut would selectively delete information in the logs to \nhide signs of malicious actions. \n A rootkit is a stealthy type of malicious software ( mal-\nware ) designed to hide the existence of certain processes \nor programs from normal methods of detection. 10 Rootkits \nessentially alter the target’s operating system, perhaps by \nchanging drivers or dynamic link libraries (DLLs) and pos-\nsibly at the kernel level. An example is the kernel-mode \nFU rootkit that manipulates kernel memory in Windows \n2000, XP, and 2003. It consists of a device driver, msdi-\nrectx.sys, that might be mistaken for Microsoft’s DirectX \ntool. The rootkit can hide certain events and processes and \nchange the privileges of running processes. \n 8 D. Litchfi eld, SQL Server Security , McGraw-Hill Osborne, 2003. \n 9 Markus Jakobsson and Steven Meyers, eds., Phishing and \nCountermeasures: Understanding the Increasing Problem of Electronic \nIdentity Theft , Wiley-Interscience, 2006. \n 10 Greg Hoglund and Jamie Butler, Rootkits: Subverting the Windows \nKernel , Addison-Wesley Professional, 2005. \n" }, { "page_number": 89, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n56\n If an intruder has installed malware for covert control, he \nwill want to conceal the communications between himself \nand the compromised target from discovery by network-\nbased intrusion detection systems (IDSs). Intrusion detec-\ntion systems are designed to listen to network traffic and \nlook for signs of suspicious activities. Several conceal-\nment methods are used in practice. Tunneling is a com-\nmonly used method to place packets of one protocol into \nthe payload of another packet. The “ exterior ” packet \nserves a vehicle to carry and deliver the “ interior ” packet \nintact. Though the protocol of the exterior packet is eas-\nily understood by an IDS, the interior protocol can be any \nnumber of possibilities and hence difficult to interpret. \n Encryption is another obvious concealment method. \nEncryption relies on the secrecy of an encryption key \nshared between the intruder and the compromised target. \nThe encryption key is used to mathematically scramble \nthe communications into a form that is unreadable with-\nout the key to decrypt it. Encryption ensures secrecy in \npractical terms but does not guarantee perfect security. \nEncryption keys can be guessed, but the time to guess the \ncorrect key increases exponentially with the key length. \nLong keys combined with an algorithm for periodically \nchanging keys can ensure that encrypted communications \nwill be difficult to break within a reasonable time. \n Fragmentation of IP packets is another means to con-\nceal the contents of messages from IDSs, which often \ndo not bother to reassemble fragments. IP packets may \nnormally be fragmented into smaller packets anywhere \nalong a route and reassembled at the destination. 
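A toy example of the tunneling idea: whatever its protocol, the interior message simply becomes opaque payload bytes of an exterior packet, so an inspection device that parses only the exterior header learns nothing about what is being carried. The header layout below is invented for the illustration and does not correspond to any real protocol.

import struct

# Invented exterior header: 2-byte protocol identifier, 2-byte payload length.
EXT_HEADER = struct.Struct(">HH")
EXT_PROTO_INNOCUOUS = 80    # made-up identifier chosen to look like ordinary Web traffic

def encapsulate(inner_packet):
    """Wrap an arbitrary inner packet as the payload of an exterior packet."""
    return EXT_HEADER.pack(EXT_PROTO_INNOCUOUS, len(inner_packet)) + inner_packet

def inspect_exterior(packet):
    """What a naive inspector sees: only the exterior header fields."""
    proto, length = EXT_HEADER.unpack_from(packet)
    return {"protocol": proto, "payload_bytes": length}

if __name__ == "__main__":
    interior = b"any inner protocol message at all"    # opaque to the exterior parser
    carrier = encapsulate(interior)
    print(inspect_exterior(carrier))                   # header fields only
    print(carrier[EXT_HEADER.size:] == interior)       # True: the interior arrives intact

An IDS that wanted to see the interior traffic would have to know, for every possible carrier, how to peel off the exterior layer, which is exactly the difficulty described above.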
An IDS \ncan become confused with a flood of fragments, bogus \nfragments, or deliberately overlapping fragments. \n 2 . MALICIOUS SOFTWARE \n Malicious software, or malware, continues to be an enor-\nmous problem for Internet users because of its variety \nand prevalence and the level of danger it presents. 11 , 12 , 13 \nIt is important to realize that malware can take many \nforms. A large class of malware is infectious , which \nincludes viruses and worms. Viruses and worms are self-\nreplicating, meaning that they spread from host to host \nby making copies of themselves. Viruses are pieces of \ncode attached to a normal file or program. When the pro-\ngram is run, the virus code is executed and copies itself \nto (or infects) another file or program. It is often said that \nviruses need a human action to spread, whereas worms \nare standalone automated programs. Worms look for \nvulnerable targets across the network and transfer a copy \nof themselves if a target is successfully compromised. \n Historically, several worms have become well known \nand stimulated concerns over the possibility of a fast epi-\ndemic infecting Internet-connected hosts before defenses \ncould stop it. The 1988 Robert Morris Jr. worm infected \nthousands of Unix hosts, at the time a significant por-\ntion of the Arpanet (the predecessor to the Internet). \nThe 1999 Melissa worm infected Microsoft Word docu-\nments and emailed itself to addresses found in a victim’s \nOutlook address book. Melissa demonstrated that email \ncould be a very effective vector for malware distribu-\ntion, and many subsequent worms have continued to use \nemail, such as the 2000 Love Letter worm. In the 2001 –\n 04 interval, several fast worms appeared, notably Code \nRed, Nimda, Klez, SQL Slammer/Sapphire, Blaster, \nSobig, and MyDoom. \n An important feature of viruses and worms is their \ncapability to carry a payload — malicious code that is \nexecuted on a compromised host. The payload can be \nvirtually anything. For instance, SQL Slammer/Sapphire \nhad no payload, whereas Code Red carried an agent to \nperform a denial-of-service (DoS) attack on certain fixed \naddresses. The Chernobyl or CIH virus had one of the \nmost destructive payloads, attempting to overwrite criti-\ncal system files and the system BIOS that is needed for \na computer to boot up. Worms are sometimes used to \ndeliver other malware, such as bots, in their payload. They \nare popular delivery vehicles because of their ability to \nspread by themselves and carry anything in their payload. \n Members of a second large class of malware are \ncharacterized by attempts to conceal themselves. This \nclass includes Trojan horses and rootkits. Worms are \nnot particularly stealthy (unless they are designed to be), \nbecause they are typically indiscriminate in their attacks. \nThey probe potential targets in the hope of compromis-\ning many targets quickly. Indeed, fast-spreading worms \nare relatively easy to detect because of the network con-\ngestion caused by their probes. \n Stealth is an important feature for malware because \nthe critical problem for antivirus software is obviously \ndetection of malware. Trojan horses are a type of mal-\nware that appears to perform a useful function but hides \na malicious function. Thus, the presence of the Trojan \nhorse might not be concealed, but functionality is not \nfully revealed. For example, a video codec could offer \nto play certain types of video but also covertly steal \nthe user’s data in the background. 
In the second half of \n 11 David Harley and David Slade, Viruses Revealed , McGraw-Hill, \n2001. \n 12 Ed Skoudis, Malware: Fighting Malicious Code , Prentice Hall \nPTR, 2004. \n 13 Peter Szor, The Art of Computer Virus Research and Defense , \nAddison-Wesley, 2005. \n" }, { "page_number": 90, "text": "Chapter | 4 Guarding Against Network Intrusions\n57\n2007, Microsoft reported a dramatic increase of 300% in \nthe number of Trojan downloaders and droppers, small \nprograms to facilitate downloading more malware later. 4 \n Rootkits are essentially modifications to the operat-\ning system to hide the presence of files or processes from \nnormal means of detection. Rootkits are often installed \nas drivers or kernel modules. A highly publicized exam-\nple was the extended copy protection (XCP) software \nincluded in some Sony BMG audio CDs in 2005, to pre-\nvent music copying. The software was installed automati-\ncally on Windows PCs when a CD was played. Made by \na company called First 4 Internet, XCP unfortunately con-\ntained a hidden rootkit component that patched the oper-\nating system to prevent it from displaying any processes, \nRegistry entries, or files with names beginning with $sys$ . \nAlthough the intention of XCP was not malicious, there \nwas concern that the rootkit could be used by malware \nwriters to conceal malware. \n A third important class of malware is designed \nfor remote control. This class includes remote access \nTrojans (RATs) and bots. Instead of remote access \nTrojan , RAT is sometimes interpreted as remote admin-\nistration tool because it can be used for legitimate pur-\nposes by systems administrators. Either way, RAT refers \nto a type of software usually consisting of server and \nclient parts designed to enable covert communications \nwith a remote controller. The client part is installed on a \nvictim host and mainly listens for instructions from the \nserver part, located at the controller. Notorious examples \ninclude Back Orifice, Netbus, and Sub7. \n Bots are remote-control programs installed covertly \non innocent hosts. 14 Bots are typically programmed to lis-\nten to IRC channels for instructions from a “ bot herder. ” \nAll bots under control of the same bot herder form a bot-\nnet. Botnets have been known to be rented out for pur-\nposes of sending spam or launching a distributed DoS \n(DDoS) attack. 15 The power of a botnet is proportional to \nits size, but exact sizes have been difficult to discover. \n One of the most publicized bots is the Storm worm, \nwhich has various aliases. Storm was launched in January \n2007 as spam with a Trojan horse attachment. As a bot-\nnet, Storm has shown unusual resilience by working in \na distributed peer-to-peer manner without centralized \ncontrol. Each compromised host connects to a small sub-\nset of the entire botnet. Each infected host shares lists of \nother infected hosts, but no single host has a full list of \nthe entire botnet. The size of the Storm botnet has been \nestimated at more than 1 million compromised hosts, but \nan exact size has been impossible to determine because \nof the many bot variants and active measures to avoid \ndetection. Its creators have been persistent in continually \nupdating its lures with current events and evolving tactics \nto spread and avoid detection. \n Another major class of malware is designed for data \ntheft. This class includes keyloggers and spyware. A key-\nlogger can be a Trojan horse or other form of malware. 
\nIt is designed to record a user’s keystrokes and perhaps \nreport them to a remote attacker. Keyloggers are planted \nby criminals on unsuspecting hosts to steal passwords \nand other valuable personal information. It has also been \nrumored that the Federal Bureau of Investigation (FBI) \nhas used a keylogger called Magic Lantern. \n As the name implies, spyware is stealthy software \ndesigned to monitor and report user activities for the pur-\nposes of learning personal information without the user’s \nknowledge or consent. Surveys have found that spyware \nis widely prevalent on consumer PCs, usually without \nknowledge of the owners. Adware is viewed by some \nas a mildly objectionable form of spyware that spies on \nWeb browsing behavior to target online advertisements \nto a user’s apparent interests. More objectionable forms \nof spyware are more invasive of privacy and raise other \nobjections related to stealthy installation, interference \nwith normal Web browsing, and difficulty of removal. \n Spyware can be installed in a number of stealthy \nways: disguised as a Trojan horse, bundled with a legiti-\nmate software program, delivered in the payload of a \nworm or virus, or downloaded through deception. For \ninstance, a deceptive Web site might pop up a window \nappearing to be a standard Windows dialog box, but \nclicking any button will cause spyware to be down-\nloaded. Another issue is that spyware might or might not \ndisplay an end-user license agreement (EULA) before \ninstallation. If an EULA is displayed, the mention of \nspyware is typically unnoticeable or difficult to find. \n More pernicious forms of spyware can change compu-\nter settings, reset homepages, and redirect the browser to \nunwanted sites. For example, the notorious CoolWebSearch \nchanged homepages to Coolwebsearch.com , rewrote \nsearch engine results, and altered host files, and some \nvariants added links to pornographic and gambling sites \nto the browser’s bookmarks. \n Lures and “ Pull ” Attacks \n Traditional network attacks can be viewed as an “ active ” \napproach in which the attacker takes the initiative of a \n 14 Craig Schiller, et al., Botnets: the Killer Web App , Syngress \nPublishing, 2007. \n 15 David Dittrich, “ Distributed denial of service (DDoS) attacks/\ntools, ” available at http://staff.washington.edu/dittrich/misc/ddos/ (date \nof access: July 1, 2008). \n" }, { "page_number": 91, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n58\nseries of actions directed at a target. Attackers face the \nrisk of revealing their malicious intentions through these \nactions. For instance, port scanning, password guessing, \nor exploit attempts can be readily detected by an IDS as \nsuspicious activities. Sending malware through email \ncan only be seen as an attack attempt. \n Security researchers have observed a trend away \nfrom direct attacks toward more stealthy attacks that \nwait for victims to visit malicious Web sites, as shown \nin Figure 4.2 . 16 The Web has become the primary vector \nfor infecting computers, in large part because email has \nbecome better secured. Sophos discovers a new mali-\ncious Webpage every 14 seconds, on average. 17 \n Web-based attacks have significant advantages for \nattackers. First, they are stealthier and not as “ noisy ” as \nactive attacks, making it easier to continue undetected \nfor a longer time. Second, Web servers have the intel-\nligence to be stealthy. 
For instance, Web servers have \nbeen found that serve up an attack only once per IP \naddress, and otherwise serve up legitimate content. The \nmalicious server remembers the IP addresses of visitors. \nThus, a visitor will be attacked only once, which makes \nthe attack harder to detect. Third, a Web server can serve \nup different attacks, depending on the visitor’s operating \nsystem and browser. \n As mentioned earlier, a common type of attack car-\nried out through the Web is phishing. A phishing site is \ntypically disguised as a legitimate financial organization \nor ecommerce business. During the month of December \n2007, the Anti-Phishing Working Group found 25,328 \nnew unique phishing sites hijacking 144 brands ( www.\nantiphishing.org ). \n Another type of Web-based attack is a malicious site \nthat attempts to download malware through a visitor’s \nbrowser, called a drive-by download . A Web page usually \nloads a malicious script by means of an iframe (inline \nframe). It has been reported that most drive-by down-\nloads are hosted on legitimate sites that have been com-\npromised. For example, in June 2007 more than 10,000 \nlegitimate Italian Web sites were discovered to be com-\npromised with malicious code loaded through iframes. \nMany other legitimate sites are regularly compromised. \n Drive-by downloading through a legitimate site holds \ncertain appeal for attackers. First, most users would be \nreluctant to visit suspicious and potentially malicious sites \nbut will not hesitate to visit legitimate sites in the belief \nthat they are always safe. Even wary Web surfers may be \ncaught off-guard. Second, the vast majority of Web serv-\ners run Apache (approximately 50%) or Microsoft IIS \n(approximately 40%), both of which have vulnerabilities \nthat can be exploited by attackers. Moreover, servers with \ndatabase applications could be vulnerable to SQL injec-\ntion attacks. Third, if a legitimate site is compromised \nwith an iframe, the malicious code might go unnoticed by \nthe site owner for some time. \n Pull-based attacks pose one challenge to attackers: \nThey must attract visitors to the malicious site some-\nhow while avoiding detection by security researchers. \nOne obvious option is to send out lures in spam. Lures \nhave been disguised as email from the Internal Revenue \nService, a security update from Microsoft, or a greeting \ncard. The email attempts to entice the reader to visit a \nlink. On one hand, lures are easier to get through spam \nfilters because they only contain links and not attach-\nments. It is easier for spam filters to detect malware \nattachments than to determine whether links in email \nare malicious. On the other hand, spam filters are eas-\nily capable of extracting and following links from spam. \nThe greater challenge is to determine whether the linked \nsite is malicious. \n 3 . DEFENSE IN DEPTH \n Most security experts would agree with the view that \nperfect network security is impossible to achieve and \nthat any single defense can always be overcome by an \nattacker with sufficient resources and motivation. The \nbasic idea behind the defense-in-depth strategy is to \nhinder the attacker as much as possible with multiple \nlayers of defense, even though each layer might be sur-\nmountable. More valuable assets are protected behind \nmore layers of defense. 
The combination of multiple \nlayers increases the cost for the attacker to be successful, \nand the cost is proportional to the value of the protected \n 16 Joel Scambray, Mike Shema, and Caleb Sima, Hacking Exposed \nWeb Applications , 2nd ed., McGraw-Hill, 2006. \n 17 Sophos, “ Security Threat Report 2008, ” available at http://research.\nsophos.com/sophosthreatreport08 (date of access: July 1, 2008). \nMalicious site\nURL\nSpam\n FIGURE 4.2 Stealthy attacks lure victims to malicious servers. \n" }, { "page_number": 92, "text": "Chapter | 4 Guarding Against Network Intrusions\n59\nassets. Moreover, a combination of multiple layers will \nbe more effective against unpredictable attacks than will \na single defense optimized for a particular type of attack. \n The cost for the attacker could be in terms of addi-\ntional time, effort, or equipment. For instance, by \ndelaying an attacker, an organization would increase \nthe chances of detecting and reacting to an attack in \nprogress. The increased costs to an attacker could deter \nsome attempts if the costs are believed to outweigh the \npossible gain from a successful attack. \n Defense in depth is sometimes said to involve peo-\nple, technology, and operations. Trained security people \nshould be responsible for securing facilities and infor-\nmation assurance. However, every computer user in an \norganization should be made aware of security policies \nand practices. Every Internet user at home should be \naware of safe practices (such as avoiding opening email \nattachments or clicking suspicious links) and the benefits \nof appropriate protection (antivirus software, firewalls). \n A variety of technological measures can be used \nfor layers of protection. These should include firewalls, \nIDSs, routers with ACLs, antivirus software, access con-\ntrol, spam filters, and so on. These topics are discussed \nin more depth later. \n The term operations refers to all preventive and reac-\ntive activities required to maintain security. Preventive \nactivities include vulnerability assessments, software \npatching, system hardening (closing unnecessary ports), \nand access controls. Reactive activities should detect \nmalicious activities and react by blocking attacks, isolat-\ning valuable resources, or tracing the intruder. \n Protection of valuable assets can be a more complicated \ndecision than simply considering the value of the assets. \nOrganizations often perform a risk assessment to determine \nthe value of assets, possible threats, likelihood of threats, \nand possible impact of threats. Valuable assets facing \nunlikely threats or threats with low impact might not need \nmuch protection. Clearly, assets of high value facing likely \nthreats or high-impact threats merit the strongest defenses. \nOrganizations usually have their own risk management \nprocess for identifying risks and deciding how to allocate a \nsecurity budget to protect valuable assets under risk. \n 4 . PREVENTIVE MEASURES \n Most computer users are aware that Internet use poses \nsecurity risks. It would be reasonable to take precautions \nto minimize exposure to attacks. Fortunately, several \noptions are available to computer users to fortify their \nsystems to reduce risks. \n Access Control \n In computer security, access control refers to mechanisms \nto allow users to perform functions up to their authorized \nlevel and restrict users from performing unauthorized \nfunctions. 
18 Access control includes:

● Authentication of users
● Authorization of their privileges
● Auditing to monitor and record user actions

All computer users will be familiar with some type of access control.

Authentication is the process of verifying a user's identity. Authentication is typically based on one or more of these factors:

● Something the user knows, such as a password or PIN
● Something the user has, such as a smart card or token
● Something personal about the user, such as a fingerprint, retinal pattern, or other biometric identifier

Use of a single factor, even if multiple pieces of evidence are offered, is considered weak authentication. A combination of two factors, such as a password and a fingerprint, called two-factor (or multifactor) authentication, is considered strong authentication.

Authorization is the process of determining what an authenticated user can do. Most operating systems have an established set of permissions related to read, write, or execute access. For example, an ordinary user might have permission to read a certain file but not write to it, whereas a root or superuser will have full privileges to do anything.

Auditing is necessary to ensure that users are accountable. Computer systems record actions in the system in audit trails and logs. For security purposes, they are invaluable forensic tools to recreate and analyze incidents. For instance, a user attempting numerous failed logins might be seen as an intruder.

Vulnerability Testing and Patching

As mentioned earlier, vulnerabilities are weaknesses in software that might be used to compromise a computer. Vulnerable software includes all types of operating systems and application programs. New vulnerabilities are being discovered constantly in different ways. New vulnerabilities discovered by security researchers are usually reported confidentially to the vendor, which is given time to study the vulnerability and develop a patch. Of all vulnerabilities disclosed in 2007, 50% could be corrected through vendor patches. 19 When ready, the vendor will publish the vulnerability, hopefully along with a patch.

18 B. Carroll, Cisco Access Control Security: AAA Administration Services, Cisco Press, 2004.

It has been argued that publication of vulnerabilities will help attackers. Though this might be true, publication also fosters awareness within the entire community. Systems administrators will be able to evaluate their systems and take appropriate precautions. One might expect systems administrators to know the configuration of computers on their network, but in large organizations, it would be difficult to keep track of possible configuration changes made by users. Vulnerability testing offers a simple way to learn about the configuration of computers on a network.

Vulnerability testing is an exercise to probe systems for known vulnerabilities. It requires a database of known vulnerabilities, a packet generator, and test routines to generate a sequence of packets to test for a particular vulnerability. If a vulnerability is found and a software patch is available, that host should be patched.

Penetration testing is a closely related idea but takes it further.
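As a simple illustration of what either exercise starts from, an administrator can run an open source scanner such as nmap against one of the organization's own hosts to see the same information an attacker would gather first: which ports are open and what service versions are listening. The address and output below are purely illustrative, and scans should only be run against systems you are authorized to test.

Probing a host for open ports and service versions:
# nmap -sV 192.0.2.10
PORT     STATE SERVICE VERSION
22/tcp   open  ssh     OpenSSH 4.7p1
80/tcp   open  http    Apache httpd 2.2.8
3306/tcp open  mysql   MySQL 5.0.51a

Results like these can then be compared against a database of known vulnerabilities to decide which hosts need patches or configuration changes.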
Penetration testing simulates the actions of a \nhypothetical attacker to attempt to compromise hosts. \nThe goal is, again, to learn about weaknesses in the net-\nwork so that they can be remedied. \n Closing Ports \n Transport layer protocols, namely Transmission Control \nProtocol (TCP) and User Datagram Protocol (UDP), \nidentify applications communicating with each other by \nmeans of port numbers. Port numbers 1 to 1023 are well \nknown and assigned by the Internet Assigned Numbers \nAuthority (IANA) to standardized services running with \nroot privileges. For example, Web servers listen on TCP \nport 80 for client requests. Port numbers 1024 to 49151 \nare used by various applications with ordinary user priv-\nileges. Port numbers above 49151 are used dynamically \nby applications. \n It is good practice to close ports that are unneces-\nsary, because attackers can use open ports, particularly \nthose in the higher range. For instance, the Sub7 Trojan \nhorse is known to use port 27374 by default, and Netbus \nuses port 12345. Closing ports does not by itself guar-\nantee the safety of a host, however. Some hosts need to \nkeep TCP port 80 open for HyperText Transfer Protocol \n(HTTP), but attacks can still be carried out through that \nport. \n Firewalls \n When most people think of network security, firewalls \nare one of the first things to come to mind. Firewalls are \na means of perimeter security protecting an internal net-\nwork from external threats. A firewall selectively allows \nor blocks incoming and outgoing traffic. Firewalls can \nbe standalone network devices located at the entry to a \nprivate network or personal firewall programs running \non PCs. An organization’s firewall protects the internal \ncommunity; a personal firewall can be customized to an \nindividual’s needs. \n Firewalls can provide separation and isolation among \nvarious network zones, namely the public Internet, pri-\nvate intranets, and a demilitarized zone (DMZ), as \nshown in Figure 4.3 . The semiprotected DMZ typically \nincludes public services provided by a private organiza-\ntion. Public servers need some protection from the public \nInternet so they usually sit behind a firewall. This fire-\nwall cannot be completely restrictive because the public \nservers must be externally accessible. Another firewall \ntypically sits between the DMZ and private internal \nnetwork because the internal network needs additional \nprotection. \n There are various types of firewalls: packet-filtering \nfirewalls, stateful firewalls, and proxy firewalls. In any \ncase, the effectiveness of a firewall depends on the con-\nfiguration of its rules. Properly written rules require \ndetailed knowledge of network protocols. Unfortunately, \nsome firewalls are improperly configured through neglect \nor lack of training. \n Packet-filtering firewalls analyze packets in both \ndirections and either permit or deny passage based on a \nset of rules. Rules typically examine port numbers, proto-\ncols, IP addresses, and other attributes of packet headers. \nDMZ\nPublic Internet\nPrivate network\n FIGURE 4.3 A firewall isolating various network zones. \n 19 IBM Internet Security Systems, X-Force 2007 Trend Statistics , \nJanuary 2008 (date of access: July 1, 2008). \n" }, { "page_number": 94, "text": "Chapter | 4 Guarding Against Network Intrusions\n61\nThere is no attempt to relate multiple packets with a flow \nor stream. The firewall is stateless, retaining no memory \nof one packet to the next. 
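As a sketch of what such rules look like in practice, the following commands configure the packet filter built into Linux (iptables) for a small Web server. The rule set is deliberately simplified, and the addresses and port choices are hypothetical.

Dropping all incoming traffic by default, then selectively allowing services:
# iptables -P INPUT DROP
# iptables -A INPUT -i lo -j ACCEPT
# iptables -A INPUT -p tcp --dport 80 -j ACCEPT
# iptables -A INPUT -p tcp -s 192.0.2.0/24 --dport 22 -j ACCEPT

Each rule examines one packet at a time against the listed header fields; none of the rules knows whether a packet belongs to a connection that was legitimately established, which is exactly the limitation that stateful firewalls address.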
\n Stateful firewalls overcome the limitation of packet-\nfiltering firewalls by recognizing packets belonging to \nthe same flow or connection and keeping track of the \nconnection state. They work at the network layer and \nrecognize the legitimacy of sessions. \n Proxy firewalls are also called application-level fire-\nwalls because they process up to the application layer. \nThey recognize certain applications and can detect \nwhether an undesirable protocol is using a nonstandard \nport or an application layer protocol is being abused. \nThey protect an internal network by serving as primary \ngateways to proxy connections from the internal network \nto the public Internet. They could have some impact on \nnetwork performance due to the nature of the analysis. \n Firewalls are essential elements of an overall defen-\nsive strategy but have the drawback that they only protect \nthe perimeter. They are useless if an intruder has a way \nto bypass the perimeter. They are also useless against \ninsider threats originating within a private network. \n Antivirus and Antispyware Tools \n The proliferation of malware prompts the need for antivi-\nrus software. 20 Antivirus software is developed to detect \nthe presence of malware, identify its nature, remove the \nmalware (disinfect the host), and protect a host from \nfuture infections. Detection should ideally minimize false \npositives (false alarms) and false negatives (missed mal-\nware) at the same time. Antivirus software faces a number \nof difficult challenges: \n ● Malware tactics are sophisticated and constantly \nevolving. \n ● Even the operating system on infected hosts cannot \nbe trusted. \n ● Malware can exist entirely in memory without \naffecting files. \n ● Malware can attack antivirus processes. \n ● The processing load for antivirus software cannot \ndegrade computer performance such that users \nbecome annoyed and turn the antivirus software off. \n One of the simplest tasks performed by antivirus \nsoftware is file scanning. This process compares the \nbytes in files with known signatures that are byte pat-\nterns indicative of a known malware. It represents the \ngeneral approach of signature-based detection. When \nnew malware is captured, it is analyzed for unique char-\nacteristics that can be described in a signature. The new \nsignature is distributed as updates to antivirus programs. \nAntivirus looks for the signature during file scanning, \nand if a match is found, the signature identifies the mal-\nware specifically. There are major drawbacks to this \nmethod, however: New signatures require time to develop \nand test; users must keep their signature files up to date; \nand new malware without a known signature may escape \ndetection. \n Behavior-based detection is a complementary approach. \nInstead of addressing what malware is, behavior-based \ndetection looks at what malware tries to do. In other words, \nanything attempting a risky action will come under suspi-\ncion. This approach overcomes the limitations of signature-\nbased detection and could find new malware without a \nsignature, just from its behavior. However, the approach \ncan be difficult in practice. First, we must define what is \nsuspicious behavior, or conversely, what is normal behav-\nior. This definition often relies on heuristic rules developed \nby security experts, because normal behavior is difficult \nto define precisely. 
Second, it might be possible to dis-\ncern suspicious behavior, but it is much more difficult to \n determine malicious behavior, because malicious intention \nmust be inferred. When behavior-based detection flags sus-\npicious behavior, more follow-up investigation is usually \nneeded to better understand the threat risk. \n The ability of malware to change or disguise appear-\nances can defeat file scanning. However, regardless of \nits form, malware must ultimately perform its mission. \nThus, an opportunity will always arise to detect mal-\nware from its behavior if it is given a chance to execute. \nAntivirus software will monitor system events, such as \nhard-disk access, to look for actions that might pose a \nthreat to the host. Events are monitored by intercepting \ncalls to operating system functions. \n Although monitoring system events is a step beyond \nfile scanning, malicious programs are running in the \nhost execution environment and could pose a risk to the \nhost. The idea of emulation is to execute suspected code \nwithin an isolated environment, presenting the appear-\nance of the computer resources to the code, and to look \nfor actions symptomatic of malware. \n Virtualization takes emulation a step further and exe-\ncutes suspected code within a real operating system. A \nnumber of virtual operating systems can run above the \nhost operating system. Malware can corrupt a virtual \noperating system, but for safety reasons a virtual operat-\ning system has limited access to the host operating sys-\ntem. A “ sandbox ” isolates the virtual environment from \n 20 Peter Szor, The Art of Computer Virus Research and Defense , \nAddison-Wesley, 2005. \n" }, { "page_number": 95, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n62\ntampering with the host environment, unless a specific \naction is requested and permitted. In contrast, emulation \ndoes not offer an operating system to suspected code; \nthe code is allowed to execute step by step, but in a con-\ntrolled and restricted way, just to discover what it will \nattempt to do. \n Antispyware software can be viewed as a specialized \nclass of antivirus software. Somewhat unlike traditional \nviruses, spyware can be particularly pernicious in mak-\ning a vast number of changes throughout the hard drive \nand system files. Infected systems tend to have a large \nnumber of installed spyware programs, possibly includ-\ning certain cookies (pieces of text planted by Web sites \nin the browser as a means of keeping them in memory). \n Spam Filtering \n Every Internet user is familiar with spam email. There \nis no consensus on an exact definition of spam, but most \npeople would agree that spam is unsolicited, sent in \nbulk, and commercial in nature. There is also consensus \nthat the vast majority of email is spam. Spam continues \nto be a problem because a small fraction of recipients do \nrespond to these messages. Even though the fraction is \nsmall, the revenue generated is enough to make spam \nprofitable because it costs little to send spam in bulk. \nIn particular, a large botnet can generate an enormous \namount of spam quickly. \n Users of popular Webmail services such as Yahoo! \nand Hotmail are attractive targets for spam because their \naddresses might be easy to guess. In addition, spammers \nharvest email addresses from various sources: Web sites, \nnewsgroups, online directories, data-stealing viruses, and so \non. 
Spammers might also purchase lists of addresses from \ncompanies who are willing to sell customer information. \n Spam is more than an inconvenience for users and a \nwaste of network resources. Spam is a popular vehicle to \ndistribute malware and lures to malicious Web sites. It is \nthe first step in phishing attacks. \n Spam filters work at an enterprise level and a per-\nsonal level. At the enterprise level, mail gateways can \nprotect an entire organization by scanning incoming \nmessages for malware and blocking messages from sus-\npicious or fake senders. A concern at the enterprise level \nis the rate of false positives, which are legitimate mes-\nsages mistaken for spam. Users may become upset if \ntheir legitimate mail is blocked. Fortunately, spam filters \nare typically customizable, and the rate of false positives \ncan be made very low. Additional spam filtering at the \npersonal level can customize filtering even further, to \naccount for individual preferences. \n Various spam-filtering techniques are embodied in \nmany commercial and free spam filters, such as DSPAM \nand SpamAssassin, to name two. Bayesian filtering is \none of the more popular techniques. 21 First, an incoming \nmessage is parsed into tokens, which are single words or \nword combinations from the message’s header and body. \nSecond, probabilities are assigned to tokens through a \ntraining process. The filter looks at a set of known spam \nmessages compared to a set of known legitimate mes-\nsages and calculates token probabilities based on Bayes ’ \ntheorem (from probability theory). Intuitively, a word \nsuch as Viagra would appear more often in spam, and \ntherefore the appearance of a Viagra token would increase \nthe probability of that message being classified as spam. \n The probability calculated for a message is compared \nto a chosen threshold; if the probability is higher, the \nmessage is classified as spam. The threshold is chosen \nto balance the rates of false positives and false negatives \n(missed spam) in some desired way. An attractive feature \nof Bayesian filtering is that its probabilities will adapt to \nnew spam tactics, given continual feedback, that is, cor-\nrection of false positives and false negatives by the user. \n It is easy to see why spammers have attacked \nBayesian filters by attempting to influence the prob-\nabilities of tokens. For example, spammers have tried \nfilling messages with large amounts of legitimate text \n(e.g., drawn from classic literature) or random innocu-\nous words. The presence of legitimate tokens tends to \ndecrease a message’s score because they are evidence \ncounted toward the legitimacy of the message. \n Spammers are continually trying new ways to get \nthrough spam filters. At the same time, security compa-\nnies respond by adapting their technologies. \n Honeypots \n The basic idea of a honeypot is to learn about attacker \ntechniques by attracting attacks to a seemingly vulner-\nable host. 22 It is essentially a forensics tool rather than a \nline of defense. A honeypot could be used to gain valu-\nable information about attack methods used elsewhere \nor imminent attacks before they happen. Honeypots are \nused routinely in research and production environments. \n A honeypot has more special requirements than a reg-\nular PC. First, a honeypot should not be used for legiti-\nmate services or traffic. Consequently, every activity \n 21 J. 
Zdziarski, Ending Spam: Bayesian Content Filtering and the Art \nof Statistical Language Classifi cation , No Starch Press, 2005. \n 22 The Honeynet Project, Know Your Enemy: Learning About Security \nThreats , 2nd ed., Addison-Wesley, 2004. \n" }, { "page_number": 96, "text": "Chapter | 4 Guarding Against Network Intrusions\n63\nseen by the honeypot will be illegitimate. Even though \nhoneypots typically record little data compared to IDS, \nfor instance, their data has little “ noise, ” whereas the \nbulk of IDS data is typically uninteresting from a secu-\nrity point of view. \n Second, a honeypot should have comprehensive and \nreliable capabilities for monitoring and logging all activ-\nities. The forensic value of a honeypot depends on the \ndetailed information it can capture about attacks. \n Third, a honeypot should be isolated from the real \nnetwork. Since honeypots are intended to attract attacks, \nthere is a real risk that the honeypot could be compro-\nmised and used as a launching pad to attack more hosts \nin the network. \n Honeypots are often classified according to their level \nof interaction, ranging from low to high. Low-interaction \nhoneypots, such as Honeyd, offer the appearance of sim-\nple services. An attacker could try to compromise the \nhoneypot but would not have much to gain. The limited \ninteractions pose a risk that an attacker could discover \nthat the host is a honeypot. At the other end of the range, \nhigh-interaction honeypots behave more like real sys-\ntems. They have more capabilities to interact with an \nattacker and log activities, but they offer more to gain if \nthey are compromised. \n Honeypots are related to the concepts of black holes \nor network telescopes, which are monitored blocks of \nunused IP addresses. Since the addresses are unused, \nany traffic seen at those addresses is naturally suspicious \n(although not necessarily malicious). \n Traditional honeypots suffer a drawback in that they \nare passive and wait to see malicious activity. The idea \nof honeypots has been extended to active clients that \nsearch for malicious servers and interact with them. The \nactive version of a honeypot has been called a honey-\nmonkey or client honeypot . \n Network Access Control \n A vulnerable host might place not only itself but an \nentire community at risk. For one thing, a vulnerable \nhost might attract attacks. If compromised, the host \ncould be used to launch attacks on other hosts. The com-\npromised host might give information to the attacker, \nor there might be trust relationships between hosts that \ncould help the attacker. In any case, it is not desirable to \nhave a weakly protected host on your network. \n The general idea of network access control (NAC) \nis to restrict a host from accessing a network unless the \nhost can provide evidence of a strong security posture. \nThe NAC process involves the host, the network (usually \nrouters or switches, and servers), and a security policy, \nas shown in Figure 4.4 . \n The details of the NAC process vary with various \nimplementations, which unfortunately currently lack \nstandards for interoperability. A host’s security posture \nincludes its IP address, operating system, antivirus soft-\nware, personal firewall, and host intrusion detection sys-\ntem. In some implementations, a software agent runs on \nthe host, collects information about the host’s security \nposture, and reports it to the network as part of a request \nfor admission to the network. 
The network refers to a \npolicy server to compare the host’s security posture to \nthe security policy, to make an admission decision. \n The admission decision could be anything from rejec-\ntion to partial admission or full admission. Rejection \nmight be prompted by out-of-date antivirus software, an \noperating system needing patches, or firewall miscon-\nfiguration. Rejection might lead to quarantine (routing to \nan isolated network) or forced remediation. \n 5 . INTRUSION MONITORING AND \nDETECTION \n Preventive measures are necessary and help reduce the \nrisk of attacks, but it is practically impossible to prevent \nall attacks. Intrusion detection is also necessary to detect \nand diagnose malicious activities, analogous to a burglar \nalarm. Intrusion detection is essentially a combination \nof monitoring, analysis, and response. 23 Typically an \nIDS supports a console for human interface and display. \nMonitoring and analysis are usually viewed as passive \ntechniques because they do not interfere with ongoing \nactivities. The typical IDS response is an alert to sys-\ntems administrators, who might choose to pursue further \ninvestigation or not. In other words, traditional IDSs do \nPolicy\nSecurity\ncredentials\n FIGURE 4.4 Network access control. \n 23 Richard Bejtlich, The Tao of Network Security Monitoring: Beyond \nIntrusion Detection , Addison-Wesley, 2005. \n" }, { "page_number": 97, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n64\nnot offer much response beyond alerts, under the pre-\nsumption that security incidents need human expertise \nand judgment for follow-up. \n Detection accuracy is the critical problem for intru-\nsion detection. Intrusion detection should ideally mini-\nmize false positives (normal incidents mistaken for \nsuspicious ones) and false negatives (malicious incidents \nescaping detection). Naturally, false negatives are con-\ntrary to the essential purpose of intrusion detection. False \npositives are also harmful because they are troublesome \nfor systems administrators who must waste time inves-\ntigating false alarms. Intrusion detection should also \nseek to more than identify security incidents. In addition \nto relating the facts of an incident, intrusion detection \nshould ascertain the nature of the incident, the perpetra-\ntor, the seriousness (malicious vs. suspicious), scope, \nand potential consequences (such as stepping from one \ntarget to more targets). \n IDS approaches can be categorized in at least two \nways. One way is to differentiate host-based and network-\nbased IDS, depending on where sensing is done. A \nhost-based IDS monitors an individual host, whereas \na network-based IDS works on network packets. Another \nway to view IDS is by their approach to analysis. \nTraditionally, the two analysis approaches are misuse \n(signature-based) detection and anomaly (behavior-based) \ndetection. As shown in Figure 4.5 , these two views are \ncomplementary and are often used in combination. \n In practice, intrusion detection faces several difficult \nchallenges: signature-based detection can recognize only \nincidents matching a known signature; behavior-based \ndetection relies on an understanding of normal behavior, \nbut “ normal ” can vary widely. 
Attackers are intelligent \nand evasive; attackers might try to confuse IDS with frag-\nmented, encrypted, tunneled, or junk packets; an IDS might \nnot react to an incident in real time or quickly enough to \nstop an attack; and incidents can occur anywhere at any \ntime, which necessitates continual and extensive monitor-\ning, with correlation of multiple distributed sensors. \n Host-Based Monitoring \n Host-based IDS runs on a host and monitors system activ-\nities for signs of suspicious behavior. Examples could \nbe changes to the system Registry, repeated failed login \nattempts, or installation of a backdoor. Host-based IDSs \nusually monitor system objects, processes, and regions \nof memory. For each system object, the IDS will usually \nkeep track of attributes such as permissions, size, modifi-\ncation dates, and hashed contents, to recognize changes. \n A concern for a host-based IDS is possible tampering \nby an attacker. If an attacker gains control of a system, the \nIDS cannot be trusted. Hence, special protection of the \nIDS against tampering should be architected into a host. \n A host-based IDS is not a complete solution by itself. \nThough monitoring the host is logical, it has three sig-\nnificant drawbacks: visibility is limited to a single host; \nthe IDS process consumes resources, possibly impacting \nperformance on the host; and attacks will not be seen \nuntil they have already reached the host. Host-based and \nnetwork-based IDS are often used together to combine \nstrengths. \n Traffic Monitoring \n Network-based IDSs typically monitor network packets \nfor signs of reconnaissance, exploits, DoS attacks, and \nmalware. They have strengths to complement host-based \nIDSs: network-based IDSs can see traffic for a popula-\ntion of hosts; they can recognize patterns shared by mul-\ntiple hosts; and they have the potential to see attacks \nbefore they reach the hosts. \n IDSs are placed in various locations for different \nviews, as shown in Figure 4.6 . An IDS outside a firewall \nis useful for learning about malicious activities on the \nInternet. An IDS in the DMZ will see attacks originating \nfrom the Internet that are able to get through the outer \nfirewall to public servers. Lastly, an IDS in the private \nnetwork is necessary to detect any attacks that are able \nto successfully penetrate perimeter security. \n Signature-Based Detection \n Signature-based intrusion detection depends on patterns \nthat uniquely identify an attack. If an incident matches \nDefine\nKnown\nattacks\nMisuse\ndetection\nNormal if\nnot attack\nAnomaly\ndetection\nNormal\nbehavior\nDefine\nSuspicious if\nnot normal\n FIGURE 4.5 Misuse detection and anomaly detection. \n" }, { "page_number": 98, "text": "Chapter | 4 Guarding Against Network Intrusions\n65\na known signature, the signature identifies the specific \nattack. The central issue is how to define signatures or \nmodel attacks. If signatures are too specific, a change in \nan attack tactic could result in a false negative (missed \nalarm). An attack signature should be broad enough to \ncover an entire class of attacks. On the other hand, if sig-\nnatures are too general, it can result in false positives. \n Signature-based approaches have three inherent draw-\nbacks: new attacks can be missed if a matching signature \nis not known; signatures require time to develop for new \nattacks; and new signatures must be distributed continually. \n Snort is a popular example of a signature-based IDS \n( www.snort.org ). 
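For example, on a Unix host Snort can be run directly against a network interface with its rule set loaded; the interface name and configuration path shown here are common defaults and may differ on a given system.

Running Snort as a network IDS and printing alerts to the console:
# snort -i eth0 -c /etc/snort/snort.conf -A console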
Snort signatures are rules that define \nfields that match packets of information about the rep-\nresented attack. Snort is packaged with more than 1800 \nrules covering a broad range of attacks, and new rules \nare constantly being written. \n Behavior Anomalies \n A behavior-based IDS is appealing for its potential to \nrecognize new attacks without a known signature. It pre-\nsumes that attacks will be different from normal behav-\nior. Hence the critical issue is how to define normal \nbehavior, and anything outside of normal (anomalous) \nis classified as suspicious. A common approach is to \ndefine normal behavior in statistical terms, which allows \nfor deviations within a range. \n Behavior-based approaches have considerable chal-\nlenges. First, normal behavior is based on past behav-\nior. Thus, data about past behavior must be available \nfor training the IDS. Second, behavior can and does \nchange over time, so any IDS approach must be adap-\ntive. Third, anomalies are just unusual events, not neces-\nsarily malicious ones. A behavior-based IDS might point \nout incidents to investigate further, but it is not good at \ndiscerning the exact nature of attacks. \n Intrusion Prevention Systems \n IDSs are passive techniques. They typically notify the \nsystems administrator to investigate further and take \nthe appropriate action. The response might be slow if \nthe systems administrator is busy or the incident is time \n consuming to investigate. \n A variation called an intrusion prevention system \n(IPS) seeks to combine the traditional monitoring and \nanalysis functions of an IDS with more active automated \nresponses, such as automatically reconfiguring firewalls \nto block an attack. An IPS aims for a faster response \nthan humans can achieve, but its accuracy depends on \nthe same techniques as the traditional IDS. The response \nshould not harm legitimate traffic, so accuracy is critical. \n 6 . REACTIVE MEASURES \n When an attack is detected and analyzed, systems \nadministrators must exercise an appropriate response to \nthe attack. One of the principles in security is that the \nresponse should be proportional to the threat. Obviously, \nthe response will depend on the circumstances, but vari-\nous options are available. Generally, it is possible to \nblock, slow, modify, or redirect any malicious traffic. \nIt is not possible to delineate every possible response. \nHere we describe only two responses: quarantine and \ntraceback. \n Quarantine \n Dynamic quarantine in computer security is analogous \nto quarantine for infectious diseases. It is an appropri-\nate response, particularly in the context of malware, to \nprevent an infected host from contaminating other hosts. \nInfectious malware requires connectivity between an \ninfected host and a new target, so it is logical to disrupt \nDMZ\nPublic Internet\nPrivate network\nIDS\nIDS\nIDS\n FIGURE 4.6 IDSs monitoring various network zones. \n" }, { "page_number": 99, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n66\nthe connectivity between hosts or networks as a means \nto impede the malware from spreading further. \n Within the network, traffic can be blocked by fire-\nwalls or routers with access control lists (ACLs). ACLs \nare similar to firewall rules, allowing routers to selec-\ntively drop packets. \n Traceback \n One of the critical aspects of an attack is the identity or \nlocation of the perpetrator. 
Unfortunately, discovery of \nan attacker in IP networks is almost impossible because: \n ● The source address in IP packets can be easily \nspoofed (forged). \n ● Routers are stateless by design and do not keep \nrecords of forwarded packets. \n ● Attackers can use a series of intermediary hosts \n(called stepping stones or zombies ) to carry out their \nattacks. \n Intermediaries are usually innocent computers taken \nover by an exploit or malware and put under control of \nthe attacker. In practice, it might be possible to trace an \nattack back to the closest intermediary, but it might be \ntoo much to expect to trace an attack all the way back to \nthe real attacker. \n To trace a packet’s route, some tracking information \nmust be either stored at routers when the packet is for-\nwarded or carried in the packet, as shown in Figure 4.7 . \nAn example of the first approach is to store a hash of \na packet for some amount of time. If an attack occurs, \nthe target host will query routers for a hash of the attack \npacket. If a router has the hash, it is evidence that the \npacket had been forwarded by that router. To reduce \nmemory consumption, the hash is stored instead of stor-\ning the entire packet. The storage is temporary instead of \npermanent so that routers will not run out of memory. \n An example of the second approach is to stamp pack-\nets with a unique router identifier, such as an IP address. \nThus the packet carries a record of its route. The main \nadvantage here is that routers can remain stateless. The \nproblem is that there is no space in the IP packet header \nfor this scheme. \n 7 . CONCLUSIONS \n To guard against network intrusions, we must understand \nthe variety of attacks, from exploits to malware to social \nengineering. Direct attacks are prevalent, but a class of \n pull attacks has emerged, relying on lures to bring vic-\ntims to a malicious Web site. Pull attacks are much more \ndifficult to uncover and in a way defend against. Just \nabout anyone can become victimized. \n Much can be done to fortify hosts and reduce their risk \nexposure, but some attacks are unavoidable. Defense in \ndepth is a most practical defense strategy, combining lay-\ners of defenses. Although each defensive layer is imper-\nfect, the cost becomes harder to surmount for intruders. \n One of the essential defenses is intrusion detection . \nHost-based and network-based intrusion detection sys-\ntems have their respective strengths and weaknesses. \nResearch continues to be needed to improve intrusion \ndetection, particularly behavior-based techniques. As \nmore attacks are invented, signature-based techniques \nwill have more difficulty keeping up. \n \nID1\nID2\nPacket\nID1\nPacket\nRouter ID1\nRouter ID2\nHash (packet)\nPacket\nPacket\nHash (packet)\nTracking info\nkept at routers\nTracking info\nkept in packets\n FIGURE 4.7 Tracking information stored at routers or carried in packets to enable packet traceback. \n" }, { "page_number": 100, "text": "67\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n Unix and Linux Security \n Gerald Beuchelt \n Sun Microsystems \n Chapter 5 \n When Unix was first booted on a PDP-8 computer at \nBell Labs, it already had a basic notion of user isolation, \nseparation of kernel and user memory space, and pro-\ncess security. It was originally conceived as a multiuser \nsystem, and as such, security could not be added on as \nan afterthought. 
In this respect, Unix was different from \na whole class of computing machinery that had been tar-\ngeted at single-user environments. \n 1 . UNIX AND SECURITY \n The examples in this chapter refer to the Solaris operat-\ning system and Debian-based Linux distributions, a com-\nmercial and a community developed operating system. \n Solaris is freely available in open source and binary \ndistributions. It derives directly from AT & T System \nV R4.2 and is one of the few operating systems that \ncan legally be called Unix. It is distributed by Sun \nMicrosystems, but there are independent distributions \nbuilt on top of the open source version of Solaris. \n The Aims of System Security \n Linux is mostly a GNU software-based operating sys-\ntem with a kernel originally written by Linus Torvalds. \nDebian is a distribution originally developed by Ian \nMurdock of Purdue University. Debian’s express goal \nis to use only open and free software, as defined by its \nguidelines. \n Authentication \n When a user is granted access to resources on a comput-\ning system, it is of vital importance to verify that he was \ngranted permission for access. This process — establishing \nthe identity of the user — is commonly referred to as \n authentication (sometimes abbreviated AuthN ). \n Authorization \n As we mentioned, Unix was originally devised as a mul-\ntiuser system. To protect user data from other users and \nnonusers, the operating system has to put up safeguards \nagainst unauthorized access. Determining the eligibil-\nity of an authenticated (or anonymous) user to access a \nresource is usually called authorization ( AuthZ ). \n Availability \n Guarding a system (including all its subsystems, such \nas the network) against security breaches is vital to keep \nthe system available for its intended use. Availability of \na system must be properly defined: Any system is physi-\ncally available, even if it is turned off — however, a shut-\ndown system would not be too useful. In the same way, \na system that has only the core operating system running \nbut not the services that are supposed to run on the sys-\ntem is considered not available. \n Integrity \n Similar to availability, a system that is compromised can-\nnot be considered available for regular service. Ensuring \nthat the Unix system is running in the intended way is \nmost crucial, especially since the system might other-\nwise be used by a third party for malicious uses, such as \na relay or member in a botnet. \n Achieving Unix Security \n Prior to anything else, it is vitally important to empha-\nsize the need to keep Unix systems up to date. No oper-\nating system or other program can be considered safe \nwithout being patched up; this point cannot be stressed \nenough. Having a system with the latest security patches \nis the first and most often the best line of defense against \nintruders. \n" }, { "page_number": 101, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n68\n All Unix systems have a patching mechanism; this is \na way to get the system up to date. Depending on the \nvendor and the mechanism used, it is possible to “ back \nout ” the patches. For example, on Solaris it is usually \npossible to remove a patch through the patchrm(1 m) \ncommand. On Debian-based systems this is not quite as \neasy, since in a patch the software package to be updated \nis replaced by a new version. Undoing this is only pos-\nsible by installing the earlier package. 
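For example, on a Debian-based system the package manager downloads and applies updated packages, and on Solaris individual patches can be listed, added, or removed. The commands below are typical; exact options depend on the release, and the patch ID shown is illustrative.

Updating a Debian-based system:
# apt-get update
# apt-get upgrade
Listing and managing patches on Solaris:
# showrev -p
# patchadd 118855-36
# patchrm 118855-36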
\n Detecting Intrusions with Audits \nand Logs \n By default, most Unix systems log kernel messages and \nimportant system events from core services. The most \ncommon logging tool is the syslog facility, which is con-\ntrolled from the /etc/syslog.conf file. \n 2 . BASIC UNIX SECURITY \n Unix security has a long tradition, and though many \nconcepts of the earliest Unix systems still apply, there \nhave been a large number of changes that fundamentally \naltered the way the operating system implements these \nsecurity principles. \n One of the reasons that it’s complicated to talk about \nUnix security is that there are a lot of variants of Unix \nand Unix-like operating systems on the market. In fact, \nif you only look at some of the core Portable Operating \nSystem Interface (POSIX) standards that have been set \nforth to guarantee a minimal consistency across differ-\nent Unix flavors (see Figure 5.1 ), almost every operat-\ning system on the market qualifies as Unix (or, more \nprecisely, POSIX compliant). Examples include not only \nthe traditional Unix operating systems such as Solaris, \nHP-UX, or AIX but also Windows NT-based operating \nsystems (such as Windows XP, either through the native \nPOSIX subsystem or the Services for Windows exten-\nsions) or even z/OS. \n Traditional Unix Systems \n Most traditional Unix systems do share some internal \nfeatures, though: Their authentication and authoriza-\ntion approaches are similar, their delineation between \nkernel space and user space goes along the same lines, \nand their security-related kernel structures are roughly \ncomparable. In the last few years, however, there have \nbeen major advancements in extending the original secu-\nrity model by adding role-based access control (RBAC) \nmodels to some operating systems. \n Kernel Space versus User Land \n Unix systems typically execute instructions in one of \ntwo general contexts: the kernel or the user space. Code \nexecuted in a kernel context has (at least in traditional \nsystems) full access to the entire hardware and software \ncapabilities of the computing environment. Though there \nare some systems that extend security safeguards into the \nkernel, in most cases, not only can a rogue kernel execu-\ntion thread cause massive data corruption, it can effec-\ntively bring down the entire operating system. \n Obviously, a normal user of an operating system \nshould not wield so much power. To prevent this, user \nexecution threads in Unix systems are not executed in \nthe context of the kernel but in a less privileged context, \nthe user space — sometimes also facetiously called “ user \nland. ” The Unix kernel defines a structure called process \n(see Figure 5.2 ) that associates metadata about the user \nas well as, potentially, other environmental factors with \nthe execution thread and its data. Access to computing \nresources such as memory, I/O subsystems, and so on \nis safeguarded by the kernel; if a user process wants to \nallocate a segment of memory or access a device, it has \nto make a system call, passing some of its metadata as \nparameters to the kernel. The kernel then performs an \nauthorization decision and either grants the request or \nreturns an error. It is then the process’s responsibility to \nproperly react to either the results of the access or the \nerror. \nThe term POSIX stands (loosely) for “Portable Operating System Interface for uniX”. 
From the IEEE 1003.1 Standard, 2004 Edition:
"This standard defines a standard operating system interface and environment, including a command interpreter (or "shell"), and common utility programs to support applications portability at the source code level. This standard is the single common revision to IEEE Std 1003.1-1996, IEEE Std 1003.2-1992, and the Base Specifications of The Open Group Single UNIX Specification, Version 2."
Partial or full POSIX compliance is often required for government contracts.
FIGURE 5.1 Various Unix and POSIX standards.

If this model of user space process security is so effective, why not implement it for all operating system functions, including the majority of kernel operations? The answer to this question is that to a large extent the overhead of evaluating authorization metadata is very compute expensive. If most or all operations (which are, in the classical kernel space, often hardware-related device access operations) are run in user space or a comparable way, the performance of the OS would severely suffer. There is a class of operating system with a microkernel that implements this approach; the kernel implements only the most rudimentary functions (processes, scheduling, basic security), and all other operations, including device access and other operations that are typically carried out by the kernel, run in separate user processes. The advantage is a higher level of security and better safeguards against rogue device drivers. Furthermore, new device drivers or other operating system functionality can be added or removed without having to reboot the kernel. The performance penalties are so severe, however, that no major commercial operating system implements a microkernel architecture.

User Space Security

In traditional Unix systems, security starts with access control to resources. Since users interact with the systems through processes, it is important to know that every user space process structure has two important security fields: the user identifier, or UID, and the group identifier, or GID. These identifiers are typically positive integers, which are unique for each user. 1 Every process that is started by (or on behalf of) a user inherits the UID and GID values for that user account. These values are usually immutable for the lifetime of the process.

Access to system resources must go through the kernel by calling the appropriate function that is accessible to user processes. For example, a process that wants to reserve some system memory for data access will call the malloc() library function with the requested size; the library in turn obtains the memory from the kernel through a system call such as brk() or mmap(). The kernel evaluates this request, determines whether enough virtual memory (physical memory plus swap space) is available, reserves a section of memory, and returns a pointer to the address where the block starts.

Users who have the UID zero have special privileges: They are considered superusers, able to override many of the security guards that the kernel sets up. The default Unix superuser is named root.

Standard File and Device Access Semantics

File access is a very fundamental task, and it is important that only authorized users get read or write access to a given file.
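The ownership and permission metadata described in the following paragraphs can be inspected directly with standard tools; as a preview, the username here is hypothetical and the output is illustrative of a Debian-based system.

Showing the numeric and symbolic identity of a user:
# id alice
uid=1000(alice) gid=1000(alice) groups=1000(alice),27(sudo)
Showing the owner, group, and permissions of a file:
# ls -l /etc/shadow
-rw-r----- 1 root shadow 1034 Aug  9 12:12 /etc/shadow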
If any user was able to access any file, there would be no privacy at all, and security could not be maintained, since the operating system would not be able to protect its own permanent records, such as configuration information or user credentials.

The metadata describing who may access or modify files and directories is commonly referred to as an access control list (ACL). Note that there is more than just one type of ACL; the standard Unix ACLs are well known, but different Unix variants or POSIX-like operating systems might implement different ACLs and only define a mapping to the simple POSIX 1003 semantics. Good examples are the Windows NTFS ACLs and the NFS v4 ACLs.

Read, Write, Execute

From its earliest days, Unix implemented a simple but effective way to set access rights for users. Normal files can be accessed in three fundamental ways: read, write, and execute. The first two are self-explanatory; execution requires a little more explanation. A file on disk may only be executed, as either a binary program or a script, if the user has the right to execute it. If the execute permission is not set, the exec() system call will fail. In addition to a user's permissions, there must be a notion of ownership of files and sometimes other resources. In fact, each file on a traditional Unix file system is associated with a user and a group. The user and group are not identified by name but by UID and GID instead.

FIGURE 5.2 Kernel structure of a typical Unix process: the process ID (PID), the parent process ID (PPID), the process status, the data/text/heap segments, and one or more threads.

1 If two usernames are associated with the same UID, the operating system will treat them as the same user. Their authentication credentials (username and password) are different, but their authorization with respect to system resources is the same.

In addition to setting permissions for the user owning the file, two other sets of permissions are set for files: for the group and for all others. Similar to being owned by a user, a file is also associated with one group. All members of this group 2 can access the file with the permissions set for the group. In the same way, the other set of permissions applies to all users of the system.

Special Permissions

In addition to the standard permissions, there are a few special permissions, discussed here.

Set-ID Bit

This permission only applies to executable files, and it can only be set for the user or the group. If this bit is set, the resulting process does not run with the UID or GID of the invoking user but with the UID or GID associated with the file. For example, a program owned by the superuser can have the Set-ID bit set and execution allowed for all users. This way a normal user can execute a specific program with elevated privileges.

Sticky Bit

When the sticky bit is set on an executable file, its data (specifically the text segment) is kept in memory, even after the process exits. This is intended to speed execution of commonly used programs. A major drawback of setting the sticky bit is that when the executable file changes (for example, through a patch), the permission must be unset and the program started once more.
When \nthis process exits, the executable is unloaded from mem-\nory and the file can be changed. \n Mandatory Locking \n Mandatory file and record locking refers to a file’s abil-\nity to have its reading or writing permissions locked \nwhile a program is accessing that file. \n In addition, there might be additional, implementation-\nspecific permissions. These depend on the capabilities of \nthe core operating facilities, including the kernel, but \nalso on the type of file system. For example, most Unix \noperating systems can mount FAT-based file systems, \nwhich do not support any permissions or user and group \nownership. Since the internal semantics require some \nvalues for ownership and permissions, these are typically \nset for the entire file system. \n Permissions on Directories \n The semantics of permissions on directories (see Figure 5.3 ) \nare different from those on files. \n Read and Write \n Mapping these permissions to directories is fairly \nstraightforward: The read permission allows listing files \nin the directory, and the write permission allows us to \ncreate files. For some applications it can be useful to \nallow writing but not reading. \n Execute \n If this permission is set, a process can set its working \ndirectory to this directory. Note that with the basic per-\nmissions, there is no limitation on traversing directories, \nso a process might change its working directory to a \nchild of a directory, even if it cannot do so for the direc-\ntory itself. \nMaking a directory readable for everyone:\n# chmod o+r /tmp/mydir\n# ls -ld /tmp/mydir\ndrwxr-xr-x 2 root root 117 Aug 9 12:12 /tmp/mydir \nSetting the SetID bit on an executable, thus enabling it to be run with super-user privileges: \n# chmod u+s specialprivs \n# ls -ld specialprivs \n-rwsr-xr-x 2 root root 117 Aug 9 12:12 specialprivs\n FIGURE 5.3 Examples of chmod for files and directories. \n 2 It should be noted that users belong to one primary group, identifi ed \nby the GID set in the password database. However, group membership \nis actually determined separately through the /etc/group fi le. As such, \nuser can be (and often is) a member of more than one group. \n" }, { "page_number": 104, "text": "Chapter | 5 Unix and Linux Security\n71\n SetID \n Semantics may differ here. For example, on Solaris this \nchanges the behavior for default ownership of newly cre-\nated files from the System V to the BSD semantics. \n Other File Systems \n As mentioned, the set of available permissions and \nauthorization policies depends on the underlying oper-\nating system capabilities, including the file system. For \nexample, the UFS file system in Solaris since version 2.5 \nallows additional ACLs on a per-user basis. Furthermore, \nNFS version 4 defines additional ACLs for file access; it \nis obvious that the NFS server must have an underlying \nfiles system that is capable of recording this additional \nmetadata. \n 4 . PROTECTING USER ACCOUNTS AND \nSTRENGTHENING AUTHENTICATION \n For any interactive session, Unix systems require the user \nto log into the system. To do so, the user must present \na valid credential that identifies him (he must authenti-\ncate to the system). \n Establishing Secure Account Use \n The type of credentials a Unix system uses depends on \nthe capabilities of the OS software itself and on the con-\nfiguration set forth by the systems administrator. 
The \nmost traditional user credential is a username and a text \npassword, but there are many other ways to authenticate \nto the operating system, including Kerberos, SSH, or \nsecurity certificates. \n The Unix Login Process \n Depending on the desired authentication mechanism (see \n Figure 5.4 ), the user will have to use different access \nprotocols or processes. For example, console or directly \nattached terminal sessions usually supports only pass-\nword credentials or smart card logins, whereas a secure \nshell connection supports only RSA- or DSA-based \ncryptographic tokens over the SSH protocol. \n The login process is a system daemon that is respon-\nsible for coordinating authentication and process setup \nfor interactive users. To do this, the login process does \nthe following: \n 1. Draw or display the login screen. \n 2. Collect the credential. \n 3. Present the user credential to any of the config-\nured user databases (typically these can be files, \nNIS, Kerberos servers, or LDAP directories) for \nauthentication. \n 4. Create a process with the user’s default command-line \nshell, with the home directory as working directory. \n 5. Execute systemwide, user, and shell-specific startup \nscripts. \n The commonly available X11 windowing system \ndoes not use the text-oriented login process but instead \nprovides its own facility to perform roughly the same \nkind of login sequence. \n Access to interactive sessions using the SSH proto-\ncol follows a similar general pattern, but the authentica-\ntion is significantly different from the traditional login \nprocess. \n Controlling Account Access \n Simple files were the first method available to store user \naccount data. Over the course of years many other user \ndatabases have been implemented. We examine these \nhere. \n The Local Files \n Originally, Unix only supported a simple password file \nfor storing account information. The username and the \ninformation required for the login process (UID, GID, \nshell, home directory, and GECOS information) are \nstored in this file, which is typically at /etc/passwd . This \n• \nSimple: a username and a password are used to \n \nlogin to the operating system. The login process \n \nmust receive both in cleartext. For the password, \n \nthe Unix crypt hash is calculated and compared \n \nto the value in the password or shadow file. \n• \nKerberos: The user is supposed to have a ticket- \n \ngranting ticket from the Kerberos Key \n \nDistribution Server (KDC). Using the ticket- \n \ngranting ticket, he obtains a service ticket for an \n \ninteractive login to the Unix host. This service \n \nticket (encrypted, time limited) is then presented \n \nto the login process, and the Unix host validates \n \nit with the KDC. \n• \nPKI based Smartcard: the private key on the \n \nsmart card is used to authenticate with the \n \nsystem. \nOverview of Unix authentication methods\n FIGURE 5.4 Various authentication mechanisms for Unix systems. \n" }, { "page_number": 105, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n72\n approach is highly insecure, since this file needs to be \nreadable by all for a number of different services, thus \nexposing the password hashes to potential hackers. In \nfact, a simple dictionary or even brute-force attack can \nreveal simple or even more complex passwords. 
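The layout of this file is worth knowing when auditing a system. The entry below is purely illustrative (the username, UID, GID, and paths are invented for the example); it shows the seven colon-separated fields: username, password field, UID, GID, GECOS comment, home directory, and login shell.

$ grep '^jdoe:' /etc/passwd
jdoe:x:1001:100:Jane Doe,Room 101:/home/jdoe:/bin/sh

On very old systems the second field held the crypt hash itself; the x shown here indicates that the hash has been moved out of the world-readable file, as described next.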
\n To protect against an attack like this, most Unix vari-\nants use a separate file for storing the password hashes \n( /etc/shadow ) that is only readable and writable by the \nsystem. \n Network Information System \n The Network Information System (NIS) was introduced \nto simplify the administration of small groups of com-\nputers. Originally, Sun Microsystems called this service \nYellow Pages, but the courts decided that this name con-\nstituted a trademark infringement on the British Telecom \nYellow Pages. However, most commands that are used \nto administer the NIS still start with the yp prefix (such \nas ypbind , ypcat , etc.). \n Systems within the NIS are said to belong to a NIS \ndomain. Although there is absolutely no correlation \nbetween the NIS domain and the DNS domain of the \nsystem, it is quite common to use DNS-style domain \nnames for naming NIS domains. For example, a system \nwith DNS name system1.sales.example.com might be a \nmember of the NIS domain nis.sales.Example.COM . Note \nthat NIS domains — other than DNS domains — are case \nsensitive. \n The NIS uses a simple master/slave server system: \nThe master NIS server holds all authoritative data and \nuses an ONC-RPC-based protocol to communicate with \nthe slave servers and clients. Slave servers cannot be \neasily upgraded to a master server, so careful planning \nof the infrastructure is highly recommended. \n Client systems are bound to one NIS server (master \nor slave) during runtime. The addresses for the NIS mas-\nter and the slaves must be provided when joining a sys-\ntem to the NIS domain. Clients (and servers) can always \nbe members of only one NIS domain. To use the NIS \nuser database (and other NIS resources, such as auto-\nmount maps, netgroups, and host tables) after the system \nis bound, use the name service configuration file ( /etc/nss-\nwitch.conf ), as shown in Figure 5.5 . \n Using PAMs to Modify AuthN \n These user databases can easily be configured for use on \na given system through the /etc/nsswitch.conf file. However, \nin more complex situations, the administrator might want \nto fine-tune the types of acceptable authentication methods, \nsuch as Kerberos, or even configure multifactor authenti-\ncation. On many Unix systems, this is typically achieved \nthrough the pluggable authentication mechanism (PAM), \nas shown in Figure 5.6 . Traditionally, the PAM is config-\nured through the /etc/pam.conf file, but more modern imple-\nmentations use a directory structure, similar to the System \nV init scripts. For these systems the administrator needs to \nmodify the configuration files in the /etc/pam.d/ directory. \n Noninteractive Access \n The security configuration of noninteractive services can \nvary quite significantly. Especially popular network serv-\nices, such as LDAP, HTTP, or NFS, can use a wide vari-\nety of authentication and authorization mechanisms that \ndo not even need to be provided by the operating sys-\ntem. For example, an Apache Web server or a MySQL \ndatabase server might use its own user database, without \nrelying on any operating system services. \n# /etc/nsswitch.conf \n# \n# Example configuration of GNU Name Service Switch functionality.\n# \npasswd: files nis \ngroup: files nis\nshadow: files nis\nhosts: files nis dns\nnetworks: files\nprotocols: db files\nservices: db files\nethers: db files\nrpc: db files\nnetgroup: nis\n FIGURE 5.5 Sample nsswitch.conf for a Debian system. 
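Once a client is bound and the name service switch lists nis as a source for the user database (as in Figure 5.5), a few standard NIS commands can be used to confirm that the maps are actually being served. The following is only a sketch; the domain name reuses the hypothetical example from the NIS discussion above, the server name is likewise invented, and real output will differ:

$ domainname
nis.sales.Example.COM
$ ypwhich
nismaster.sales.example.com
$ ypcat passwd | head -3

Here ypwhich reports the NIS server the client is currently bound to, and ypcat dumps the passwd map that is consulted after the local files, following the order given in /etc/nsswitch.conf.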
\n" }, { "page_number": 106, "text": "Chapter | 5 Unix and Linux Security\n73\n Other Network Authentication \nMechanisms \n In 1983, BSD introduced the rlogin service. Unix \nadministrators have been using RSH, RCP, and other \ntools from this package for a long time; they are very \neasy to use and configure and provide simple access \nacross a small network of computers. The login was \nfacilitated through a very simple trust model: Any user \ncould create a .rhosts file in her home directory and \nspecify foreign hosts and users from which to accept \nlogins without proper credential checking. Over the \nrlogin protocol (TCP 513), the username of the rlogin \nclient would be transmitted to the host system, and \nin lieu of an authentication, the rshd daemon would \nsimply verify the preconfigured values. To prevent \naccess from untrusted hosts, the administrator could \nuse the /etc/hosts.equiv file to allow or deny individ-\nual hosts or groups of hosts (the latter through the use \nof NIS netgroups). \n Risks of Trusted Hosts and Networks \n Since no authentication ever takes place, this trust mech-\nanism should not be used. Not only does this system \nrely entirely on the correct functioning of the hostname \nresolution system, but in addition, there is no way to \ndetermine whether a host was actually replaced. 3 Also, \nthough rlogin-based trust systems might work for very \nsmall deployments, they become extremely hard to set \nup and operate with large numbers of machines. \n Replacing Telnet, rlogin, and FTP Servers \nand Clients with SSH \n The most sensible alternative to the traditional interac-\ntive session protocols such as Telnet is the secure shell \n(SSH) system. It is very popular on Unix systems, and \n# /etc/pam.d/common-password - password-related modules common to all services\n#\n# This file is included from other service-specific PAM config files,\n# and should contain a list of modules that define the services to be\n# used to change user passwords. The default is pam_unix.\n# Explanation of pam_unix options:\n#\n# The \"nullok\" option allows users to change an empty password, else\n# empty passwords are treated as locked accounts.\n# \n# The \"md5\" option enables MD5 passwords. Without this option, the\n# default is Unix crypt.\n#\n# The \"obscure\" option replaces the old `OBSCURE_CHECKS_ENAB' option in\n# login.defs.\n# \n# You can also use the \"min\" option to enforce the length of the new\n# password.\n#\n# See the pam_unix manpage for other options.\npassword requisite pam_unix.so nullok obscure md5\n# Alternate strength checking for password. Note that this\n# requires the libpam-cracklib package to be installed.\n# You will need to comment out the password line above and\n# uncomment the next two in order to use this.\n# (Replaces the `OBSCURE_CHECKS_ENAB', `CRACKLIB_DICTPATH')\n#\npassword required pam_cracklib.so retry=3 minlen=6 difok=3\npassword required pam_unix.so use_authtok nullok md5\n FIGURE 5.6 Setting the password strength on a Debian-based system through the PAM system. \n 3 This could actually be addressed through host authentication, but it \nis not a feature of the rlogin protocol. \n" }, { "page_number": 107, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n74\n pretty much all versions ship with a version of SSH. \nWhere SSH is not available, the open source package \nOpenSSH can easily be used instead 4 . 
\n SSH combines the ease-of-use features of the rlogin \ntools with a strong cryptographic authentication system. \nOn one hand, it is fairly easy for users to enable access \nfrom other systems; on the other hand, the secure shell \nprotocol uses strong cryptography to: \n ● Authenticate the connection, that is, establish the \nauthenticity of the user \n ● Protect the privacy of the connection through \nencryption \n ● Guarantee the integrity of the channel through \nsignatures \n This is done using either the RSA or DSA security \nalgorithm, which are both available for the SSH v2 5 pro-\ntocol. The cipher (see Figure 5.7 ) used for encryption \ncan be explicitly selected. \n The user must first create a public/private key pair \nthrough the ssh-keygen(1) tool. The output of the key \ngenerator is placed in the .ssh subdirectory of the user’s \nhome directory. This output consists of a private key file \ncalled id_dsa or id_rsa . This file must be owned by the \nuser and can only be readable by the user. In addition, \na file containing the public key is created, named in the \nsame way, with the extension .pub appended. The public \nkey file is then placed into the .ssh subdirectory of the \nuser’s home directory on the target system. \n Once the public and private keys are in place and the \nSSH daemon is enabled on the host system, all clients \nthat implement the SSH protocol can create connections. \nThere are four common applications using SSH: \n ● Interactive session is the replacement for Telnet and \nrlogin. Using the ssh(1) command line, the sshd daemon \ncreates a new shell and transfers control to the user. \n ● In a remotely executed script/command , ssh(1) allows \na single command with arguments to pass. This way, \na single remote command (such as a backup script) \ncan be executed on the remote system as long as this \ncommand is in the default path for the user. \n ● An SSH-enabled file transfer program can be used to \nreplace the standard FTP or FTP over SSL protocol. \n ● Finally, the SSH protocol is able to tunnel arbitrary \nprotocols. This means that any client can use the \nprivacy and integrity protection offered by SSH. In \nparticular, the X-Window system protocol can tunnel \nthrough an existing SSH connection by using the -X \ncommand-line switch. \n 5 . REDUCING EXPOSURE TO THREATS \nBY LIMITING SUPERUSER PRIVILEGES \n The superuser has almost unlimited power on a Unix \nsystem, which can be a significant problem. \n Controlling Root Access \n There are a number of ways to limit access for the root user. \n Configuring Secure Terminals \n Most Unix systems allow us to restrict root logins to \nspecial terminals, typically the system console. This \napproach is quite effective, especially if the console or \nthe allowed terminals are under strict physical access \ncontrol. The obvious downside of this approach is that \nremote access to the system can be very limited: using \nthis approach, access through any TCP/IP-based connec-\ntion cannot be configured, thus requiring a direct con-\nnection, such as a directly attached terminal or a modem. \n Configuration is quite different for the various Unix \nsystems. Figure 5.8 shows the comparison between \nSolaris and Debian. \n Gaining Root Privileges with su \n The su(1) utility allows changing the identity of an \ninteractive session. 
This is an effective mediation of the \nissues that come with restricting root access to secure \nterminals: Though only normal users can get access to \nthe machine through the network (ideally by limiting \nthe access protocols to those that protect the privacy of \nthe communication, such as SSH), they can change their \ninteractive session to a superuser session. \n Using Groups Instead of Root \n If users should be limited to executing certain commands \nwith superuser privileges, it is possible and common to \ncreate special groups of users. For these groups, we can set \nthe execution bit on programs (while disabling execution \nfor all others) and the SetID bit for the owner, in this case \nthe superuser. Therefore, only users of such a special group \ncan execute the given utility with superuser privileges. \n$ ssh host -luser1 -c aes192-cbc\n FIGURE 5.7 Create an interactive session on Solaris to host for user1 \nusing the AES cipher with 192 bits. \n 5 See [IETF4252]. http://tools.ietf.org/html/rfc4252 \n 4 See [IEEE04]. www.opengroup.org/onlinepubs/009695399/ \n" }, { "page_number": 108, "text": "Chapter | 5 Unix and Linux Security\n75\nOn Solaris simply edit the file /etc/default/login: \n# Copyright 2004 Sun Microsystems, Inc. All rights reserved.\n# Use is subject to license terms.\n# If CONSOLE is set, root can only login on that device.\n# Comment this line out to allow remote login by root.\n#\nCONSOLE=/dev/console\n# PASSREQ determines if login requires a password.\n#\nPASSREQ=YES\n# SUPATH sets the initial shell PATH variable for root\n#\nSUPATH=/usr/sbin:/usr/bin\n# SYSLOG determines whether the syslog(3) LOG_AUTH facility should be used\n# to log all root logins at level LOG_NOTICE and multiple failed login\n# attempts at LOG_CRIT.\n#\nSYSLOG=YES\n# The SYSLOG_FAILED_LOGINS variable is used to determine how many failed\n# login attempts will be allowed by the system before a failed login\n# message is logged, using the syslog(3) LOG_NOTICE facility. For \nexample,\n# if the variable is set to 0, login will log -all- failed login attempts.\n#\nSYSLOG_FAILED_LOGINS=5\nOn Debian: \n# The PAM configuration file for the Shadow `login' service\n#\n# Disallows root logins except on tty's listed in /etc/securetty\n# (Replaces the `CONSOLE' setting from login.defs)\nauth requisite pam_securetty.so\n# Disallows other than root logins when /etc/nologin exists\n# (Replaces the `NOLOGINS_FILE' option from login.defs)\nauth requisite pam_nologin.so\n# Standard Un*x authentication.\n@include common-auth\n# This allows certain extra groups to be granted to a user\n# based on things like time of day, tty, service, and user.\n# Please edit /etc/security/group.conf to fit your needs\n# (Replaces the `CONSOLE_GROUPS' option in login.defs)\n FIGURE 5.8 Restricting root access. \n" }, { "page_number": 109, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n76\n Using the sudo(1) Mechanism \n By far more flexible and easier to manage than the \napproach for enabling privileged execution based on \ngroups is the sudo(1) mechanism. Originally an open \nsource program, sudo(1) is available for most Unix dis-\ntributions. The detailed configuration is quite complex, \nand the manual page is quite informative. \n 6 . SAFEGUARDING VITAL DATA \nBY SECURING LOCAL AND \nNETWORK FILE SYSTEMS \n For production systems, there is a very effective way of \npreventing the modification of system-critical resources \nby unauthorized users or malicious software. 
Critical \nportions of the file systems (such as the locations of \nbinary files, system libraries, and some configuration \nfiles) do not necessarily change very often. \n Directory Structure and Partitioning for \nSecurity \n In fact, any systemwide binary code should probably only \nbe modified by the systems administrators. In these cases, \nit is very effective to properly partition the file system. \n Employing Read-Only Partitions \n The reason to properly partition the file system (see \n Figure 5.9 ) is so that only frequently changing files \n(such as user data, log files, and the like) are hosted \non readable file systems. All other storage can then be \nmounted on read-only partitions. \nThe following scheme is a good start for partitioning \nwith read-only partitions: \n • \nBinaries and Libraries: /bin, /lib, /sbin, /usr - \n \nread-only\n • \nLogs and frequently changing system data: /var, \n \n/usr/var - writable\n • \nUser home directories: /home, /export/home - \n \nwritable \n • \nAdditional software packages: /opt, /usr/local - \n \nread-only\n • \nSystem configuration: /etc, /usr/local/etc - \n \nwritable \n • \nEverything else: Root (l ) - read-only \nObviously, this can only be a start and should be \nevaluated for each system and application. Updating \noperating system files, including those on the root file \nsystem, should be performed in single-user mode with all\npartitions mounted writable.\n FIGURE 5.9 Secure partitioning. \nauth optional pam_group.so\n# Uncomment and edit /etc/security/time.conf if you need to set\n# time restrainst on logins.\n# (Replaces the `PORTTIME_CHECKS_ENAB' option from login.defs\n# as well as /etc/porttime)\naccount requisite pam_time.so\n# Uncomment and edit /etc/security/access.conf if you need to\n# set access limits.\n# (Replaces /etc/login.access file)\naccount required pam_access.so\n# Sets up user limits according to /etc/security/limits.conf\n# (Replaces the use of /etc/limits in old login)\nsession required pam_limits.so\n# Prints the last login info upon succesful login\n# (Replaces the `LASTLOG_ENAB' option from login.defs)\nsession optional pam_lastlog.so\n# Standard Un*x account and session\n@include common-account\n@include common-session\n@include common-password\n FIGURE 5.8 (Continued). \n" }, { "page_number": 110, "text": "Chapter | 5 Unix and Linux Security\n77\n Ownership and Access Permissions \n To prevent inadvertent or malicious access to critical \ndata, it is vitally important to verify the correct ownership \nand permission set for all critical files in the file system. \nThe Unix find(1) command is an effective way to locate \nfiles with certain characteristics. In the following, a \nnumber of sample command-line options for this utility \nare given to locate files. \n Locate SetID Files \n Since executables with the SetID bit set are often used \nto allow the execution of a program with superuser \n privileges, it is vitally important to monitor these files on \na regular basis. \n Another critical permission set is that of world-\nwritable files; there should be no system-critical files in \nthis list, and users should be aware of any files in their \nhome directories that are world-writable (see Figure 5.10 ). \n Finally, files and directories that are not owned by cur-\nrent users can be found by the code shown in Figure 5.11 . \n For groups, just use -nogroup instead. \n$ find / \\( -perm -04000 -o -perm -02000\\) -type f -xdev -print\n FIGURE 5.10 Finding files with SUID and SGID set. 
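The check for world-writable files mentioned above can be expressed with the same utility; this is a sketch in the style of the preceding examples (the -xdev option keeps the search on the local file system, and option support can vary slightly between find implementations):

$ find / -perm -0002 -type f -xdev -print

Substituting -type d lists world-writable directories instead.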
\n$ find / -nouser\n FIGURE 5.11 Finding files without users. \n" }, { "page_number": 111, "text": "This page intentionally left blank\n" }, { "page_number": 112, "text": "79\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n Eliminating the Security Weakness of \nLinux and UNIX Operating Systems \n Mario Santana \n Terremark \n Chapter 6 \n Linux and other Unix-like operating systems are preva-\nlent on the Internet for a number of reasons. As an \noperating system designed to be flexible and robust, \nUnix lends itself to providing a wide array of host- and \nnetwork-based services. Unix also has a rich culture \nfrom its long history as a fundamental part of comput-\ning research in industry and academia. Unix and related \noperating systems play a key role as platforms for deliv-\nering the key services that make the Internet possible. \n For these reasons, it is important that information secu-\nrity practitioners understand fundamental Unix concepts in \nsupport of practical knowledge of how Unix systems might \nbe securely operated. This chapter is an introduction to \nUnix in general and to Linux in particular, presenting some \nhistorical context and describing some fundamental aspects \nof the operating system architecture. Considerations for \nhardening Unix deployments will be contemplated from \nnetwork-centric, host-based, and systems management per-\nspectives. Finally, proactive considerations are presented \nto identify security weaknesses to correct them and to deal \neffectively with security breaches when they do occur. \n 1. INTRODUCTION TO LINUX AND UNIX \n A simple Google search for “ define:unix ” yields many \ndefinitions, including this one from Microsoft: “ A pow-\nerful multitasking operating system developed in 1969 \nfor use in a minicomputer environment; still a widely \nused network operating system. ” 1 \n What Is Unix? \n Unix is many things. Officially, it is a brand and an oper-\nating system specification. In common usage the word \n Unix is often used to refer to one or more of many operat-\ning systems that derive from or are similar to the oper-\nating system designed and implemented about 40 years \nago at AT & T Bell Laboratories. Throughout this chapter, \nwe’ll use the term Unix to include official Unix-branded \noperating systems as well as Unix-like operating systems \nsuch as BSD, Linux, and even Macintosh OS X. \n History \n Years after AT & T’s original implementation, there fol-\nlowed decades of aggressive market wars among many \noperating system vendors, each claiming that its operating \nsystem was Unix. The ever-increasing incompatibilities \nbetween these different versions of Unix were seen as a \nmajor deterrent to the marketing and sales of Unix. As per-\nsonal computers grew more powerful and flexible, running \ninexpensive operating systems like Microsoft Windows \nand IBM OS/2, they threatened Unix as the server plat-\nform of choice. In response to these and other marketplace \npressures, most major Unix vendors eventually backed \nefforts to standardize the Unix operating system. \n Unix Is a Brand \n Since the early 1990s, the Unix brand has been owned \nby The Open Group. This organization manages a set of \nspecifications with which vendors must comply to use \nthe Unix brand in referring to their operating system \nproducts. 
In this way, The Open Group provides a guar-\nantee to the marketplace that any system labeled as Unix \nconforms to a strict set of standards. \n Unix Is a Specification \n The Open Group’s standard is called the Single Unix \nSpecification. It is created in collaboration with the \nInstitute of Electrical and Electronics Engineers (IEEE), \nthe International Standards Organization (ISO), and others. \n 1 Microsoft, n.d., “Glossary of Networking Terms for Visio IT Pro-\nfessionals”, retrieved September 22, 2008, from Microsoft TechNet: \n http://technet.microsoft.com/en-us/library/cc751329.aspx#XSLT\nsection142121120120 . \n" }, { "page_number": 113, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n80\nThe specification is developed, refined, and updated in \nan open, transparent process. \n The Single Unix Specification comprises several \ncomponents, covering core system interfaces such as \nsystem calls as well as commands, utilities, and a devel-\nopment environment based on the C programming lan-\nguage. Together, these describe a “ functional superset of \nconsensus-based specifications and historical practice. ” 2 \n Lineage \n The phrase historical practice in the description of the \nSingle Unix Specification refers to the many operating \nsystems historically referring to themselves as Unix. \nThese include everything from AT & T’s original releases \nto the versions released by the University of California \nat Berkeley and major commercial offerings by the likes \nof IBM, Sun, Digital Equipment Corporation (DEC), \nHewlett-Packard (HP), the Santa Cruz Operation (SCO), \nNovell, and even Microsoft. But any list of Unix oper-\nating systems would be incomplete if it didn’t mention \nLinux (see Figure 6.1 ). \n What Is Linux? \n Linux is a bit of an oddball in the Unix operating system \nlineup. That’s because, unlike the Unix versions released \nby the major vendors, Linux did not reuse any existing \nsource code. Instead, Linux was developed from scratch \nby a Finnish university student named Linus Torvalds. \n Most Popular Unix-Like OS \n Linux was written from the start to function very simi-\nlarly to existing Unix products. And because Torvalds \nworked on Linux as a hobby, with no intention of mak-\ning money, it was distributed for free. These factors and \nothers contributed to making Linux the most popular \nUnix operating system today. \n Linux Is a Kernel \n Strictly speaking, Torvalds ’ pet project has provided only \none part of a fully functional Unix operating system: the \nkernel. The other parts of the operating system, includ-\ning the commands, utilities, development environment, \ndesktop environment, and other aspects of a full Unix \noperating system, are provided by other parties, includ-\ning GNU, XOrg, and others. \n Linux Is a Community \n Perhaps the most fundamentally different thing about \nLinux is the process by which it is developed and \nimproved. As the hobby project that it was, Linux was \nreleased by Torvalds on the Internet in the hopes that \nsomeone out there might find it interesting. A few \nprogrammers saw Torvalds ’ hobby kernel and began \nworking on it for fun, adding features and fleshing out \nfunctionality in a sort of unofficial partnership with \nTorvald. At this point, everyone was just having fun, \ntinkering with interesting concepts. As more and more \npeople joined the unofficial club, Torvalds ’ pet project \nballooned into a worldwide phenomenon. 
\n Today, Linux is developed and maintained by hun-\ndreds of thousands of contributors all over the world. \nIn 1996, Eric S. Raymond 3 famously described the dis-\ntributed development methodology used by Linux as a \nbazaar — a wild, uproarious collection of people, each \ndeveloping whatever feature they most wanted in an \noperating system, or improving whatever shortcoming \nmost impacted them; yet somehow, this quick-moving \ncommunity resulted in a development process that was \nstable as a whole, and that produced an amazing amount \nof progress in a very short time. \n This is radically different from the way in which \nUnix systems have typically been developed. If the \nLinux community is like a bazaar, then other Unix sys-\ntems can be described as a cathedral — carefully pre-\nplanned and painstakingly assembled over a long period \nof time, according to specifications handed down by \nmaster architects from previous generations. Recently, \nhowever, some of the traditional Unix vendors have \nstarted moving toward a more decentralized, bazaar-like \ndevelopment model similar in many ways to the Linux \nmethodology. \n Linux Is Distributions \n The Open Source movement in general is very impor-\ntant to the success of Linux. Thanks to GNU, XOrg, and \nother open-source contributors, there was an almost com-\nplete Unix already available when the Linux kernel was \nreleased. Linux only filled in the final missing component \n 2 The Open Group, n.d., “The Single Unix Specifi cation”, retrieved \nSeptember 22, 2008, from What Is Unix: www.unix.org/what_is_unix/\nsingle_unix_specifi cation.html . \n 3 E. S. Raymond, September 11, 2000, “The Cathedral and the \nBazaar”, retrieved September 22, 2008, from Eric S. Raymond’s homep-\nage: www.catb.org/esr/writings/cathedral-bazaar/cathedral-bazaar/index.\nhtml . \n" }, { "page_number": 114, "text": "Chapter | 6 Eliminating the Security Weakness of Linux and UNIX Operating Systems\n81\n FIGURE 6.1 The simplified Unix family tree presents a timeline of some of today’s most successful Unix variants. 10 \n 10 M. Hutton, July 9, 2008, “Image: Unix History”, retrieved October 6, 2008, from Wikipedia: http://en.wikipedia.org/wiki/Image:Unix_history-simple.svg . \n" }, { "page_number": 115, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n82\nof a no-cost, open source Unix. Because the majority of \nthe other parts of the operating system came from the \nGNU project, Linux is also known as GNU/Linux. \n To actually install and run Linux, it is necessary \nto collect all the other operating system components. \nBecause of the interdependency of the operating system \ncomponents — each component must be compatible with \nthe others — it is important to gather the right versions \nof all these components. In the early days of Linux, this \nwas quite a challenge! \n Soon, however, someone gathered up a self-consistent \nset of components and made them all available from a \ncentral download location. The first such efforts include \nH. J. Lu’s “ boot/root ” floppies and MCC Interim Linux. \nThese folks did not necessarily develop any of these \ncomponents; they only redistributed them in a more con-\nvenient package. Other people did the same, releasing \nnew bundles called distributions whenever a major upgrade \nwas available. 
\n Some distributions touted the latest in hardware sup-\nport; others specialized in mathematics or graphics or \nanother type of computing; still others built a distribu-\ntion that would provide the simplest or most attractive \nuser experience. Over time, distributions have become \nmore robust, offering important features such as package \nmanagement, which allows a user to safely upgrade parts \nof the system without reinstalling everything else. \n Linux Standard Base \n Today there are dozens of Linux distributions. Different \nflavors of distributions have evolved over the years. \nA primary distinguishing feature is the package manage-\nment system. Some distributions are primarily volunteer \ncommunity efforts; others are commercial offerings. See \n Figure 6.2 for a timeline of Linux development. 4 \n The explosion in the number of different Linux dis-\ntributions created a situation reminiscent of the Unix \nwars of previous decades. To address this issue, the \nLinux Standard Base was created to specify certain key \nstandards of behavior for conforming Linux distribu-\ntions. Most major distributions comply with the Linux \nStandard Base specifications. \n System Architecture \n The architecture of Unix operating systems is relatively \nsimple. The kernel interfaces with hardware and provides \ncore functionality for the system. File systems pro-\nvide permanent storage and access to many other kinds \nof functionality. Processes embody programs as their \ninstructions are being executed. Permissions describe the \nactions that users may take on files and other resources. \n Kernel \n The operating system kernel manages many of the fun-\ndamental details that an operating system needs to deal \nwith, including memory, disk storage, and low-level net-\nworking. In general, the kernel is the part of the operat-\ning system that talks directly to hardware; it presents an \nabstracted interface to the rest of the operating system \ncomponents. \n Because the kernel understands all the different sorts \nof hardware that the operating system deals with, the rest \nof the operating system is freed from needing to under-\nstand all those underlying details. The abstracted inter-\nface presented by the kernel allows other parts of the \noperating system to read and write files or communicate \non the network without knowing or caring what kinds of \ndisks or network adapter are installed. \n File System \n A fundamental aspect of Unix is its file system. Unix \npioneered the hierarchical model of directories that con-\ntain files and/or other directories to allow the organiza-\ntion of data into a tree structure. Multiple file systems \ncould be accessed by connecting them to empty directo-\nries in the root file system. In essence, this is very much \nlike grafting one hierarchy onto an unused branch of \nanother. There is no limit to the number of file systems \nthat can be mounted in this way. \n The file system hierarchy is also used to provide \nmore than just access to and organization of local files. \nNetwork data shares can also be mounted, just like file \nsystems on local disks. And special files such as device \nfiles, first in/first out (FIFO) or pipe files, and others \ngive direct access to hardware or other system features. \n Users and Groups \n Unix was designed to be a time-sharing system, and as \nsuch has been multiuser since its inception. 
Users are \nidentified in Unix by their usernames, but internally \neach is represented as a unique identifying integer \ncalled a user ID , or UID . Each user can also belong to \none or more groups. Like users, groups are identified \nby their names, but they are represented internally as a \nunique integer called a group ID , or GID . Each file or \n 4 A. Lundqvist, May 12, 2008, “Image:Gldt”, retrieved October 6, \n2008, from Wikipedia: http://en.wikipedia.org/wiki/Image:Gldt.svg . \n" }, { "page_number": 116, "text": "Chapter | 6 Eliminating the Security Weakness of Linux and UNIX Operating Systems\n83\n FIGURE 6.2 History of Linux distributions . \n" }, { "page_number": 117, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n84\ndirectory in a Unix file system is associated with a user \nand a group. \n Permissions \n Unix has traditionally had a simple permissions archi-\ntecture, based on the user and group associated with \nfiles in the file system. This scheme makes it possible \nto specify read, write, and/or execute permissions, along \nwith a special permission setting whose effect is context-\ndependent. Furthermore, it’s possible to set these permis-\nsions independently for the file’s owner; the file’s group, \nin which case the permission applies to all users, other \nthan the owner, who are members of that group; and to \nall other users. The chmod command is used to set the \npermissions by adding up the values of each permission, \nas shown in Table 6.1 . \n The Unix permission architecture has historically \nbeen the target of criticism for its simplicity and inflex-\nibility. It is not possible, for example, to specify a differ-\nent permission setting for more than one user or more \nthan one group. These limitations have been addressed in \nmore recent file system implementations using extended \nfile attributes and access control lists. \n Processes \n When a program is executed, it is represented in a Unix \nsystem as a process. The kernel keeps track of many \npieces of information about each process. This informa-\ntion is required for basic housekeeping and advanced \ntasks such as tracing and debugging. This information \nrepresents the user, group, and other data used for mak-\ning security decisions about a process’s access rights to \nfiles and other resources. \n 2. HARDENING LINUX AND UNIX \n With a basic understanding of the fundamental concepts of \nthe Unix architecture, let’s take a look at the practical work \nof securing a Unix deployment. First we’ll review consid-\nerations for securing Unix machines from network-borne \nattacks. Then we’ll look at security from a host-based per-\nspective. Finally, we’ll talk about systems management \nand how different ways of administering a Unix system \ncan impact security. \n Network Hardening \n Defending from network-borne attacks is arguably the \nmost important aspect of Unix security. Unix machines \nare used heavily to provide network-based services, run-\nning Web sites, DNS, firewalls, and many more. To pro-\nvide these services, Unix systems must be connected to \nhostile networks, such as the Internet, where legitimate \nusers can easily access and make use of these services. 
Unfortunately, providing easy access to legitimate users makes the system easily accessible to bad actors who would subvert access controls and other security measures to steal sensitive information, change reference data, or simply make services unavailable to legitimate users. Attackers can probe systems for security weaknesses, identify and exploit vulnerabilities, and generally wreak digital havoc with relative impunity from anywhere around the globe.

Minimizing Attack Surface

Every way in which an attacker can interact with the system poses a security risk. Any system that makes available a large number of network services, especially complex services such as the custom Web applications of today, suffers a higher likelihood that inadequate permissions, a software bug, or some other error will present attackers with an opportunity to compromise security. In contrast, even a very insecure service cannot be compromised if it is not running.

A pillar of any security architecture is the concept of minimizing the attack surface. By reducing the number of enabled network services and by reducing the available functionality of those services that are enabled, a system presents a smaller set of functions that can be subverted by an attacker. Other ways to reduce the attackable surface area are to deny network access from unknown hosts when possible and to limit the privileges of running services, so as to limit the extent of the damage they might be subverted to cause.

TABLE 6.1 Unix permissions and chmod

chmod usage    Read           Write          Execute        Special
User           u+r or 0400    u+w or 0200    u+x or 0100    u+s or 4000
Group          g+r or 0040    g+w or 0020    g+x or 0010    g+s or 2000
Other          o+r or 0004    o+w or 0002    o+x or 0001    o+t or 1000

Eliminate Unnecessary Services

The first step in reducing attack surface is to disable unnecessary services provided by a server. In Unix, services are enabled in one of several ways. The "Internet daemon," or inetd, is a historically popular mechanism for managing network services. Like many Unix programs, inetd is configured by editing a text file; in the case of inetd, this text file is /etc/inetd.conf, and unnecessary services should be commented out of this file. Today a more modular replacement for inetd, called xinetd, is gaining popularity. The configuration for xinetd is not contained in any single file but in many files located in the /etc/xinetd.d/ directory. Each file in this directory configures a single service, and a service may be disabled by removing the file or by making the appropriate changes to the file.

Many Unix services are not managed by inetd or xinetd, however. Network services are often started by the system's initialization scripts during the boot sequence. Derivatives of the BSD Unix family historically used a simple initialization script located in /etc/rc. To control the services that are started during the boot sequence, it is necessary to edit this script.

Recent Unices (the plural of Unix), even BSD derivatives, use something similar to the initialization scheme of the System V family. In this scheme, a "run level" is chosen at boot time.
The default run level is defined \nin /etc/inittab; typically, it is 3 or 5. The initialization \nscripts for each run level are located in /etc/rc X .d, where \n X represents the run-level number. The services that are \nstarted during the boot process are controlled by adding \nor removing scripts in the appropriate run-level direc-\ntory. Some Unices provide tools to help manage these \nscripts, such as the chkconfig command in Red Hat Linux \nand derivatives. There are also other methods of manag-\ning services in Unix, such as the Service Management \nFacility of Solaris 10. \n No matter how a network service is started or man-\naged, however, it must necessarily listen for network \nconnections to make itself available to users. This fact \nmakes it possible to positively identify all running net-\nwork services by looking for processes that are listen-\ning for network connections. Almost all versions of \nUnix provide a command that makes this a trivial task. \nThe netstat command can be used to list various kinds \nof information about the network environment of a Unix \nhost. Running this command with the appropriate flags \n(usually – lut ) will produce a listing of all open network \nports, including those that are listening for incoming \nconnections (see Figure 6.3 ). \n Every such listening port should correspond to a \nnecessary service that is well understood and securely \nconfigured. \n Host-based \n Obviously, it is impossible to disable all the services \nprovided by a server. However, it is possible to limit \nthe hosts that have access to a given service. Often it is \npossible to identify a well-defined list of hosts or sub-\nnets that should be granted access to a network service. \nThere are several ways in which this restriction can be \nconfigured. \n A classical way of configuring these limitations \nis through the tcpwrappers interface. The tcpwrap-\npers functionality is to limit the network hosts that are \nallowed to access services provided by the server. These \ncontrols are configured in two text files, /etc/hosts.\nallow and /etc/hosts.deny. This interface was originally \ndesigned to be used by inetd and xinetd on behalf of the \nservices they manage. Today most service-providing \nsoftware directly supports this functionality. \n Another, more robust method of controlling network \naccess is through firewall configurations. Most modern \nUnices include some form of firewall capability: IPFilter, \nused by many commercial Unices; IPFW, used by most \nof the BSD variants, and IPTables, used by Linux. In all \ncases, the best way to arrive at a secure configuration is \nto create a default rule to deny all traffic, and to then cre-\nate the fewest, most specific exceptions possible. \n Modern firewall implementations are able to analyze \nevery aspect of the network traffic they filter as well as \naggregate traffic into logical connections and track the \nstate of those connections. The ability to accept or deny \nconnections based on more than just the originating \nnetwork address and to end a conversation when cer-\ntain conditions are met makes modern firewalls a much \nmore powerful control for limiting attack surface than \ntcpwrappers. \n chroot and Other Jails \n Eventually some network hosts must be allowed to \naccess a service if it is to be useful at all. 
In fact, it is \noften necessary to allow anyone on the Internet to access \n" }, { "page_number": 119, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n86\na service, such as a public Web site. Once a malicious \nuser can access a service, there is a risk that the service \nwill be subverted into executing unauthorized instruc-\ntions on behalf of the attacker. The potential for damage \nis limited only by the permissions that the service pro-\ncess has to access resources and to make changes on the \nsystem. For this reason, an important security measure \nis to limit the power of a service to the bare minimum \nnecessary to allow it to perform its duties. \n A primary method of achieving this goal is to associ-\nate the service process with a user who has limited per-\nmissions. In many cases, it’s possible to configure a user \nwith very few permissions on the system and to associ-\nate that user with a service process. In these cases, the \nservice can only perform a limited amount of damage, \neven if it is subverted by attackers. \n Unfortunately, this is not always very effective \nor even possible. A service must often access sensi-\ntive server resources to perform its work. Configuring \na set of permissions to allow access to only the sensi-\ntive information required for a service to operate can be \ncomplex or impossible. \n In answer to this challenge, Unix has long supported \nthe chroot and ulimit interfaces as ways to limit the access \nthat a powerful process has on a system. The chroot \ninterface limits a process’s access on the file system. \nRegardless of actual permissions, a process run under a \n chroot jail can only access a certain part of the file system. \nCommon practice is to run sensitive or powerful services \nin a chroot jail and make a copy of only those file system \nresources that the service needs in order to operate. This \nallows a service to run with a high level of system access, \nyet be unable to damage the contents of the file system \noutside the portion it is allocated. 5 \n The ulimit interface is somewhat different in that it \ncan configure limits on the amount of system resources a \nprocess or user may consume. A limited amount of disk \nspace, memory, CPU utilization, and other resources can \nbe set for a service process. This can curtail the possi-\nbility of a denial-of-service attack because the service \ncannot exhaust all system resources, even if it has been \nsubverted by an attacker. 6 \n Access Control \n Reducing the attack surface area of a system limits the \nways in which an attacker can interact and therefore sub-\nvert a server. Access control can be seen as another way \n FIGURE 6.3 Output of netstat – lut . \n 6 W. Richard Stevens, (1992), Advanced Programming in the UNIX \nEnvironment , Addison-Wesley, Reading. \n 5 W. Richard Stevens, (1992), Advanced Programming in the UNIX \nEnvironment , Addison-Wesley, Reading. \n" }, { "page_number": 120, "text": "Chapter | 6 Eliminating the Security Weakness of Linux and UNIX Operating Systems\n87\nto reduce the attack surface area. By requiring all users to \nprove their identity before making any use of a service, \naccess control reduces the number of ways in which an \nanonymous attacker can interact with the system. \n In general, access control involves three phases. \nThe first phase is identification, where a user asserts his \nidentity. The second phase is authentication, where the \nuser proves his identity. 
The third phase is authorization, \nwhere the server allows or disallows particular actions \nbased on permissions assigned to the authenticated user. \n Strong Authentication \n It is critical, therefore, that a secure mechanism is used \nto prove the user’s identity. If this mechanism were to \nbe subverted, an attacker would be able to impersonate \na user to access resources or issue commands with what-\never authorization level has been granted to that user. \nFor decades, the primary form of authentication has been \nthrough the use of passwords. However, passwords suf-\nfer from several weaknesses as a form of authentication, \npresenting attackers with opportunities to impersonate \nlegitimate users for illegitimate ends. Bruce Schneier \nhas argued for years that “ passwords have outlived their \nusefulness as a serious security device. ” 7 \n More secure authentication mechanisms include two-\nfactor authentication and PKI certificates. \n Two-Factor Authentication Two-factor authentication \ninvolves the presentation of two of the following types \nof information by users to prove their identity: some-\nthing they know, something they have, or something they \nare. The first factor, something they know, is typified by \na password or a PIN — some shared secret that only the \nlegitimate user should know. The second factor, some-\nthing they have, is usually fulfilled by a unique physical \ntoken (see Figure 6.4 ). RSA makes a popular line of such \ntokens, but cell phones, matrix cards, and other alterna-\ntives are becoming more common. The third factor, some-\nthing they are, usually refers to biometrics. \n Unix supports various ways to implement two-factor \nauthentication into the system. Pluggable Authentication \nModules, or PAMs, allow a program to use arbitrary \nauthentication mechanisms without needing to manage \nany of the details. PAMs are used by Solaris, Linux, and \nother Unices. BSD authentication serves a similar pur-\npose and is used by several major BSD derivatives. \n With PAM or BSD authentication, it is possible to \nconfigure any combination of authentication mecha-\nnisms, including simple passwords, biometrics, RSA \ntokens, Kerberos, and more. It’s also possible to config-\nure a different combination for different services. This \nkind of flexibility allows a Unix security administrator \nto implement a very strong authentication requirement \nas a prerequisite for access to sensitive services. \n PKI Strong authentication can also be implemented using \na Private Key Infrastructure, or PKI. Secure Socket Layer, \nor SSL, is a simplified PKI designed for secure communi-\ncations, familiar from its use in securing traffic on the Web. \nUsing a similar foundation of technologies, it’s possible to \nissue and manage certificates to authenticate users rather \nthan Web sites. Additional technologies, such as a trusted \nplatform module or a smart card, simplify the use of these \ncertificates in support of two-factor authentication. \n Dedicated Service Accounts \n After strong authentication, limiting the complexity of the \nauthorization phase is the most important part of access \ncontrol. User accounts should not be authorized to per-\nform sensitive tasks. Services should be associated with \n FIGURE 6.4 Physical tokens used for two-factor authentication. \n 7 B. Schneier, December 14, 2006, Real-World Passwords , retrieved \nOctober 9, 2008, from Schneier on Security: www.schneier.com/blog/\narchives/2006/12/realworld_passw.html . 
\n" }, { "page_number": 121, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n88\ndedicated user accounts, which should then be author-\nized to perform only those tasks required for providing \nthat service. \n Additional Controls \n In addition to minimizing the attack surface area and \nimplementing strong access controls, there are several \nimportant aspects of securing a Unix network server. \n Encrypted Communications \n One of the ways an attacker can steal sensitive infor-\nmation is to eavesdrop on network traffic. Information \nis vulnerable as it flows across the network, unless it is \nencrypted. Sensitive information, including passwords \nand intellectual property, are routinely transmitted over \nthe network. Even information that is seemingly useless \nto an attacker can contain important clues to help a bad \nactor compromise security. \n File Transfer Protocol (FTP), World Wide Web \n(WWW), and many other services that transmit informa-\ntion over the network support the Secure Sockets Layer \nstandard, or SSL, for encrypted communications. For \nserver software that doesn’t support SSL natively, wrap-\npers like stunnel provide transparent SSL functionality. \n No discussion of Unix network encryption can be \ncomplete without mention of Secure Shell, or SSH. SSH \nis a replacement for Telnet and RSH, providing remote \ncommand-line access to Unix systems as well as other \nfunctionality. SSH encrypts all network communications \nusing SSL, mitigating many of the risks of Telnet and RSH. \n Log Analysis \n In addition to encrypting network communications, it is \nimportant to keep a detailed activity log to provide an \naudit trail in case of anomalous behavior. At a minimum, \nthe logs should capture system activity such as logon and \nlogoff events as well as service program activity, such as \nFTP, WWW, or Structured Query Language (SQL) logs. \n Since the 1980s, the syslog service has historically \nbeen used to manage log entries in Unix. Over the years, \nthe original implementation has been replaced by more \nfeature-rich implementations, such as syslog-ng and rsys-\nlog . These systems can be configured to send log messages \nto local files as well as remote destinations, based on inde-\npendently defined verbosity levels and message sources. \n The syslog system can independently route messages \nbased on the facility, or message source, and the level, or \nmessage importance. The facility can identify the mes-\nsage as pertaining to the kernel, the email system, user \nactivity, an authentication event, or any of various other \nservices. The level denotes the criticality of the message \nand can typically be one of emergency, alert, critical, \nerror, warning, notice, informational, and debug . Under \nLinux, the klog process is responsible for handling log \nmessages generated by the kernel; typically, klog is con-\nfigured to route these messages through syslog, just like \nany other process. \n Some services, such as the Apache Web server, have \nlimited or no support for syslog. These services typically \ninclude the ability to log activity to a file independently. \nIn these cases, simple scripts can redirect the contents \nof these files to syslog for further distribution and/or \nprocessing. \n Relevant logs should be copied to a remote, secure \nserver to ensure that they cannot be tampered with. \nAdditionally, file hashes should be used to identify any \nattempt to tamper with the logs. 
In this way, the audit \ntrail provided by the log files can be depended on as a \nsource of uncompromised information about the security \nstatus of the system. \n IDS/IPS \n Intrusion detection systems (IDSs) and intrusion preven-\ntion systems (IPSs) have become commonplace security \nitems on today’s networks. Unix has a rich heritage of \nsuch software, including Snort, Prelude, and OSSEC. \nCorrectly deployed, an IDS can provide an early warn-\ning of probes and other precursors to attack. \n Host Hardening \n Unfortunately, not all attacks originate from the network. \nMalicious users often gain access to a system through \nlegitimate means, bypassing network-based defenses. \nThere are various steps that can be taken to harden a \nUnix system from a host-based attack such as this. \n Permissions \n The most obvious step is to limit the permissions of user \naccounts on the Unix host. Recall that every file and \ndirectory in a Unix file system is associated with a sin-\ngle user and a single group. User accounts should each \nhave permissions that allow full control of their respec-\ntive home directories. Together with permissions to read \nand execute system programs, this allows most of the \ntypical functionality required of a Unix user account. \nAdditional permissions that might be required include \nmail spool files and directories as well as crontab files \nfor scheduling tasks. \n" }, { "page_number": 122, "text": "Chapter | 6 Eliminating the Security Weakness of Linux and UNIX Operating Systems\n89\n Administrative Accounts \n Setting permissions for administrative users is a more \ncomplicated question. These accounts must access very \npowerful system-level commands and resources in the \nroutine discharge of their administrative functions. For \nthis reason, it’s difficult to limit the tasks these users \nmay perform. It’s possible, however, to create special-\nized administrative user accounts, then authorize these \naccounts to access a well-defined subset of administra-\ntive resources. Printer management, Web site administra-\ntion, email management, database administration, storage \nmanagement, backup administration, software upgrades, \nand other specific administrative functions common to \nUnix systems lend themselves to this approach. \n Groups \n Often it is convenient to apply permissions to a set of \nusers rather than a single user or all users. The Unix \ngroup mechanism allows for a single user to belong to \none or more groups and for file system permissions and \nother access controls to be applied to a group. \n File System Attributes and ACLs \n It can become unfeasibly complex to implement and \nmanage anything more than a simple permissions \nscheme using the classical Unix file system permission \ncapabilities. To overcome this issue, modern Unix file \nsystems support access control lists, or ACLs. Most Unix \nfile systems support ACLs using extended attributes that \ncould be used to store arbitrary information about any \ngiven file or directory. By recognizing authorization \ninformation in these extended attributes, the file system \nimplements a comprehensive mechanism to specify arbi-\ntrarily complex permissions for any file system resource. \n ACLs contain a list of access control entries , or \nACEs, which specify the permissions that a user or \ngroup has on the file system resource in question. On \nmost Unices, the chacl command is used to view and \nset the ACEs of a given file or directory. 
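ACLs extend the classical owner/group/other permission bits, and reviewing those basic bits remains a useful first step when hardening a host. The short Python sketch below walks a directory tree and flags regular files that are writable by "other," one common permission weakness. The starting directory is only an example, and a real audit would check many more conditions (setuid/setgid bits, group-writable system files, and so on).

```python
import os
import stat
from pathlib import Path

def world_writable(root: str) -> list[str]:
    """Walk a directory tree and report regular files writable by 'other' --
    a classic permission weakness worth reviewing when hardening a host."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = Path(dirpath) / name
            try:
                mode = path.lstat().st_mode
            except OSError:
                continue  # unreadable or vanished; skip it
            if stat.S_ISREG(mode) and mode & stat.S_IWOTH:
                findings.append(str(path))
    return findings

if __name__ == "__main__":
    # Example starting point only; choose directories relevant to the audit.
    for f in world_writable("/usr/local"):
        print("world-writable:", f)
```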
The ACL sup-\nport in modern Unix file systems provides a fine-grained \nmechanism for managing complex permissions require-\nments. ACLs do not make the setting of minimum per-\nmissions a trivial matter, but complex scenarios can now \nbe addressed effectively. \n Intrusion Detection \n Even after hardening a Unix system with restrictive user \npermissions and ACLs, it’s important to maintain logs of \nsystem activity. As with activity logs of network services, \nhost-centric activity logs track security-relevant events \nthat could show symptoms of compromise or evidence \nof attacks in the reconnaissance or planning stages. \n Audit Trails \n Again, as with network activity logs, Unix has leaned \nheavily on syslog to collect, organize, distribute, and store \nlog messages about system activity. Configuring syslog for \nsystem messages is the same as for network service mes-\nsages. The kernel’s messages, including those messages \ngenerated on behalf of the kernel by klogd under Linux, \nare especially relevant from a host-centric point of view. \n An additional source of audit trail data about system \nactivity is the history logs kept by a login shell such as \n bash . These logs record every command the user issued \nat the command line. The bash shell and others can be \nconfigured to keep these logs in a secure location and to \nattach time stamps to each log entry. This information is \ninvaluable in identifying malicious activity, both as it is \nhappening as well as after the fact. \n File Changes \n Besides tracking activity logs, monitoring file changes \ncan be a valuable indicator of suspicious system activity. \nAttackers often modify system files to elevate privileges, \ncapture passwords or other credentials, establish back-\ndoors to ensure future access to the system, and support \nother illegitimate uses. Identifying these changes early \ncan often foil an attack in progress before the attacker is \nable to cause significant damage or loss. \n Programs such as Tripwire and Aide have been \naround for decades; their function is to monitor the file \nsystem for unauthorized changes and raise an alert when \none is found. Historically, they functioned by scan-\nning the file system and generating a unique hash , or \nfingerprint, of each file. On future runs, the tool would \nrecalculate the hashes and identify changed files by \nthe difference in the hash. Limitations of this approach \ninclude the need to regularly scan the entire file system, \nwhich can be a slow operation, as well as the need to \nsecure the database of file hashes from tampering. \n Today many Unix systems support file change moni-\ntoring: Linux has dnotify and inotify; Mac OS X has \nFSEvents, and other Unices have File Alteration Monitor. \nAll these present an alternative method of identifying file \nchanges and reviewing them for security implications. \n Specialized Hardening \n Many Unices have specialized hardening features that \nmake it more difficult to exploit software vulnerabilities \n" }, { "page_number": 123, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n90\nor to do so without leaving traces on the system and/or to \nshow that the system is so hardened. Linux has been a pop-\nular platform for research in this area; even the National \nSecurity Agency (NSA) has released code to implement its \nstrict security requirements under Linux. Here we outline \ntwo of the most popular Linux hardening packages. 
Other \nsuch packages exist for Linux and other Unices, some of \nwhich use innovative techniques such as virtualization to \nisolate sensitive data, but they are not covered here. \n GRSec/PAX \n The grsecurity package provides several major security \nenhancements for Linux. Perhaps the primary benefit is \nthe flexible policies that define fine-grained permissions \nit can control. This role-based access control capability is \nespecially powerful when coupled with grsecurity’s ability \nto monitor system activity over a period of time and gener-\nate a minimum set of privileges for all users. Additionally, \nthrough the PAX subsystem, grsecurity manipulates pro-\ngram memory to make it very difficult to exploit many \nkinds of security vulnerabilities. Other benefits include \na very robust auditing capability and other features that \nstrengthen existing security features, such as chroot jails. \n SELinux \n Security Enhanced Linux, or SELinux, is a pack-\nage developed by the NSA. It adds Mandatory Access \nControl, or MAC, and related concepts to Linux. MAC \ninvolves assigning security attributes as well as sys-\ntem resources such as files and memory to users. When \na user attempts to read, write, execute, or perform any \nother action on a system resource, the security attributes \nof the user and the resource are both used to determine \nwhether the action is allowed, according to the security \npolicies configured for the system. \n Systems Management Security \n After hardening a Unix host from network-borne attacks \nand hardening it from attacks performed by an author-\nized user of the machine, we will take a look at a few \nsystems management issues. These topics arguably fall \noutside the purview of security as such; however, by tak-\ning certain considerations into account, systems man-\nagement can both improve and simplify the work of \nsecuring a Unix system. \n Account Management \n User accounts can be thought of as keys to the “ castle ” \nof a system. As users require access to the system, they \nmust be issued keys, or accounts, so they can use it. \nWhen a user no longer requires access to the system, her \nkey should be taken away or at least disabled. \n This sounds simple in theory, but account manage-\nment in practice is anything but trivial. In all but the \nsmallest environments, it is infeasible to manage user \naccounts without a centralized account directory where \nnecessary changes can be made and propagated to every \nserver on the network. Through PAM, BSD authen-\ntication, and other mechanisms, modern Unices sup-\nport LDAP, SQL databases, Windows NT and Active \nDirectory, Kerberos, and myriad other centralized \naccount directory technologies. \n Patching \n Outdated software is perhaps the number-one cause of \neasily preventable security incidents. Choosing a mod-\nern Unix with a robust upgrade mechanism and history \nof timely updates, at least for security fixes, makes it \neasier to keep software up to date and secure from well-\nknown exploits. \n Backups \n When all else fails — especially when attackers have suc-\ncessfully modified or deleted data in ways that are dif-\nficult or impossible to positively identify — good backups \nwill save the day. When backups are robust, reliable, and \naccessible, they put a ceiling on the amount of damage an \nattacker can do. 
Unfortunately, good backups don’t help \nif the greatest damage comes from disclosure of sensitive \ninformation; in fact, backups could exacerbate the prob-\nlem if they are not taken and stored in a secure way. \n 3. PROACTIVE DEFENSE FOR \nLINUX AND UNIX \n As security professionals, we devote ourselves to defend-\ning systems from attack. However, it is important to \nunderstand the common tools, mindsets, and motivations \nthat drive attackers. This knowledge can prove invalu-\nable in mounting an effective defense against attack. It’s \nalso important to prepare for the possibility of a success-\nful attack and to consider organizational issues so that \nyou can develop a secure environment. \n Vulnerability Assessment \n A vulnerability assessment looks for security weaknesses \nin a system. Assessments have become an established \n" }, { "page_number": 124, "text": "Chapter | 6 Eliminating the Security Weakness of Linux and UNIX Operating Systems\n91\nbest practice, incorporated into many standards and reg-\nulations. They can be network-centric or host-based. \n Network-Based Assessment \n Network-centric vulnerability assessment looks for secu-\nrity weaknesses a system presents to the network. Unix \nhas a rich heritage of tools for performing network vul-\nnerability assessments. Most of these tools are available \non most Unix flavors. \n nmap is a free, open source tool for identifying hosts \non a network and the services running on those hosts. It’s a \npowerful tool for mapping out the true services being pro-\nvided on a network. It’s also easy to get started with nmap. \n Nessus is another free network security tool, though its \nsource code isn’t available. It’s designed to check for and \noptionally verify the existence of known security vulner-\nabilities. It works by looking at various pieces of informa-\ntion about a host on the network, such as detailed version \ninformation about the operating system and any software \nproviding services on the network. This information is \ncompared to a database that lists vulnerabilities known \nto exist in certain software configurations. In many cases, \nNessus is also capable of confirming a match in the vul-\nnerability database by attempting an exploit; however, this \nis likely to crash the service or even the entire system. \n Many other tools are available for performing net-\nwork vulnerability assessments. Insecure.Org, the folks \nbehind the nmap tool, also maintain a great list of security \ntools. 8 \n Host-Based Assessment \n Several tools can examine the security settings of a \nsystem from a host-based perspective. These tools are \ndesigned to be run on the system that’s being checked; \nno network connections are necessarily initiated. They \ncheck things such as file permissions and other insecure \nconfiguration settings on Unix systems. \n One such tool, lynis , is available for various Linux \ndistributions as well as some BSD variants. Another tool \nis the Linux Security Auditing Tool, or lsat . Ironically, \nlsat supports more versions of Unix than lynis does, \nincluding Solaris and AIX. \n No discussion of host-based Unix security would be \ncomplete without mentioning Bastille (see Figure 6.5 ). \nThough lynis and lsat are pure auditing tools that report on \nthe status of various security-sensitive host configuration \nsettings, Bastille was designed to help remediate these \nissues. Recent versions have a reporting-only mode that \nmakes Bastille work like a pure auditing tool. 
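As a toy illustration of what a network-centric assessment tool does at its core, the following Python sketch attempts TCP connections to a handful of well-known ports and reports which ones accept a connection. It is in no way a substitute for nmap or Nessus — there is no version detection, no UDP, and no vulnerability database — and the port list and target address are arbitrary examples. Scan only systems you are authorized to assess.

```python
import socket

# A small, illustrative selection of well-known service ports.
COMMON_PORTS = [21, 22, 23, 25, 53, 80, 110, 143, 443, 3306, 5432]

def scan_host(host: str, ports=COMMON_PORTS, timeout: float = 0.5) -> list[int]:
    """Attempt a TCP connection to each port; a successful connect indicates
    a listening service. This is a minimal 'connect scan', a tiny subset of
    what a real assessment tool such as nmap can do."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Example target: the local machine.
    print(scan_host("127.0.0.1"))
```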
\n Incident Response Preparation \n Regardless of how hardened a Unix system is, there is \nalways a possibility that an attacker — whether it’s a \nworm, a virus, or a sophisticated custom attack — will \nsuccessfully compromise the security of the system. For \nthis reason, it is important to think about how to respond \nto a wide variety of security incidents. \n Predefined Roles and Contact List \n A fundamental part of incident response preparation is to \nidentify the roles that various personnel will play in the \nresponse scenario. The manual, hands-on gestalt of Unix \nsystems administration has historically forced Unix sys-\ntems administrators to be familiar with all aspects of the \nUnix systems they manage. These should clearly be on the \nincident response team. Database, application, backup, and \nother administrators should be on the team as well, at least \nas secondary personnel that can be called on as necessary. \n Simple Message for End Users \n Incident response is a complicated process that must deal \nwith conflicting requirements to bring the systems back \nonline while ensuring that any damage caused by the \nattack — as well as whatever security flaws were exploited \nto gain initial access — is corrected. Often, end users with-\nout incident response training are the first to handle a \nsystem after a security incident has been identified. It is \nimportant that these users have clear, simple instructions \nin this case, to avoid causing additional damage or loss \nof evidence. In most situations, it is appropriate to simply \nunplug a Unix system from the network as soon as a com-\npromise of its security is confirmed. It should not be used, \nlogged onto, logged off from, turned off, disconnected \nfrom electrical power, or otherwise tampered with in any \nway. This simple action has the best chance, in most cases, \nto preserve the status of the incident for further investiga-\ntion while minimizing the damage that could ensue. \n Blue Team/Red Team Exercises \n Any incident response plan, no matter how well \ndesigned, must be practiced to be effective. Regularly \nexercising these plans and reviewing the results are \nimportant parts of incident response preparation. \nA common way of organizing such exercises is to assign \n 8 Insecure.Org, 2008, “Top 100 Network Security Tools”, retrieved \nOctober 9, 2008, from http://sectools.org . \n" }, { "page_number": 125, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n92\nsome personnel (the Red Team) to simulate a success-\nful attack, while other personnel (the Blue Team) are \nassigned to respond to that attack according to the estab-\nlished incident response plan. These exercises, referred \nto as Red Team/Blue Team exercises, are invaluable for \ntesting incident response plans. They are also useful in \ndiscovering security weaknesses and in fostering a sense \nof esprit des corps among the personnel involved. \n Organizational Considerations \n Various organizational and personnel management issues \ncan also impact the security of Unix systems. Unix is a \ncomplex operating system. Many different duties must be \nperformed in the day-to-day administration of Unix sys-\ntems. Security suffers when a single individual is respon-\nsible for many of these duties; however, that is commonly \nthe skill set of Unix system administration personnel. 
\n Separation of Duties \n One way to counter the insecurity of this situation is to \nforce different individuals to perform different duties. \nOften, simply identifying independent functions, such as \nbackups and log monitoring, and assigning appropriate \npermissions to independent individuals is enough. Log \nmanagement, application management, user manage-\nment, system monitoring, and backup operations are just \nsome of the roles that can be separated. \n Forced Vacations \n Especially when duties are appropriately separated, \nunannounced forced vacations are a powerful way to \nbring fresh perspectives to security tasks. It’s also an \neffective deterrent to internal fraud or mismanage-\nment of security responsibilities. A more robust set of \nrequirements for organizational security comes from \nthe Information Security Management Maturity Model, \nincluding its concepts of transparency, partitioning, sep-\naration, rotation, and supervision of responsibilities. 9 \n 9 ISECOM 2008, “Security Operations Maturity Architecture”, \nretrieved October 9, 2008, from ISECOM: www.isecom.org/soma . \n FIGURE 6.5 Bastille screenshot . \n" }, { "page_number": 126, "text": "93\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n Internet Security \n Jesse Walker \n Intel Corporation \n Chapter 7 \n The Internet, and all its accompanying complications, \nhas become integral to our lives. The security problems \nbesetting the Internet are legendary and have been daily \nannoyances to many users. Given the Net’s broad impact \non our lives and the widespread security issues associ-\nated withit, it is worthwhile understanding what can be \ndone to improve the immunity of our communications \nfrom attack. \n The Internet can serve as a laboratory for studying \nnetwork security issues; indeed, we can use it to study \nnearly every kind of security issue. We will pursue only \na modest set of questions related to this theme. The goal \nof this chapter is to understand how cryptography can \nbe used to address some of the security issues besetting \ncommunications protocols. To do so, it will be helpful to \nfirst understand the Internet architecture. After that we \nwill survey the types of attacks that are possible against \ncommunications. With this background we will be in a \nposition to understand how cryptography can be used to \npreserve the confidentiality and integrity of messages. \n Our goal is modest. It is only to describe the network \narchitecture and its cryptographic-based security mecha-\nnisms sufficiently to understand some of the major issues \nconfronting security systems designers and to appreciate \nsome of the major design decisions they have to make to \naddress these issues. \n 1. INTERNET PROTOCOL ARCHITECTURE \n The Internet was designed to create standardized com-\nmunication between computers. Computers commu-\nnicate by exchanging messages. The Internet supports \nmessage exchange through a mechanism called proto-\ncols . Protocols are very detailed and stereotyped rules \nexplaining exactly how to exchange a particular set of \nmessages. Each protocol is defined as a set of finite state \nautomata and a set of message formats. Each protocol \nspecification defines one automaton for sending a mes-\nsage and another for receiving a message. 
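As a purely illustrative sketch — not any real Internet protocol — the Python fragment below shows what a tiny sending automaton might look like: it accepts a send only when idle, expects an acknowledgment before the next send, and rejects any input that is not meaningful in its current state. The state names and message format are invented for the example.

```python
from enum import Enum, auto

class SenderState(Enum):
    IDLE = auto()       # ready to send the next message
    WAIT_ACK = auto()   # message sent, waiting for acknowledgment

class SenderAutomaton:
    """Toy sending automaton: the state machine defines which events are
    meaningful at each point; anything else is rejected as 'gibberish'."""
    def __init__(self):
        self.state = SenderState.IDLE

    def send(self, payload: bytes) -> dict:
        if self.state is not SenderState.IDLE:
            raise RuntimeError("protocol violation: previous message not yet acknowledged")
        self.state = SenderState.WAIT_ACK
        return {"type": "DATA", "payload": payload}   # the (made-up) message format

    def on_message(self, msg: dict) -> None:
        if self.state is SenderState.WAIT_ACK and msg.get("type") == "ACK":
            self.state = SenderState.IDLE
        else:
            # Out-of-spec input: a careful module ignores or rejects it.
            raise ValueError("unexpected message for current state")
```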
The automata \nspecify the message timing; they play the role of gram-\nmar, indicating whether any particular message is mean-\ningful or is interpreted by the receiver as gibberish. The \nprotocol formats restrict the information that the proto-\ncol can express. \n Security has little utility as an abstract, disembodied \nconcept. What the word security should mean depends \nvery much on the context in which it is applied. The \narchitecture, design, and implementation of a system \neach determine the kind of vulnerabilities and opportu-\nnities for exploits that exist and which features are easy \nor hard to attack or defend. \n It is fairly easy to understand why this is true. An \nattack on a system is an attempt to make the system act \noutside its specification. An attack is different from “ nor-\nmal ” bugs that afflict computers and that occur through \nrandom interactions between the system’s environment \nand undetected flaws in the system architecture, design, \nor implementation. An attack, on the other hand, is an \nexplicit and systematic attempt by a party to search for \nflaws that make the computer act in a way its designers \ndid not intend. \n Computing systems consist of a large number of \nblocks or modules assembled together, each of which \nprovides an intended set of functions. The system archi-\ntecture hooks the modules together through interfaces , \nthrough which the various modules exchange informa-\ntion to activate the functions provided by each module \nin a coordinated way. An attacker exploits the architec-\nture to compromise the computing system by interject-\ning inputs into these interfaces that do not conform to \nthe specification for inputs of a specific module. If the \ntargeted module has not been carefully crafted, unex-\npected inputs can cause it to behave in unintended ways. \nThis implies that the security of a system is determined \nby its decomposition into modules, which an adversary \n" }, { "page_number": 127, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n94\n exploits by injecting messages into the interfaces the \narchitecture exposes. Accordingly, no satisfying discus-\nsion of any system is feasible without an understanding \nof the system architecture. Our first goal, therefore, is to \nreview the architecture of the Internet communication \nprotocols in an effort to gain a deeper understanding of \nits vulnerabilities. \n Communications Architecture Basics \n Since communication is an extremely complex activity, it \nshould come as no surprise that the system components \nproviding communication decompose into modules. One \nstandard way to describe each communication module is \nas a black box with a well-defined service interface. A \nminimal communications service interface requires four \nprimitives: \n ● A send primitive, which an application using the \ncommunications module uses to send a message \nvia the module to a peer application executing on \nanother networked device. The send primitive speci-\nfies a message payload and a destination. The com-\nmunication module responding to the send transmits \nthe message to the specified destination, reporting its \nrequester as the message source. \n ● A confirm primitive, to report that the module has \nsent a message to the designated destination in \nresponse to a send request or to report when the \nmessage transmission failed, along with any failure \ndetails that might be known. 
It is possible to combine \nthe send and confirm primitives, but network \narchitectures rarely take this approach. The send \nprimitive is normally defined to allow the application \nto pass a message to the communications module \nfor transmission by transferring control of a buffer \ncontaining the message. The confirm primitive then \nreleases the buffer back to the calling application \nwhen the message has indeed been sent. This scheme \neffects “ a conservation of buffers ” and enables the \ncommunications module and the application using \nit to operate in parallel, thus enhancing the overall \ncommunication performance. \n ● A listen primitive, which the receiving application \nuses to provide the communications module with \nbuffers into which it should put messages arriving \nfrom the network. Each buffer the application posts \nmust be large enough to receive a message of the \nmaximum expected size. \n ● A receive primitive, to deliver a received message \nfrom another party to the receiving application. This \nreleases a posted buffer back to the application and \nusually generates a signal to notify the application \nof message arrival. The released buffer contains the \nreceived message and the (alleged) message source. \n Sometimes the listen primitive is replaced with a \n release primitive. In this model the receive buffer is \nowned by the receiving communications module instead \nof the application, and the application must recycle buff-\ners containing received messages back to the communi-\ncation module upon completion. In this case the buffer \nsize selected by the receiving module determines the \nmaximum message size. In a moment we will explain \nhow network protocols work around this restriction. \n It is customary to include a fifth service interface \nprimitive for communications modules: \n ● A status primitive, to report diagnostic and perform-\nance information about the underlying communica-\ntions. This might report statistics, the state of active \nassociations with other network devices, and the like. \n Communications is effected by providing a com-\nmunications module black box on systems, connected \nby a signaling medium. The medium connecting the \ntwo devices constitutes the network communications \npath. The media can consist of a direct link between the \ndevices or, more commonly, several intermediate relay \nsystems between the two communicating endpoints. \nEach relay system is itself a communicating device with \nits own communications module, which receives and \nthen forward messages from the initiating system to the \ndestination system. \n Under this architecture, a message is transferred from \nan application on one networked system to an applica-\ntion on a second networked system as follows: \n First the application sourcing the message invokes \nthe send primitive exported by its communications mod-\nule. This causes the communications module to (attempt) \nto transmit the message to a destination provided by the \napplication in the send primitive. \n The communications module encodes the message \nonto the network’s physical medium representing a \nlink to another system. If the communications module \nimplements a best-effort message service, it generates \nthe confirm primitive as soon as the message has been \nencoded onto the medium. If the communication module \nimplements a reliable message service, the communica-\ntion delays generation of the confirm until it receives an \nacknowledgment from the message destination. 
If it has \nnot received an acknowledgment from the receiver after \nsome period of time, it generates a confirm indicating \nthat the message delivery failed. \n" }, { "page_number": 128, "text": "Chapter | 7 Internet Security\n95\n The encoded message traverses the network medium \nand is placed into a buffer by the receiving communications \nmodule of another system attached to the medium. This \ncommunications module examines the destination. The \nmodule then examines the destination specified by the mes-\nsage. If the module’s local system is not the destination, \nthe module reencodes the message onto the medium rep-\nresenting another link; otherwise the module uses the \n deliver primitive to pass the message to the receiving \napplication. \n Getting More Specific \n This stereotyped description of networked communi-\ncations is overly simplified. Communications are actu-\nally torturously more difficult in real network modules. \nTo tame this complexity, communications modules are \nthemselves partitioned further into layers, each pro-\nviding a different networking function. The Internet \ndecomposes communications into five layers of commu-\nnications modules: \n ● The PHY layer \n ● The MAC layer \n ● The network layer \n ● The transport layer \n ● The sockets layer \n These layers are also augmented by a handful of \ncross-layer coordination modules. The Internet depends \non the following cross-layer modules: \n ● ARP \n ● DHCP \n ● DNS \n ● ICMP \n ● Routing \n An application using networking is also part of the over-\nall system design, and the way it uses the network has to be \ntaken into consideration to understand system security. \n We next briefly describe each of these in turn. \n The PHY Layer \n The PHY (pronounced fie ) layer is technically not part \nof the Internet architecture per se, but Ethernet jacks and \ncables, modems, Wi-Fi adapters, and the like represent the \nmost visible aspect of networking, and no security treat-\nment of the Internet can ignore the PHY layer entirely. \n The PHY layer module is medium dependent, with \na different design for each type of medium: Ethernet, \nphone lines, Wi-Fi, cellular phone, OC-48, and the like \nare based on different PHY layer designs. It is the job \nof the PHY layer to translate between digital bits as rep-\nresented on a computing device and the analog signals \ncrossing the specific physical medium used by the PHY. \nThis translation is a physics exercise. \n To send a message, the PHY layer module encodes \neach bit of each message from the sending device as a \nmedia-specific signal, representing the bit value 1 or 0. \nOnce encoded, the signal propagates along the medium \nfrom the sender to the receiver. The PHY layer module \nat the receiver decodes the medium-specific signal back \ninto a bit. \n It is possible for the encoding step at the transmitting \nPHY layer module to fail, for a signal to be lost or cor-\nrupted while it crosses the medium, and for the decoding \nstep to fail at the receiving PHY layer module. It is the \nresponsibility of higher layers to detect and recover from \nthese potential failures. \n The MAC Layer \n Like the PHY layer, the MAC (pronounced mack ) layer \nis not properly a part of the Internet architecture, but \nno satisfactory security discussion is possible without \nconsidering it. The MAC module is the “ application ” \nthat uses and controls a particular PHY layer module. 
\nA MAC layer is always designed in tandem with a spe-\ncific PHY (or vice versa), so a PHY-MAC pair together \nis often referred to as the data link layer. \n MAC is an acronym for media access control . As its \nname suggests, the MAC layer module determines when \nto send and receive frames , which are messages encoded \nin a media-specific format. The job of the MAC is to \npass frames over a link between the MAC layer modules \non different systems. \n Although not entirely accurate, it is useful to think \nof a MAC module as creating links , each of which is a \ncommunication channel between different MAC mod-\nules. It is further useful to distinguish physical links \nand virtual links. A physical link is a direct point-to-\npoint channel between the MAC layers in two endpoint \ndevices. A virtual link can be thought of as a shared \nmedium to which more than two devices can connect \nat the same time. There are no physical endpoints per \nse; the medium acts as though it is multiplexing links \nbetween each pair of attached devices. Some media \nsuch as Ethernet are implemented as physical point-to-\npoint links but act more like virtual links in that more \nthan a single destination is reachable via the link. This \nis accomplished by MAC layer switching, which is also \ncalled bridging . Timing requirements for coordination \n" }, { "page_number": 129, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n96\n among communicating MAC layer modules make it dif-\nficult to build worldwide networks based on MAC layer \nswitching, however. \n A MAC frame consists of a header and a data pay-\nload. The frame header typically specifies information \nsuch as the source and destination for the link endpoints. \nDevices attached to the medium via their MAC \u0002 PHY \nmodules are identified by MAC addresses . Each MAC \nmodule has its own MAC address assigned by its manu-\nfacturer and is supposed to be a globally unique identi-\nfier. The destination MAC address in a frame allows a \nparticular MAC module to identify frames intended for \nit, and the destination address allows it to identify the \npurported frame source. The frame header also usually \nincludes a preamble, which is a set of special PHY tim-\ning signals used to synchronize the interpretation of the \nPHY layer data signals representing the frame bits. \n The payload portion of a frame is the data to be \ntransferred across the network. The maximum payload \nsize is always fixed by the medium type. It is becom-\ning customary for most MACs to support a maximum \npayload size of 1500 bytes \u0003 12,000 bits, but this is not \nuniversal. The maximum fixed size allows the MAC to \nmake efficient use of the underlying physical medium. \nSince messages can be of an arbitrary length exceeding \nthis fixed size, a higher-layer function is needed to parti-\ntion messages into segments of the appropriate length. \n As we have seen, it is possible for bit errors to \ncreep into communications as signals representing bits \ntraverse the PHY medium. MAC layers differ a great \ndeal in how they respond to errors. Some PHY layers, \nsuch as the Ethernet PHY, experience exceedingly low \nerror rates, and for this reason, the MAC layers for these \nPHYs make no attempt to more than detect errors and \ndiscard the mangled frames. Indeed, with these MACs it \nis cheaper for the Internet to resend message segments \nat a higher layer than at the MAC layer. These are called \n best-effort MACs . 
Others, such as the Wi-Fi MAC, expe-\nrience high error rates due to the shared nature of the \nchannel and natural interference among radio sources, \nand experience has shown that these MACs can deliver \nbetter performance by retransmitting damaged or lost \nframes. It is customary for most MAC layers to append a \nchecksum computed over the entire frame, called a frame \ncheck sequence (FCS). The FCS allows the receiver to \ndetect bit errors accumulated due to random noise and \nother physical phenomena during transmission and due \nto decoding errors. Most MACs discard frames with \nFCS errors. Some MAC layers also perform error cor-\nrection on the received bits to remove random bit errors \nrather than relying on retransmissions. \n The Network Layer \n The purpose of the network layer module is to represent \nmessages in a media-independent manner and forward \nthem between various MAC layer modules represent-\ning different links. The media-independent message for-\nmat is called an Internet Protocol , or IP, datagram . The \nnetwork layer implements the IP layer and is the lowest \nlayer of the Internet architecture per se. \n As well as providing media independence, the net-\nwork layer provides a vital forwarding function that \nworks even for a worldwide network like the Internet. It \nis impractical to form a link directly between each com-\nmunicating system on the planet; indeed, the cabling \ncosts alone are prohibitive — no one wants billions, or \neven dozens, of cables connecting their computer to \nother computers — and too many MAC \u0002 PHY inter-\nfaces can quickly exhaust the power budget for a single \ncomputing system. Hence, each machine is attached by \na small number of links to other devices, and some of \nthe machines with multiple links comprise a switching \nfabric . The computing systems constituting the switch-\ning fabric are called routers . \n The forwarding function supported by the network \nlayer module is the key component of a router and works \nas follows: When a MAC module receives a frame, it \npasses the frame payload to the network layer module. \nThe payload consists of an IP datagram , which is the \nmedia-independent representation of the message. The \nreceiving network layer module examines the datagram \nto see whether to deliver it locally or to pass it on toward \nthe datagram’s ultimate destination. To accomplish the \nlatter, the network layer module consults a forwarding \ntable to identify some neighbor router closer to the ulti-\nmate destination than itself. The forwarding table also \nidentifies the MAC module to use to communicate with \nthe selected neighbor and passes the datagram to that \nMAC layer module. The MAC module in turn retransmits \nthe datagram as a frame encoded for its medium across \nits link to the neighbor. This process happens recursively \nuntil the datagram is delivered to its ultimate destination. \n The network layer forwarding function is based on IP \naddresses , a concept that is critical to understanding the \nInternet architecture. An IP address is a media-independent \nname for one of the MAC layer modules within a com-\nputing system. Each IP address is structured to repre-\nsent the “ location ” of the MAC module within the entire \nInternet. This notion of location is relative to the graph \ncomprising routers and their interconnecting links, called \nthe network topology , not to actual geography. 
Since this \nname represents a location, the forwarding table within \n" }, { "page_number": 130, "text": "Chapter | 7 Internet Security\n97\n each IP module can use the IP address of the ultimate \ndestination as a sort of signpost pointing at the MAC \nmodule with the greatest likelihood of leading to the ulti-\nmate destination of a particular datagram. \n An IP address is different from the corresponding \nMAC address already described. A MAC address is a \npermanent, globally unique identifier, whereas an IP \naddress can be dynamic due to device mobility; an IP \naddress cannot be assigned by the equipment manufac-\nturer, since a computing device can change locations fre-\nquently. Hence, IP addresses are administered and blocks \nallocated to different organizations with an Internet pres-\nence. It is common, for instance, for an Internet service \nprovider (ISP) to acquire a large block of IP addresses \nfor use by its customers. \n An IP datagram has a structure similar to that of a \nframe: It consists of an IP header, which is “ extra ” over-\nhead used to control the way a datagram passes through \nthe Internet, and a data payload, which contains the mes-\nsage being transferred. The IP header indicates the ulti-\nmate source and destinations, represented as IP addresses. \n The IP header format limits the size of an IP data-\ngram payload to 64K (2 16 \u0003 65,536) bytes. It is com-\nmon to limit datagram sizes to the underlying media size, \nalthough datagrams larger than this do occur. This means \nthat normally each MAC layer frame can carry a single IP \ndatagram as its data payload. IP version 4, still the domi-\nnant version deployed on the Internet today, allows frag-\nmentation of larger datagrams, to split large datagrams \ninto chunks small enough to fit the limited frame size of \nthe underlying MAC layer medium. IPv4 reassembles \nany fragmented datagrams at the ultimate destination. \n Network layer forwarding of IP datagrams is best \neffort, not reliable. Network layer modules along the path \ntaken by any message can lose and reorder datagrams. It \nis common for the network layer in a router to recover \nfrom congestion — that is, when the router is over-\nwhelmed by more receive frames than it can process — by \ndiscarding late-arriving frames until the router has caught \nup with its forwarding workload. The network layer can \nreorder datagrams when the Internet topology changes, \nbecause a new path between source and destination might \nbe shorter or longer than an old path, so datagrams in flight \nbefore the change can arrive after frames sent after the \nchange. The Internet architecture delegates recovery from \nthese problems to high-layer modules. \n The Transport Layer \n The transport layer is implemented by TCP and similar \nprotocols. Not all transport protocols provide the same \nlevel of service as TCP, but a description of TCP will \nsuffice to help us understand the issues addressed by the \ntransport layer. The transport layer provides a multitude \nof functions. \n First, the transport layer creates and manages instances \nof two-way channels between communication endpoints. \nThese channels are called connections . Each connection \nrepresents a virtual endpoint between a pair of commu-\nnication endpoints. A connection is named by a pair of \nIP addresses and port numbers . Two devices can support \nsimultaneous connections using different port numbers \nfor each connection. 
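A minimal Python sketch makes this naming of connections visible: once a TCP connection is established, each side can read the two (IP address, port) pairs that identify it. The loopback address and the operating-system-assigned ephemeral ports are simply convenient choices for a self-contained example.

```python
import socket

# Minimal sketch: a connection is identified by its pair of
# (IP address, port) endpoints, visible from both sides of the socket.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
server_addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server_addr)
conn, _ = server.accept()

# Each endpoint sees the same connection, named by two (address, port) pairs.
print("client side:", client.getsockname(), "->", client.getpeername())
print("server side:", conn.getpeername(), "->", conn.getsockname())

client.close()
conn.close()
server.close()
```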
It is common to differentiate applica-\ntions on the same host through the use of port numbers. \n A second function of the transport layer is to support \ndelivery of messages of arbitrary length. The 64K byte \nlimit of the underlying IP module is too small to carry \nreally large messages, and the transport layer module at \nthe message source chops messages into pieces called \n segments that are more easily digestible by lower-layer \ncommunications modules. The segment size is nego-\ntiated between the two transport endpoints during \nconnection setup. The segment size is chosen by discov-\nering the smallest maximum frame size supported by any \nMAC \u0002 PHY link on the path through the Internet used \nby the connection setup messages. Once this is known, \nthe transmitter typically partitions a large message into \nsegments no larger than this size, plus room for an IP \nheader. The transport layer module passes each segment \nto the network layer module, where it becomes the pay-\nload for a single IP datagram. The destination network \nlayer module extracts the payload from the IP datagram \nand passes it to the transport layer module, which inter-\nprets the information as a message segment. The destina-\ntion transport reassembles this into the original message \nonce all the necessary segments arrive. \n Of course, as noted, MAC frames and IP datagrams \ncan be lost in transit, so some segments can be lost. It \nis the responsibility of the transport layer module to \ndetect this loss and retransmit the missing segments. \nThis is accomplished by a sophisticated acknowledg-\nment algorithm defined by the transport layer. The \ndestination sends a special acknowledgment message, \noften piggybacked with a data segment being sent in \nthe opposite direction, for each segment that arrives. \nAcknowledgments can be lost as well, and if the mes-\nsage source does not receive the acknowledgment within \na time window, the source retransmits the unacknowl-\nedged segment. This process is repeated some number \nof times, and if the failure continues, the network layer \ntears down the connection because it cannot fulfill its \nreliability commitment. \n" }, { "page_number": 131, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n98\n One reason for message loss is congestion at rout-\ners, something blind retransmission of unacknowledged \nsegments will only exacerbate. The network layer is also \nresponsible for implementing congestion control algo-\nrithms as part of its transmit function. TCP, for instance, \nlowers its transmit rate whenever it fails to receive \nan acknowledgment message in time, and it slowly \nincreases its rate of transmission until another acknowl-\nedgment is lost. This allows TCP to adapt to congestion \nin the network, helping to minimize frame loss. \n It can happen that segments arrive at the destina-\ntion out of order, since some IP datagrams for the same \nconnection could traverse the Internet through different \npaths due to dynamic changes in the underlying network \ntopology. The transport layer is responsible for deliver-\ning the segments in the order sent, so the receiver caches \nany segments that arrive out of order prior to delivery. \nThe TCP reordering algorithm is closed tied to the \nacknowledgment and congestion control scheme so that \nthe receiver never has to buffer too many out-of-order \nreceived segments and the sender not too many sent but \nunacknowledged segments. 
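The following Python sketch illustrates, in drastically simplified form, two of the transport layer jobs just described: chopping a message into segments no larger than an assumed maximum segment size, and reassembling them in order at the receiver while buffering any segments that arrive early. Real TCP ties this to byte-oriented sequence numbers, acknowledgments, retransmission timers, and congestion control, none of which are modeled here.

```python
def segment(message: bytes, mss: int) -> list[tuple[int, bytes]]:
    """Split a message into numbered segments no larger than the
    (assumed) negotiated maximum segment size."""
    return [(seq, message[i:i + mss])
            for seq, i in enumerate(range(0, len(message), mss))]

class Reassembler:
    """Deliver segments in order, buffering any that arrive early."""
    def __init__(self):
        self.next_seq = 0
        self.buffer: dict[int, bytes] = {}
        self.delivered = bytearray()

    def receive(self, seq: int, data: bytes) -> None:
        self.buffer[seq] = data
        while self.next_seq in self.buffer:        # deliver any in-order run
            self.delivered += self.buffer.pop(self.next_seq)
            self.next_seq += 1

# Example: segments arriving out of order are still reassembled correctly.
segs = segment(b"a fairly long message crossing several segments", mss=8)
r = Reassembler()
for seq, data in reversed(segs):   # worst case: completely reversed arrival
    r.receive(seq, data)
assert bytes(r.delivered) == b"a fairly long message crossing several segments"
```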
\n Segment data arriving at the receiver can be cor-\nrupted due to undetected bit errors on the data link and \ncopy errors within routers and the sending and receiving \ncomputing systems. Accordingly, all transport layers use \na checksum algorithm called a cyclic redundancy check \n(CRC) to detect such errors. The receiving transport \nlayer module typically discards segments with errors \ndetected by the CRC algorithm, and recovery occurs \nthrough retransmission by the receiver when it fails to \nreceive an acknowledgment from the receiver for a par-\nticular segment. \n The Sockets Layer \n The top layer of the Internet, the sockets layer, does not \n per se appear in the architecture at all. The sockets layer \nprovides a set of sockets, each of which represents a log-\nical communications endpoint. An application can use \nthe sockets layer to create, manage, and destroy connec-\ntion instances using a socket as well as send and receive \nmessages over the connection. The sockets layer has \nbeen designed to hide much of the complexity of utiliz-\ning the transport layer. The sockets layer has been highly \noptimized over the years to deliver as much performance \nas possible, but it does impose a performance penalty. \nApplications with very demanding performance require-\nments tend to utilize the transport layer directly instead \nof through the sockets layer module, but this comes with \na very high cost in terms of software maintenance. \n In most implementations of these communications \nmodules, each message is copied twice, at the sender \nand the receiver. Most operating systems are organized \ninto user space, which is used to run applications, and \nkernel space, where the operating system itself runs. \nThe sockets layer occupies the boundary between user \nspace and kernel space. The sockets layer’s send func-\ntion copies a message from memory controlled by the \nsending application into a buffer controlled by the ker-\nnel for transmission. This copy prevents the application \nfrom changing a message it has posted to send, but it \nalso permits the application and kernel to continue their \nactivities in parallel, thus better utilizing the device’s \ncomputing resources. The sockets layer invokes the \ntransport layer, which partitions the message buffer into \nsegments and passes the address of each segment to the \nnetwork layer. The network layer adds its headers to \nform datagrams from the segments and invokes the right \nMAC layer module to transmit each datagram to its next \nhop. A second copy occurs at the boundary between the \nnetwork layer and the MAC layer, since the data link \nmust be able to asynchronously match transmit requests \nfrom the network layer to available transmit slots on the \nmedium provided by its PHY. This process is reversed at \nthe receiver, with a copy of datagrams across the MAC-\nnetwork layer boundary and of messages between the \nsocket layer and application. \n Address Resolution Protocol \n The network layer uses Address Resolution Protocol, \nor ARP, to translate IP addresses into MAC addresses, \nwhich it needs to give to the MAC layer in order to \ndeliver frames to the appropriate destination. \n The ARP module asks the question, “ Who is using IP \naddress X ? ” The requesting ARP module uses a request/\nresponse protocol, with the MAC layer broadcast-\ning the ARP module’s requests to all the other devices \non the same physical medium segment. 
A receiving \nARP module generates a response only if its network \nlayer has assigned the IP address to one of its MAC \nmodules. Responses are addressed to the requester’s \nMAC address. The requesting ARP module inserts the \nresponse received in an address translation table used by \nthe network layer to identify the next hop for all data-\ngrams it forwards. \n Dynamic Host Configuration Protocol \n Remember that unlike MAC addresses, IP addresses can-\nnot be assigned in the factory, because they are dynamic \n" }, { "page_number": 132, "text": "Chapter | 7 Internet Security\n99\n and must reflect a device’s current location within the \nInternet’s topology. A MAC module uses Dynamic \nHost Configuration Protocol, or DHCP, to acquire an IP \naddress for itself, to reflect the device’s current location \nwith respect to the Internet topology. \n DHCP makes the request: “ Please configure my \nMAC module with an IP address. ” When one of a \ndevice’s MAC layer modules connects to a new medium, \nit invokes DHCP to make this request. The associated \nDHCP module generates such a request that conveys \nthe MAC address of the MAC module, which the MAC \nlayer module broadcasts to the other devices attached \nto the same physical medium segment. A DHCP server \nresponds with a unicast DHCP response binding an \nIP address to the MAC address. When it receives the \nresponse, the requesting DHCP module passes the \nassigned IP address to the network layer to configure in \nits address translation table. \n In addition to binding an IP address to the MAC \nmodule used by DHCP, the response also contains a \nnumber of network configuration parameters, including \nthe address of one or more routers, to enable reaching \narbitrary destinations, the maximum datagram size sup-\nported, and the addresses of other servers, such as DNS \nservers, that translate human-readable names into IP \naddresses. \n Domain Naming Service \n IP and MAC addresses are efficient means for identify-\ning different network interfaces, but human beings are \nincapable of using these as reliably as computing devices \ncan. Instead, human beings rely on names to identify the \ncomputing devices with which they want to communi-\ncation. These names are centrally managed and called \n domain names. The Domain Naming Service, or DNS, \nis a mechanism for translating human-readable names \ninto IP addresses. \n The translation from human-readable names to IP \naddresses happens within the socket layer module. An \napplication opens a socket with the name of the intended \ndestination. As the first step of opening a connection \nto that destination, the socket sends a request to a DNS \nserver, asking the server to translate the name into an \nIP address. When the server responds, the socket can \nopen the connection to the right destination, using the IP \naddress provided. \n It is becoming common for devices to register their \nIP addresses under their names with DNS once DHCP \nhas completed. This permits other devices to locate the \nregistering device so that they can send messages to it. \n Internet Control Message Protocol \n Internet Control Message Protocol, or ICMP, is an \nimportant diagnostic tool for troubleshooting the \nInternet. Though ICMP provides many specialized mes-\nsage services, three are particularly important: \n ● Ping. Ping is a request/response protocol designed \nto determine reachability of another IP address. The \nrequester sends a ping request message to a desig-\nnated IP address. 
If it’s delivered, the destination \nIP address sends a ping response message to the \nIP address that sourced the request. The respond-\ning ICMP module copies the contents of the ping \nrequest into the ping response so that the requester \ncan match responses to requests. The requester uses \npings to measure the roundtrip time to a destination. \n ● Traceroute. Traceroute is another request/response \nprotocol. An ICMP module generates a traceroute \nrequest to discover the path it is using to traverse the \nInternet to a destination IP address. The requesting \nICMP module transmits a destination. Each router \nthat handles the traceroute request adds a description \nof its own IP address that received the message and \nthen forwards the updated traceroute request. The \ndestination sends all this information back to the \nmessage source in a traceroute response message. \n ● Destination unreachable. When a router receives a \ndatagram for which it has no next hop, it generates \na “ destination unreachable ” message and sends it \nback to the datagram source. When the message is \ndelivered, the ICMP module marks the forwarding \ntable of the message source so that its network \nlayer will reject further attempts to send messages \nto the destination IP address. An analogous process \nhappens at the ultimate destination when a message \nis delivered to a network layer, but the application \ntargeted to receive the message is no longer on \nline. The purpose of “ destination unreachable ” \nmessages is to suppress messages that will never be \nsuccessfully delivered, to reduce network congestion. \n Routing \n The last cross-layer module we’ll discuss is routing . \nRouting is a middleware application to maintain the for-\nwarding tables used by the network layer. Each router \nadvertises itself by periodically broadcasting “ hello ” mes-\nsages through each of its MAC interfaces. This allows \nrouters to discover the presence or loss of all neighbor-\ning routers, letting them construct the one-hop topol-\nogy of the part of the Internet directly visible through \n" }, { "page_number": 133, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n100\n their directly attached media. The routing application in \na router then uses a sophisticated gossiping mechanism \nto exchange this mechanism with their neighbors. Since \nsome of a router’s neighbors are not its own direct neigh-\nbors, this allows each router to learn the two-hop topol-\nogy of the Internet. This process repeats recursively until \neach router knows the entire topology of the Internet. The \ncost of using each link is part of the information gossiped. \nA routing module receiving this information uses all of it \nto compute a lowest-cost route to each destination. Once \nthis is accomplished, the routing module reconfigures the \nforwarding table maintained by its network layer module. \nThe routine module updates the forwarding table when-\never the Internet topology changes, so each network layer \ncan make optimal forwarding decisions in most situations \nand at the very worst at least reach any other device that \nis also connected to the Internet. \n There are many different routing protocols, each of \nwhich are based on different gossiping mechanisms. The \nmost widely deployed routing protocol between different \nadministrative domains within the Internet is the Border \nGateway Protocol (BGP). 
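To make the lowest-cost route computation concrete, here is a small Python sketch that runs Dijkstra's algorithm over a made-up four-router topology and derives the first hop that would go into a forwarding table. Link-state protocols perform essentially this calculation once gossiping has given every router the full topology; the node names and link costs are invented for illustration, and real protocols add many refinements (areas, equal-cost paths, policy) not shown here.

```python
import heapq

def shortest_paths(topology: dict[str, dict[str, int]], source: str):
    """Dijkstra's algorithm over a link-cost map {router: {neighbor: cost}}.
    Link-state routing performs essentially this computation once every
    router has learned the full topology."""
    dist = {source: 0}
    prev: dict[str, str] = {}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue                       # stale heap entry
        for neighbor, link_cost in topology.get(node, {}).items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                prev[neighbor] = node
                heapq.heappush(heap, (new_cost, neighbor))
    return dist, prev

def next_hop(prev: dict[str, str], source: str, dest: str) -> str:
    """Walk predecessors back toward the source to find the first hop,
    which is what actually goes into the forwarding table."""
    node = dest
    while prev[node] != source:
        node = prev[node]
    return node

# Hypothetical four-router topology with symmetric link costs.
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 7},
    "C": {"A": 4, "B": 2, "D": 3},
    "D": {"B": 7, "C": 3},
}
dist, prev = shortest_paths(topology, "A")
print(dist["D"], next_hop(prev, "A", "D"))   # cost 6, forwarded via B
```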
The most widely deployed \nrouting protocols within wired networks controlled by a \nsingle administrative domain are OSPF and RIP. AODV, \nOLSR, and TBRPF are commonly used in Wi-Fi meshes. \nDifferent routing protocols are used in different environ-\nments because each one addresses different scaling and \nadministrative issues. \n Applications \n Applications are the ultimate reason for networking, and \nthe Internet architecture has been shaped by applica-\ntions ’ needs. All communicating applications define their \nown language in which to express what they need to say. \nApplications generally use the sockets layer to establish \ncommunication channels, which they then use for their \nown purposes. \n It is worth emphasizing that since the network mod-\nules have been designed to be a generic communications \nvehicle, that is, designed to meet the needs of all (or at \nleast most) applications, it is rarely meaningful for the \nnetwork to attempt to make statements on behalf of the \napplications. There is widespread confusion on this point \naround authentication and key management, which are \nthe source of many exploitable security flaws. \n 2. AN INTERNET THREAT MODEL \n Now that we have reviewed the architecture of the \nInternet protocol suite, it is possible to constructively \nconsider security issues it raises. Before doing so, let’s \nfirst set the scope of the discussion. \n There are two general approaches to attacking a net-\nworked computer. The first is to compromise one of the \ncommunicating parties so that it responds to queries with \nlies or otherwise communicates in a manner not foreseen \nby the system designers of the receiver. For example, it \nhas become common to receive email with virus-infected \nattachments, whereby opening the attachment infects the \nreceiver with the virus. These messages typically are \nsent by a machine that has already been compromised, \nso the sender is no longer acting as intended by the man-\nufacturer of the computing system. Problems of this type \nare called Byzantine failures , named after the Byzantine \nGenerals problem. \n The Byzantine Generals problem imagines several \narmies surrounding Byzantium. The generals command-\ning these armies can communicate only by exchang-\ning messages transported by couriers between them. Of \ncourse the couriers can be captured and the messages \nreplaced by forgeries, but this is not really the issue, \nsince it is possible to devise message schemes that detect \nlost messages or forgeries. All the armies combined are \nsufficient to overwhelm the defenses of Byzantium, but \nif even one army fails to participate in a coordinated \nattack, the armies of Byzantium have sufficient strength \nto repulse the attack. Each general must make a decision \nas to whether to participate in an attack on Byzantium at \ndawn or withdraw to fight another day. The question is \nhow to determine the veracity of the messages received \non which the decision to attack will be made — that is, \nwhether it is possible to detect that one or more generals \nhave become traitors so will say their armies will join \nthe attack when in fact they plan to hold back so that \ntheir allies will be slaughtered by the Byzantines. \n Practical solutions addressing Byzantine failures \nfall largely within the purview of platform rather than \nnetwork architecture. For example, since viruses infect \na platform by buffer overrun attacks, platform mecha-\nnisms to render buffer overrun attacks futile are needed. 
\nSecure logging, to make an accurate record of messages \nexchanged, is a second deterrent to these sorts of attacks; \nthe way to accomplish secure logging is usually a ques-\ntion of platform design. Most self-propagating viruses \nand worms utilize the Internet to propagate, but they do \nnot utilize any feature of the Internet architecture per se \nfor their success. The success of these attacks instead \ndepends on the architecture, design, implementation, and \npolicies of the receiving system. Although these sorts of \nproblems are important, we will rarely focus on security \nissues stemming from Byzantine failures. \n" }, { "page_number": 134, "text": "Chapter | 7 Internet Security\n101\n What will instead be the focus of the discussion are \nattacks on the messages exchanged between computers \nthemselves. As we will see, even with this more limited \nscope, there are plenty of opportunities for things to go \nwrong. \n The Dolev-Yao Adversary Model \n Security analyses of systems traditionally begin with a \nmodel of the attacker, and we follow this tradition. Dolev \nand Yao formulated the standard attack model against \nmessages exchanged over a network. The Dolev-Yao \nmodel makes the following assumptions about an attacker: \n ● Eavesdrop. An adversary can listen to any message \nexchanged through the network. \n ● Forge. An adversary can create and inject entirely \nnew messages into the datastream or change \nmessages in flight; these messages are called \n forgeries . \n ● Replay. A special type of forgery, called a replay , is \ndistinguished. To replay a message, the adversary \nresends legitimate messages that were sent earlier. \n ● Delay and rush . An adversary can delay the delivery \nof some messages or accelerate the delivery of \nothers. \n ● Reorder . An adversary can alter the order in which \nmessages are delivered. \n ● Delete . An adversary can destroy in-transit messages, \neither selectively or all the messages in a datastream. \n This model assumes a very powerful adversary, and \nmany people who do not design network security solu-\ntions sometime assert that the model grants adversaries \nan unrealistic amount of power to disrupt network com-\nmunications. However, experience demonstrates that it \nis a reasonably realistic set of assumptions in practice; \nexamples of each threat abound, as we will see. One of \nthe reasons for this is that the environment in which the \nnetwork operates is exposed; unlike memory or micro-\nprocessors or other devices comprising a computer, there \nis almost no assurance that the network medium will be \ndeployed in a “ safe ” way. That is, it is comparatively \neasy for an attacker to anonymously access the physi-\ncal network fabric, or at least the medium monitored to \nidentify attacks against the medium and the networked \ntraffic it carries. And since a network is intended as a \ngeneric communications vehicle, it becomes necessary \nto adopt a threat model that addresses the needs of all \npossible applications. \n Layer Threats \n With the Dolev-Yao model in hand, we can examine \neach of the architectural components of the Internet pro-\ntocol suite for vulnerabilities. We next look at threats \neach component of the Internet architecture exposes \nthrough the prism of this model. The first Dolev-Yao \nassumption about adversaries is that they can eavesdrop \non any communications. \n Eavesdropping \n An attacker can eavesdrop on a communications medium \nby connecting a receiver to the medium. 
Ultimately such \na connection has to be implemented at the PHY layer \nbecause an adversary has to access some physical media \nsomewhere to be able to listen to anything at all. This \nconnection to the PHY medium might be legitimate, \nsuch as when an authorized device is compromised, or \nillegitimate, such as an illegal wiretap; it can be inten-\ntional, as when an eavesdropper installs a rogue device, \nor unintentional, such as a laptop with wireless capabili-\nties that will by default attempt to connect to any Wi-Fi \nnetwork within range. \n With a PHY layer connection, the eavesdropper can \nreceive the analog signals on the medium and decode \nthem into bits. Because of the limited scope of the PHY \nlayer function — there are no messages, only analog sig-\nnals representing bits — the damage an adversary can do \nwith only PHY layer functionality is rather limited. In \nparticular, to make sense of the bits, an adversary has \nto impose the higher-layer frame and datagram formats \nonto the received bits. That is, any eavesdropping attack \nhas to take into account at least the MAC layer to learn \nanything meaningful about the communications. Real \neavesdroppers are more sophisticated than this: They \nknow how to interpret the bits as a medium-specific \nencoding with regards to the frames that are used by the \nMAC layer. They also know how to extract the media-\nindependent representation of datagrams conveyed \nwithin the MAC frames, as well as how to extract the \ntransport layer segments from the datagrams, which can \nbe reassembled into application messages. \n The defenses erected against any threat give some \ninsight into the perceived danger of the threat. People \nare generally concerned about eavesdropping, and it is \neasy to illicitly attach listening devices to most PHY \nmedia, but detection and removal of wiretaps has not \nevolved into a comparatively large industry. An apparent \nexplanation of why this is so is that it is easier and more \ncost effective for an attacker to compromise a legitimate \n" }, { "page_number": 135, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n102\n device on the network and configure it to eavesdrop than \nit is to install an illegitimate device. The evidence for \nthis view is that the antivirus/antibot industry is gigantic \nby comparison. \n There is another reason that an antiwiretapping \nindustry has never developed for the Internet. Almost \nevery MAC module supports a special mode of operation \ncalled promiscuous mode . A MAC module in promiscu-\nous mode receives every frame appearing on the medium, \nnot just the frames addressed to itself. This allows one \nMAC module to snoop on frames that are intended for \nother parties. Promiscuous mode was intended as a trou-\nbleshooting mechanism to aid network administrators in \ndiagnosing the source of problems. However, it is also \na mechanism that can be easily abused by anyone moti-\nvated to enable promiscuous mode. \n Forgeries \n A second Dolev-Yao assumption is that the adversary \ncan forge messages. Eavesdropping is usually fairly \ninnocuous compared to forgeries, because eavesdrop-\nping merely leaks information, whereas forgeries cause \nan unsuspecting receiver to take actions based on false \ninformation. Hence, the prevention or detection of for-\ngeries is one of the central goals of network security \nmechanisms. Different kinds of forgeries are possible \nfor each architectural component of the Internet. 
We will \nconsider only a few for each layer of the Internet proto-\ncol suite, to give a taste for their variety and ingenuity. \n Unlike the eavesdropping threat, where knowledge \nof higher layers is essential to any successful compro-\nmise, an attacker with only a PHY layer transmitter (and \nno higher-layer mechanisms) can disrupt communica-\ntions by jamming the medium — that is, outputting noise \nonto the medium in an effort to disrupt communications. \nA jammer creates signals that do not necessarily corre-\nspond to any bit patterns. The goal of a pure PHY layer \njammer is denial of service (DoS) — that is, to fill the \nmedium so that no communications can take place. \n Sometimes it is feasible to create a jamming device \nthat is sensitive to the MAC layer formats above it, to \nselectively jam only some frames. Selective jamming \nrequires a means to interpret bits received from the \nmedium as a higher-layer frame or datagram, and the \ntargeted frames to jam are recognized by some crite-\nrion, such as being sent from or to a particular address. \nSo that it can enable its own transmitter before the \nframe has been entirely received by its intended destina-\ntion, the jammer’s receiver must recognize the targeted \nframes before they are fully transmitted. When this is \ndone correctly, the jammer’s transmitter interferes with \nthe legitimate signals, thereby introducing bit errors \nin the legitimate receiver’s decoder. This results in the \nlegitimate receiver’s MAC layer detecting the bit errors \nwhile trying to verify the frame check sequence, caus-\ning it to discard the frame. Selective jamming is harder \nto implement than continuous jamming, PHY layer \njamming, but it is also much harder to detect, because \nthe jammer’s signal source transmits only when legiti-\nmate devices transmit as well, and only the targeted \nframes are disrupted. Successful selective jamming usu-\nally causes administrators to look for the source of the \ncommunications failure on one of the communicating \ndevices instead of in the network for a jammer. \n There is also a higher-layer analog to jamming, \ncalled message flooding . Denial-of-service (DoS) is \nalso the goal of message flooding. The technique used \nby message flooding is to create and send messages \nat a rate high enough to exhaust some resource. It is \npopular today, for instance, for hackers to compromise \nthousands of unprotected machines, which they use \nto generate simultaneous messages to a targeted site. \nExamples of this kind of attack are to completely fill \nthe physical medium connecting the targeted site to the \nInternet with network layer datagrams — this is usually \nhard or impossible — or to generate transport layer con-\nnection requests at a rate faster than the targeted site can \nrespond. Other variants — request operations that lead \nto disk I/O or require expensive cryptographic opera-\ntions — are also common. Message flooding attacks have \nthe property that they are legitimate messages from \nauthorized parties but simply timed so that collectively \ntheir processing exceeds the maximum capacity of the \ntargeted system. \n Let’s turn away from resource-clogging forgeries and \nexamine forgeries designed to cause a receiver to take an \nunintended action. It is possible to construct this type of \nforgery at any higher layer: forged frames, datagrams, \nnetwork segments, or application messages. 
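 To make the message-flooding idea above concrete, here is a minimal sketch, not taken from the chapter, of the kind of rate check a receiving site might apply to incoming connection requests. It is written in Python; the class name, threshold, and window size are invented for illustration, and real defenses rely on mechanisms such as SYN cookies and kernel-level counters rather than application-level bookkeeping like this.

    from collections import deque
    import time

    class FloodMonitor:
        """Flag a source that opens connections faster than a chosen threshold."""
        def __init__(self, max_requests, window_seconds):
            self.max_requests = max_requests
            self.window = window_seconds
            self.arrivals = {}                # source address -> deque of arrival times

        def record(self, source, now=None):
            """Record one connection request; return True if the source looks like a flood."""
            now = time.monotonic() if now is None else now
            queue = self.arrivals.setdefault(source, deque())
            queue.append(now)
            while queue and now - queue[0] > self.window:   # drop arrivals outside the window
                queue.popleft()
            return len(queue) > self.max_requests

    monitor = FloodMonitor(max_requests=100, window_seconds=1.0)
    for i in range(250):                                     # simulate 1000 requests per second
        flooding = monitor.record("203.0.113.7", now=i * 0.001)
    print("flood detected:", flooding)                       # -> flood detected: True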
\n To better understand how forgeries work, we need \nto more closely examine Internet “ identities ” — MAC \naddresses, IP addresses, transport port numbers, and \nDNS names — as well as the modules that use or support \ntheir use. The threats are a bit different at each layer. \n Recall that each MAC layer module is manufactured \nwith its own “ hardware ” address, which is supposed to \nbe a globally unique identifier for the MAC layer mod-\nule instance. The hardware address is configured in \nthe factory into nonvolatile memory. At boot time the \nMAC address is transferred from nonvolatile memory \ninto operational RAM maintained by the MAC module. \n" }, { "page_number": 136, "text": "Chapter | 7 Internet Security\n103\n A transmitting MAC layer module inserts the MAC \naddress from RAM into each frame it sends, thereby \nadvertising an “ identity. ” The transmitter also inserts the \nMAC address of the intended receiver on each frame, \nand the receiving MAC layer matches the MAC address \nin its own RAM against the destination field in each \nframe sent over the medium. The receiver ignores the \nframe if the MAC addresses don’t match and receives \nthe frame otherwise. \n In spite of this system, it is useful — even neces-\nsary sometimes — for a MAC module to change its \nMAC address. For example, sometimes a manufacturer \nrecycles MAC addresses so that two different mod-\nules receive the same MAC address in the factory. If \nboth devices are deployed on the same network, nei-\nther works correctly until one of the two changes its \naddress. Because of this problem, all manufacturers pro-\nvide a way for the MAC module to alter the address in \nRAM. This can always be specified by software via the \nMAC module’s device driver, by replacing the address \nretrieved from hardware at boot time. \n Since it can be changed, attacks will find it . A com-\nmon attack in Wi-Fi networks, for instance, is for the \nadversary to put the MAC module of the attacking device \ninto promiscuous mode, to receive frames from other \nnearby systems. It is usually easy to identify another cli-\nent device from the received frames and extract its MAC \naddress. The attacker then reprograms its own MAC \nmodule to transmit frames using the address of its vic-\ntim. A goal of this attack is usually to “ hijack ” the ses-\nsion of a customer paying for Wi-Fi service; that is, the \nattacker wants free Internet access for which someone \nelse has already paid. Another goal of such an attack is \noften to avoid attribution of the actions being taken by \nthe attacker; any punishment for antisocial or criminal \nbehavior will likely be attributed to the victim instead of \nthe attacker because all the frames that were part of the \nbehavior came from the victim’s address. \n A similar attack is common at the network layer. \nThe adversary will snoop on the IP addresses appearing \nin the datagrams encoded in the frames and use these \ninstead of their own IP addresses to source IP datagrams. \nThis is a more powerful attack than that of utilizing only \na MAC address, because IP addresses are global; an \nIP address is an Internet-wide locator, whereas a MAC \naddress is only an identifier on the medium to which the \ndevice is physically connected. \n Manipulation of MAC and IP addresses leads directly \nto a veritable menagerie of forgery attacks and enables \nstill others. 
A very selective list of examples must suffice \nto illustrate the ingenuity of attackers: \n ● TCP uses sequence numbers as part of its reli-\nability scheme. TCP is supposed to choose the first \nsequence number for a connection randomly. If an \nattacker can predict the first sequence number for \na TCP connection, an attacker who spoofs the IP \naddress of one of the parties to the connection can \nhijack the session by interjecting its own datagrams \ninto the flow that use the correct sequence numbers. \nThis desynchronizes the retry scheme for the device \nbeing spoofed, which then drops out from the con-\nversation. This attack seems to have become rela-\ntively less common than other attacks over the past \nfew years, since most TCP implementations have \nbegun to utilize better random number generators to \nseed their sequence numbers. \n ● An attacker can generate an ARP response to any \nARP request, thus claiming to use any requested IP \naddress. This is a common method to hijack another \nmachine’s IP address; it is a very effective technique \nwhen the attacker has a fast machine and the victim \nmachine responds more slowly. \n ● An attacker can generate DHCP response messages \nreplying to DHCP requests. This technique is often \nused as part of a larger forgery, such as the evil twin \nattack, whereby an adversary masquerades as an \naccess point for a Wi-Fi public hot spot. The receipt \nof DHCP response messages convinces the victim \nit is connecting to an access point operated by the \nlegitimate hotspot. \n ● A variant is to generate a DHCP request with the \nhardware MAC address of another device. This \nmethod is useful when the attacker wants to ascribe \naction it takes over the Internet to another device. \n ● An attacker can impersonate the DNS server, \nresponding to requests to resolve human-readable \nnames into IP addresses. The IP address in the \nresponse messages point the victim to a site \ncontrolled by the attacker. This is becoming a \ncommon attack used by criminals attempting to \ncommit financial fraud, such as stealing credit card \nnumbers. \n Replay \n Replay is a special forgery attack. It occurs when an \nattacker records frames or datagrams and then retrans-\nmits them unchanged at a later time. \n This might seem like an odd thing to do, but replay \nattacks are an especially useful way to attack stateful \nmessaging protocols, such as a routing protocol. Since \nthe goal of a routing protocol is to allow every router to \n" }, { "page_number": 137, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n104\n know the current topology of the network, a replayed \nrouting message can cause the routers receiving it to uti-\nlize out-of-date information. \n An attacker might also respond to an ARP request \nsent to a sabotaged node or a mobile device that has \nmigrated to another part of the Internet, by sending a \nreplayed ARP response. This replay indicates the node \nis still present, thus masking the true network topology. \n Replay is also often a valuable tool for attacking a \nmessage encryption scheme. By retransmitting a mes-\nsage, an attacker can sometimes learn valuable informa-\ntion from a message decrypted and then retransmitted \nwithout encryption on another link. \n A primary use of replay, however, is to attack session \nstartup protocols. 
Protocol startup procedures establish \nsession state, which is used to operate the link or con-\nnection, and determine when some classes of failures \noccur. Since this state is not yet established when the \nsession begins, startup messages replayed from prior \ninstances of the protocol will fool the receiver into allo-\ncating a new session. This is a common DoS technique. \n Delay and Rushing \n Delay is a natural consequence of implementations of \nthe Internet architecture. Datagrams from a single con-\nnection typically transit a path across the Internet in \nbursts. This happens because applications at the sender, \nwhen sending large messages, tend to send messages \nlarger than a single datagram. The transport layer parti-\ntions these messages into segments to fit the maximum \nsegment size along the path to the destination. The MAC \ntends to output all the frames together as a single blast \nafter it has accessed the medium. Therefore, routers with \nmany links can receive multiple datagram bursts at the \nsame time. When this happens, a router has to temporar-\nily buffer the burst, since it can output only one frame \nconveying a datagram per link at a time. Simultaneous \narrival of bursts of datagrams is one source of conges-\ntion in routers. This condition usually manifests itself at \nthe application by slow communications time over the \nInternet. Delay can also be introduced by routers inten-\ntionally, such as via traffic shaping. \n There are several ways in which attackers can induce \ndelays. We illustrate this idea by describing two different \nattacks. It is not uncommon for an attacker to take over a \nrouter, and when this happens, the attacker can introduce \nartificial delay, even when the router is uncongested. As \na second example, attackers with bot armies can bom-\nbard a particular router with “ filler ” messages, the only \npurpose of which is to congest the targeted router. \n Rushing is the opposite problem: a technique to make \nit appear that messages can be delivered sooner than can \nbe reasonably expected. Attackers often employ rush-\ning attacks by first hijacking routers that service parts of \nthe Internet that are fairly far apart in terms of network \ntopology. The attackers cause the compromised rout-\ners to form a virtual link between them. A virtual link \nemulates a MAC layer protocol but running over a trans-\nport layer connection between the two routers instead of \na PHY layer. The virtual link, also called a wormhole , \nallows the routers to claim they are connected directly by \na link and so are only one hop apart. The two compro-\nmised routers can therefore advertise the wormhole as a \n “ low-cost ” path between their respective regions of the \nInternet. The two regions then naturally exchange traffic \nthrough the compromised routers and the wormhole. \n An adversary usually launches a rushing attack as \na prelude to other attacks. By attracting traffic to the \nwormhole endpoints, the compromised routers can \neavesdrop and modify the datagrams flowing through \nthem. Compromised routers at the end of a wormhole are \nalso an ideal vehicle for selective deletion of messages. \n Reorder \n A second natural event in the Internet is datagram reor-\ndering . The two most common reordering mechanisms \nare forwarding table updates and traffic-shaping algo-\nrithms. Reordering due to forwarding takes place at the \nnetwork layer; traffic shaping can be applied at the MAC \nlayer or higher. 
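 As a small illustration of how an endpoint might quantify the reordering just introduced, the following sketch, which is not part of the original text, compares sender-assigned sequence numbers against the order in which datagrams actually arrive. The function name and the sample arrival order are invented for the example.

    def reorder_stats(arrival_order):
        """Return (count of out-of-order arrivals, maximum displacement observed).

        arrival_order: sequence numbers, as assigned by the sender, listed in
        the order the datagrams were received.
        """
        out_of_order = 0
        max_displacement = 0
        highest_seen = None
        for seq in arrival_order:
            if highest_seen is not None and seq < highest_seen:
                out_of_order += 1                      # a later-sent datagram overtook this one
                max_displacement = max(max_displacement, highest_seen - seq)
            else:
                highest_seen = seq
        return out_of_order, max_displacement

    # Datagrams sent as 1..6 but observed arriving in this order:
    print(reorder_stats([1, 2, 4, 3, 6, 5]))           # -> (2, 1)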
\n The Internet reconfigures itself automatically as rout-\ners set up new links with neighboring routers and tear \ndown links between routers. These changes cause the \nrouting application on each affected router to send an \nupdate to its neighbors, describing the topology change. \nThese changes are gossiped across the network until \nevery router is aware of what happened. Each router \nreceiving such an update modifies its forwarding table to \nreflect the new Internet topology. \n Since the forwarding table updates take place asyn-\nchronously from datagram exchanges, a router can select \na different forwarding path for each datagram between \neven the same two devices. This means that two data-\ngrams sent in order at the message source can arrive in \na different order at the destination, since a router can \nupdate its forwarding table between the selection of a \nnext hop for different datagrams. \n The second reordering mechanism is traffic shaping, \nwhich gets imposed on the message flow to make bet-\nter use of the communication resources. One example is \n" }, { "page_number": 138, "text": "Chapter | 7 Internet Security\n105\n quality of service. Some traffic classes, such as voice or \nstreaming video, might be given higher priority by rout-\ners than best-effort traffic, which constitutes file transfers. \nHigher-priority means the router will send datagrams \ncarrying voice or video first while buffering the traffic \nlonger. Endpoint systems also apply traffic-shaping \nalgorithms in an attempt to make real-time applications \nwork better, without gravely affecting the performance of \napplications that can wait for their data. Any layer of the \nprotocol stack can apply traffic shaping to the messages \nit generates or receives. \n An attacker can emulate reordering any messages it \nintercepts, but since every device in the Internet must \nrecover from message reordering anyway, reordering \nattacks are generally useful only in very specific con-\ntexts. We will not discuss them further. \n Message Deletion \n Like reordering, message deletion can happen through \nnormal operation of the Internet modules. A MAC layer \nwill drop any frame it receives with an invalid frame \ncheck sequence. A network layer module will discard \nany datagram it receives with an IP header error. A \ntransport layer will drop any data segment received with \na data checksum error. A router will drop perfectly good \ndatagrams after receiving too many simultaneous bursts \nof traffic that lead to congestion and exhaustion of its \nbuffers. For these reasons, TCP was designed to retrans-\nmit data segments in an effort to overcome errors. \n The last class of attack possible with a Dolev-Yao \nadversary is message deletion. Two message deletion \nattacks occur frequently enough to be named: black-hole \nattacks and gray-hole attacks . \n Black-hole attacks occur when a router deletes all \nmessages it is supposed to forward. From time to time \na router is misconfigured to offer a zero-cost routes to \nevery destination in the Internet. This causes all traffic \nto be sent to this router. Since no device can sustain such \na load, the router fails. The neighboring routers cannot \ndetect the failure rapidly enough to configure alternate \nroutes, and they fail as well. This continues until a sig-\nnificant portion of the routers in the Internet fail, result-\ning in a black hole: Messages flow into the collapsed \nportion of the Internet and never flow out. 
In a black-hole attack, an attacker intentionally misconfigures a router to produce the same effect. Black-hole attacks also occur frequently in small-scale sensor, mesh, and peer-to-peer file networks.
 A gray-hole attack is a selective deletion attack. Targeted jamming is one type of selective message deletion attack. More generally, an adversary can discard any message it intercepts in the Internet, thereby preventing its ultimate delivery. An adversary intercepting and selectively deleting messages can be difficult to detect and diagnose, so this is a powerful attack. It is normally accomplished via compromised routers.
 A subtler, indirect form of message deletion is also possible through the introduction of forwarding loops. Each IP datagram header has a time-to-live (TTL) field, limiting the number of hops that a datagram can make. This field is set to 255 by the initiator and decremented by each router the datagram passes through. If a router decrements the TTL field to zero, it discards the datagram.
 The reason for the TTL field is that the routing protocols that update the forwarding tables can temporarily cause forwarding loops, because updates are applied asynchronously as the routing updates are gossiped through the Internet. For instance, if router A gets updated prior to router B, A might believe that the best path to some destination C is via B, whereas B believes the best route to C is via A as the next hop. Messages for C will ping-pong between A and B until one or both are updated with new topology information.
 An attacker who compromises a router or forges its routing traffic can intentionally introduce forwarding loops. This causes messages addressed to the destinations affected by the forgery to circulate until the TTL field gets decremented to zero. These attacks are also difficult to detect, because all the routers are behaving according to their specifications, but messages are being mysteriously lost.
 3. DEFENDING AGAINST ATTACKS ON THE INTERNET
 Now that we have a model for thinking about the threats against communication and we understand how the Internet works, we can examine how its communications can be protected. Here we will explain how cryptography is used to protect messages exchanged between various devices on the Internet and illustrate the techniques with examples.
 As might be expected, the techniques vary according to scenario. Methods that are effective for an active session do not work for session establishment. Methods that are required for session establishment are too expensive for an established session. It is interesting that similar methods are used at each layer of the Internet architecture for protecting a session and for session establishment and that each layer defines its own security
" }, { "page_number": 139, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n106\n protocols. Many find the similarity of security solutions at different layers curious and wonder why security is not centralized in a single layer. We will explain why the same mechanisms solve different problems at different layers of the architecture, to give better insight into what each is for.
 Layer Session Defenses
 A session is a series of one or more related messages. The easiest and most straightforward defenses protect the exchange of messages that are organized into sessions, so we will start with session-oriented defenses.
\n Cryptography, when used properly, can provide reli-\nable defenses against eavesdropping. It can also be used \nto detect forgery and replay attacks, and the methods \nused also have some relevance to detecting reordering \nand message deletion attacks. We will discuss how this \nis accomplished and illustrate the techniques with TLS, \nIPsec, and 802.11i. \n Defending against Eavesdropping \n The primary method used to defend against eavesdrop-\nping is encryption. Encryption was invented with the \ngoal of making it infeasible for any computationally lim-\nited adversary to be able to learn anything useful about a \nmessage that cannot already be deduced by some other \nmeans, such as its length. Encryption schemes that appear \nto meet this goal have been invented and are in wide-\nspread use on the Internet. Here we will describe how \nthey are used. \n There are two forms of encryption: symmetric encryp-\ntion, in which the same key is used to both encrypt and \ndecrypt, and asymmetric encryption, in which encryption \nand decryption use distinct but related keys. The proper-\nties of each are different. Asymmetric encryption tends to \nbe used only for applications related to session initiation \nand assertions about policy (although this is not univer-\nsally true). The reason for this is that a single asymmetric \nkey operation is generally too expensive to be applied to \na message stream of arbitrary length. We therefore focus \non symmetric encryption and how it is used by network \nsecurity protocols. \n A symmetric encryption scheme consists of three \noperations: key generate , encrypt , and decrypt . The key \ngenerate operation creates a key, which is a secret. The \nkey generate procedure is usually application specific; \nwe describe some examples of key generate operations \nin our discussion of session startup. Once generated, \nthe key is used by the encrypt operation to transform \n plaintext messages — that is, messages that can be read \nby anyone — into ciphertext , which is messages that can-\nnot be read by any computationally limited party who \ndoes not possess the key. The key is also used by the \ndecrypt primitive to translate ciphertext messages back \ninto plaintext messages. \n There are two kinds of symmetric encryption algo-\nrithms. The first is type is called a block cipher and the \nsecond a stream cipher . Block and stream ciphers make \ndifferent assumptions about the environment in which \nthey operate, making each more effective than the other \nat different protocol layers. \n A block cipher divides a message into chunks of a \nfixed size called blocks and encrypts each block sepa-\nrately. Block ciphers have the random access property, \nmeaning that a block cipher can efficiently encrypt or \ndecrypt any block utilizing an initialization vector in \nconjunction with the key. This property makes block \nciphers a good choice for encrypting the content of \nMAC layer frames and network layer datagrams, for two \nreasons. First, the chunking behavior of a block cipher \ncorresponds nicely to the packetization process used to \nform datagrams from segments and frames from data-\ngrams. Second, and perhaps more important, the Internet \narchitecture models the lower layers as “ best-effort ” \nservices, meaning that it assumes that datagrams and \nframes are sent and then forgotten. If a transmitted data-\ngram is lost due to congestion or bit error (or attack), it \nis up to the transport layer or application to recover. 
The random access property makes it easy to restart a block cipher anywhere it’s needed in the datastream. Popular examples of block ciphers include AES, DES, and 3DES, used by Internet security protocols.
 Block ciphers are used by the MAC and network layers to encrypt as follows: First, a block cipher mode of operation is selected. A block cipher itself encrypts and decrypts only single blocks. A mode of operation is a set of rules extending the encryption scheme from a single block to messages of arbitrary length. The most popular modes of operation used in the Internet are counter mode and cipher-block chaining (CBC) mode. Both require an initialization vector, which is a counter value for counter mode and a randomly generated bit vector for cipher-block chaining mode. To encrypt a message, the mode of operation first partitions the message into a sequence of blocks whose sizes equal that of the cipher’s block size, padding if needed to bring the message length up to a multiple of the block size. The mode of operation then encrypts each block under the key while combining initialization vectors with the block in a mode-specific fashion.
" }, { "page_number": 140, "text": "Chapter | 7 Internet Security\n107\n For example, counter mode uses a counter as its initialization vector, which it increments, encrypts, and then exclusive-ORs the result with the block:
 counter → counter + 1; E ← Encrypt_Key(counter); CipherTextBlock ← E ⊕ PlainTextBlock
 where ⊕ denotes exclusive OR. The algorithm outputs the new (unencrypted) counter value, which is used to encrypt the next block, and CipherTextBlock.
 The process of assembling a message from a message encrypted under a mode of operation is very simple: Prepend the original initialization vector to the sequence of ciphertext blocks, which together replace the plaintext payload for the message. The right way to think of this is that the initialization vector becomes a new message header layer. Also prepended is a key identifier, which indicates to the receiver which key it should utilize to decrypt the payload. This is important because in many cases it is useful to employ multiple connections between the same pair of endpoints, and so the receiver can have multiple decryption keys to choose from for each message received from a particular source.
 A receiver reverses this process: First it extracts the initialization vector from the data payload, then it uses this and the ciphertext blocks to recover the original plaintext message by reversing the steps in the mode of operation.
 This paradigm is widely used in MAC and network layer security protocols, including 802.11i, 802.16e, 802.1ae, and IPsec, each of which utilizes AES in modes related to counter and cipher-block chaining modes.
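 The counter-mode construction just described can be sketched in a few lines of Python. To keep the sketch dependency free, SHA-256 is used as a stand-in for the block cipher’s Encrypt_Key operation; an actual implementation would use AES, as the protocols named above do, and the key, counter value, and message are invented for illustration.

    import hashlib

    BLOCK_SIZE = 16    # bytes, matching a 128-bit block cipher such as AES

    def encrypt_block(key, counter_block):
        # Stand-in for the block cipher's Encrypt_Key operation (illustration only).
        return hashlib.sha256(key + counter_block).digest()[:BLOCK_SIZE]

    def counter_mode(key, initial_counter, message):
        """counter -> counter + 1; E <- Encrypt_Key(counter); CipherTextBlock <- E XOR PlainTextBlock"""
        output = bytearray()
        counter = initial_counter
        for offset in range(0, len(message), BLOCK_SIZE):
            counter += 1
            keystream = encrypt_block(key, counter.to_bytes(BLOCK_SIZE, "big"))
            block = message[offset:offset + BLOCK_SIZE]
            output.extend(b ^ k for b, k in zip(block, keystream))
        return bytes(output)

    key = b"0123456789abcdef"
    initialization_vector = 7          # sent in the clear as part of the message header
    plaintext = b"attack at dawn, hold the west gate"
    ciphertext = counter_mode(key, initialization_vector, plaintext)
    # Decryption is the identical operation, because XORing with the same
    # key stream a second time restores the plaintext.
    assert counter_mode(key, initialization_vector, ciphertext) == plaintext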
A stream cipher treats the data as a continuous stream and can be thought of as encrypting and decrypting data one bit at a time. Stream ciphers are usually designed so that each encrypted bit depends on all previously encrypted ones, so decryption becomes possible only if all the bits arrive in order; most true stream ciphers lack the random access property. This means that in principle stream ciphers only work in network protocols when they’re used on top of a reliable data delivery service such as TCP, and so they only work correctly below the transport layer when used in conjunction with reliable data links. Stream ciphers are attractive from an implementation perspective because they can often achieve much higher throughputs than block ciphers. RC4 is an example of a popular stream cipher.
 Stream ciphers typically do not use a mode of operation or an initialization vector at all, or at least not in the same sense as a block cipher. Instead, they are built as pseudorandom number generators, the output of which is based on a key. The random number generator is used to create a sequence of bits that appear random, called a key stream, and the result is exclusive OR’d with the plaintext data to create ciphertext. Since XORing with the same key stream a second time restores the original bits, decryption with a stream cipher is just the same operation: Generate the same key stream and exclusive OR it with the ciphertext to recover the plaintext. Since stream ciphers do not utilize initialization vectors, Internet protocols employing stream ciphers do not need the extra overhead of a header to convey the initialization vector needed by the decryptor in the block cipher case. Instead, these protocols rely on the sender and receiver being able to keep their respective key stream generators synchronized for each bit transferred. This implies that stream ciphers can only be used over a reliable medium such as TCP — that is, a transport that guarantees delivery of all bits in the proper order and without duplication.
 Transport Layer Security (TLS) is an example of an Internet security protocol that uses the stream cipher RC4. TLS runs on top of TCP.
 Assuming that a symmetric encryption scheme is well designed, its efficacy against eavesdropping depends on four factors. Failing to consider any of these can cause the encryption scheme to fail catastrophically.
 Independence of Keys
 This is perhaps the most important consideration for the use of encryption. All symmetric encryption schemes assume that the encryption key for each and every session is generated independently of the encryption keys used for every other session. Let’s parse this thought:
 ● Independent means selected or generated by a process that is indistinguishable by any polynomial time statistical test from the uniform distribution applied to the key space. One common failure is to utilize a key generation algorithm that is not random, such as using the MAC address or IP address of a device or the time of session creation as the basis for a key. Schemes that use such public values instead of randomness for keys are easily broken using brute-force search techniques such as dictionary attacks. A second common failure is to pick an initial key randomly but create successive keys by some simple transformation, such as incrementing the initial key, exclusive OR’ing the MAC address of the device with the key, and so on. Encryption using key generation schemes of this sort is easily broken using differential cryptanalysis and related key attacks.
" }, { "page_number": 141, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n108\n ● Each and every means each and every.
For a block cipher, reusing the same key twice with the same initialization vector can allow an adversary to recover the plaintext data from the ciphertext without using the key. Similarly, each key always causes the pseudorandom number generator at the heart of a stream cipher to generate the same key stream, and reuse of the same key stream will leak the plaintext data from the ciphertext without using the key.
 ● Methods effective for the coordinated generation of random keys at the beginning of each session constitute a complicated topic. We address it in our discussion of session startup later in the chapter.
 Limited Output
 Perhaps the second most important consideration is to limit the amount of information encrypted under a single key. The modern definition of security for an encryption scheme revolves around the idea of indistinguishability of the scheme’s output from random. This goes back to a notion of ideal security proposed by Shannon. This has a dramatic effect on how long an encryption key may be safely used before an adversary has sufficient information to begin to learn something about the encrypted data.
 Every encryption scheme is ultimately a deterministic algorithm, and no deterministic algorithm can generate an infinite amount of output that is indistinguishable from random. This means that encryption keys must be replaced on a regular basis. The amount of data that can be safely encrypted under a single key depends very much on the encryption scheme. As usual, the limitations for block ciphers and stream ciphers are a bit different.
 Let the block size for a block cipher be some integer n > 0. Then, for any key K, for every string S1 there is another string S2 so that:
 Encrypt_K(S1) = S2 and Decrypt_K(S2) = S1
 This says that a block cipher’s encrypt and decrypt operations are permutations of the set of all bit strings whose length equals the block size. In particular, this property says that every pair of distinct n-bit strings results in distinct n-bit ciphertexts for any block cipher. However, by an elementary theorem from probability called the birthday paradox, random selection of n-bit strings should result in a 50% probability that some string is chosen at least twice after about 2^(n/2) selections. This has an important consequence for block ciphers. It says that an algorithm as simple as naïve guessing can distinguish the output of the block cipher from random after about 2^(n/2) blocks have been encrypted. This means that an encryption key should never be used to encrypt even close to 2^(n/2) blocks before a new, independent key is generated.
 To make this specific, DES and 3DES have a block size of 64 bits; AES has a 128-bit block size. Therefore a DES or 3DES key should be used to encrypt far fewer than 2^(64/2) = 2^32 blocks, whereas an AES key should never be used to encrypt as many as 2^64 blocks; doing so begins to leak information about the encrypted data without use of the encryption key. As an example, 802.11i has been crafted to limit each key to encrypting 2^48 blocks before forcing generation of a new key.
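 The arithmetic in the preceding paragraph can be checked with a short Python sketch that simply evaluates the 2^(n/2) birthday bound for the block sizes mentioned and converts the result into a volume of data; the function name and the conversion to gibibytes are only for illustration.

    def birthday_bound_blocks(block_size_bits):
        """Blocks after which block-cipher output begins to be distinguishable from random."""
        return 2 ** (block_size_bits // 2)

    for name, n in (("DES/3DES", 64), ("AES", 128)):
        blocks = birthday_bound_blocks(n)
        data_bytes = blocks * (n // 8)                  # bytes encrypted on reaching the bound
        print(f"{name}: n = {n} bits, bound = 2^{n // 2} blocks, "
              f"about {data_bytes / 2**30:,.0f} GiB of data")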
This kind of arithmetic does not work for a stream cipher, since its block size is 1 bit. Instead, the length of time a key can be safely used is governed by the periodicity of the pseudorandom number generator at the heart of the stream cipher. RC4, for instance, becomes distinguishable from random after generating about 2^31 bytes. Note that this limit is tied to the 256 bytes of internal state that RC4 maintains. This illustrates the rule of thumb that there is a birthday paradox relation between the maximum number of bits safely encrypted under a stream cipher key and the cipher’s internal state.
 Key Size
 The one “fact” about encryption that everyone knows is that larger keys result in stronger encryption. This is indeed true, provided that the key generate operation is designed according to the independence condition. One common mistake is to properly generate a short key — say, 32 bits long — that is then concatenated with itself to get a key of the length needed by the selected encryption scheme — say, 128 bits. Another similar error is to generate a short key and manufacture the remainder of the key with known public data, such as an IP address. These methods result in a key that is only as strong as the short key that was generated randomly.
 Mode of Operation
 The final parameter is the mode of operation — that is, the rules for using a block cipher to encrypt messages whose length is different than the block cipher width. The most common problem is failure to respect the documented terms and conditions for using the mode of operation.
 As an illustration of what can go wrong — even by people who know what they are doing — cipher-block chaining mode requires that the initialization vector be
" }, { "page_number": 142, "text": "Chapter | 7 Internet Security\n109\n chosen randomly. The earliest version of the IPsec standard used cipher-block chaining mode exclusively for encryption. This standard recommended choosing initialization vectors as the final block of any prior message sent. The reasoning behind this recommendation was that, because an encrypted block cannot be distinguished from random if the number of blocks encrypted is limited, a block of a previously encrypted message ought to suffice. However, the advice given by the standard was erroneous because the initialization vector selection algorithm failed to have one property that a truly random selection has: the initialization vectors were not unpredictable. A better way to meet the randomness requirement is to increment a counter, prepend it to the message to encrypt, and then encrypt the counter value, which becomes the initialization vector. This preserves the unpredictability property at a cost of encrypting one extra block.
 A second common mistake is to design protocols using a mode of operation that was not designed to encrypt multiple blocks. For example, failing to use a mode of operation at all — using the naked encrypt and decrypt operations, with no initialization vector — is itself a mode of operation called electronic code book mode. Electronic code book mode was designed to encrypt messages that never span more than a single block — for example, encrypting keys to distribute for other operations. Using electronic code book mode on a message longer than a single block leaks information, however, because this mode allows an attacker to recognize when two plaintext blocks are the same or different. A classical example of this problem is to encrypt a photograph using electronic code book mode. The main outline of the photograph shows through plainly.
This is not a fail-\nure of the encryption scheme; it is rather using encryp-\ntion in a way that was never intended. \n Now that we understand how encryption works and \nhow it is used in Internet protocols, we should ask why \nis it needed at different layers. What does encryption at \neach layer of the Internet architecture accomplish? The \nbest way to answer this question is to watch what it does. \n Encryption applied at the MAC layer encrypts a sin-\ngle link. Data is encrypted prior to being put on a link and \nis decrypted again at the other end of a link. This leaves \nthe IP datagrams conveyed by the MAC layer frames \nexposed inside each router as they wend their way across \nthe Internet. Encryption at the MAC layer is a good way \nto transparently prevent data from leaking, since many \ndevices never use encryption. For example, many organiza-\ntions are distributed geographically and use direct point-to-\npoint links to connect sites; encrypting the links connecting \nsites prevents an outsider from learning the organization’s \nconfidential information merely by eavesdropping. Legal \nwiretaps also depend on this arrangement because they \nmonitor data inside routers. The case of legal wiretaps also \nillustrates the problem with link layer encryption only: If \nan unauthorized party assumes control of a router, they are \nfree to read all the datagrams that traverse the router. \n IPsec operates essentially at the network layer. \nApplying encryption via IPsec prevents exposure of the \ndatagrams ’ payload end to end, so the data is still pro-\ntected within routers. Since the payload of a datagram \nincludes both the transport layer header as well as its \ndata segments, applying encryption at the IPsec layer \nhides the applications being used as well as the data. \nThis provides a big boost in confidentiality but also \nleads to more inefficient use of the Internet, since traffic-\nshaping algorithms in routers critically depend on having \ncomplete access to the transport headers. Using encryp-\ntion at the IPsec layer also means the endpoints do not \nhave to know whether each link a datagram traverses \nthrough the Internet applies encryption; using encryption \nat this layer simplifies the security analysis over encryp-\ntion applied at the MAC layer alone. Finally, like MAC \nlayer encryption, IPsec is a convenient tool for introduc-\ning encryption transparently to protect legacy applica-\ntions, which by and large ignored confidentiality issues. \n The transport layer encryption function can be illus-\ntrated by TLS. Like IPsec, TLS operates end to end, but \nTLS encrypts only the application data carried in the \ntransport data segments, leaving the transport header \nexposed. Thus, with TLS, routers can still perform their \ntraffic-shaping function, and we still have the simplified \nsecurity analysis that comes with end-to-end encryption. \nThe first downside of this method is that the exposure \nof the transport headers gives the attacker greater knowl-\nedge about what might be encrypted in the payload. The \nsecond downside is that it is somewhat more awkward to \nintroduce encryption transparently at the transport layer; \nencryption at the transport layer requires cooperation by \nthe application to perform properly. \n This analysis says that it is reasonable to employ \nencryption at any one of the network protocol layers, \nbecause each solves a slightly different problem. 
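 As an illustration of the point that transport-layer encryption requires the application’s cooperation, the sketch below shows a client explicitly wrapping its TCP socket in TLS with Python’s standard ssl module before any application data is sent. The host name is a placeholder, the example assumes outbound network access, and it is not drawn from the chapter itself.

    import socket
    import ssl

    host = "www.example.com"                     # placeholder destination
    context = ssl.create_default_context()       # validates the server's certificate chain

    # The application itself asks for TLS: the plain TCP socket is wrapped before any
    # data is written. Routers along the path still see the TCP header, but the request
    # and response below are encrypted end to end.
    with socket.create_connection((host, 443)) as raw_socket:
        with context.wrap_socket(raw_socket, server_hostname=host) as tls_socket:
            tls_socket.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode() + b"\r\nConnection: close\r\n\r\n")
            print(tls_socket.recv(200).decode(errors="replace"))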
\n Before leaving the topic of encryption, it is worth-\nwhile to emphasize what encryption does and does not \ndo. Encryption, when properly used, is a read access \ncontrol . If used properly, no one who lacks access to the \nencryption key can read the encrypted data. Encryption, \nhowever, is not a write access control ; that is, it does \nnot maintain the integrity of the encrypted data. Counter \nmode and stream ciphers are subject to bit-flipping \nattacks, for instance. An attacker launches a bit-flipping \n" }, { "page_number": 143, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n110\n attack by capturing a frame or datagram, changing one or \nmore bits from 0 to 1 (or vice versa) and retransmitting \nthe altered frame. The resulting frame decrypts to some \nresult — the altered message decrypts to something — and \nif bits are flipped judiciously, the result can be intelligi-\nble. As a second example, cipher-block chaining mode \nis susceptible to cut -and-paste attacks, whereby the \nattack cuts the final few blocks from one message in a \nstream and uses them to overwrite the final blocks of a \nlater stream. At most one block decrypts to gibberish; \nif the attacker chooses the paste point judiciously, for \nexample, so that it falls where the application ought to \nhave random data anyway, this can be a powerful attack. \nThe upshot is that even encrypted data needs an integrity \nmechanism to be effective, which leads us to the subject \nof defenses against forgeries. \n Defending against Forgeries and Replays \n Forgery and replay detection are usually treated together \nbecause replays are a special kind of forgery. We follow \nthis tradition in our own discussion. Forgery detection, \nnot eavesdropping protection, is the central concern for \ndesigns to secure network protocol. This is because every \naccepted forgery of an encrypted frame or datagram is \na question for which the answer can tell the adversary \nabout the encryption key or plaintext data. Just as in \nschool, an attacker can learn about the encrypted stream \nor encryption key faster by asking questions rather than \nsitting back and passively listening. \n Since eavesdropping is a passive attack, whereas \ncreating forgeries is active, turning from the subject of \neavesdropping to that of forgeries changes the security \ngoals subtly. Encryption has a security goal of preven-\ntion — to prevent the adversary from learning anything \nuseful about the data that cannot be derived in other \nways. The comparable security goal for forgeries is to \nprevent the adversary from creating forgeries, which is \ninfeasible. This is because any device with a transmitter \nappropriate for the medium can send forgeries by creat-\ning frames and datagrams using addresses employed by \nother parties. What is feasible is a form of asking for-\ngiveness instead of permission: Prevent the adversary \nfrom creating undetected forgeries. \n The cryptographic tool underlying forgery detection \nis called a message authentication code . Like an encryp-\ntion scheme, a message authentication code consists of \nthree operations: a key generation operation, a tagging \noperation, and a verification operation. Also like encryp-\ntion, the key generation operation, which generates a \nsymmetric key shared between the sender and receiver, \nis usually application specific. The tagging and veri-\nfication operations, however, are much different from \nencrypt and decrypt. 
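 The three operations just outlined can be made concrete with HMAC-SHA-256 from the Python standard library. The chapter does not prescribe a particular message authentication code, so HMAC is used here purely as a familiar example, and the key size, message, and sequence-number formatting are invented.

    import hashlib
    import hmac
    import os

    def generate_key():
        # Key generation: a fresh random secret shared by exactly two parties.
        return os.urandom(32)

    def tag(key, message):
        # Tagging: a cryptographic checksum that depends on both the key and the message.
        return hmac.new(key, message, hashlib.sha256).digest()

    def verify(key, message, received_tag):
        # Verification: recompute the tag and compare it in constant time.
        return hmac.compare_digest(tag(key, message), received_tag)

    key = generate_key()
    message = b"seq=42|transfer 100 to account 7"
    t = tag(key, message)
    print(verify(key, message, t))                                    # True: accepted as genuine
    print(verify(key, b"seq=42|transfer 900 to account 7", t))        # False: forgery detected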
\n The tagging operation takes the symmetric key, called \nan authentication key , and a message as input parameters \nand outputs a tag , which is a cryptographic checksum \ndepending on the key and message as its output. \n The verification operation takes three input param-\neters: the symmetric key, the message, and its tag. The \nverification algorithm recomputes the tag from the key \nand message and compares the result against the tag input \ninto the algorithm. If the two fail to match, the verify \nalgorithm outputs a signal that the message is a forgery. If \nthe input and locally computed tag match, the verify algo-\nrithm declares that the message is authenticated. \n The conclusion drawn by the verify algorithm of a \nmessage authentication code is not entirely logically \ncorrect. Indeed, if the tag is n bits in length, an attacker \ncould generate a random n bit string as its tag and it \nwould have one chance in 2 n of being valid. A message \nauthentication scheme is considered good if there are no \npolynomial time algorithms that are significantly better \nthan random guessing at producing correct tags. \n Message authentication codes are incorporated into \nnetwork protocols in a manner similar to encryption. \nFirst, a sequence number is prepended to the data that is \nbeing forgery protected; the sequence number, we will \nsee, is used to detect replays. Next, a message authenti-\ncation code tagging operation is applied to the sequence \nnumber and message body to produce a tag. The tag is \nappended to the message, and a key identifier for the \nauthentication key is prepended to the message. The mes-\nsage can then be sent. The receiver determines whether \nthe message was a forgery by first finding the authentica-\ntion key identified by the key identifier, then by checking \nthe correctness of the tag using the message authentica-\ntion code’s verify operation. If these checks succeed, the \nreceiver finally uses the sequence number to verify that \nthe message is not a replay. \n How does replay detection work? When the authenti-\ncation key is established, the sender initializes to zero the \ncounter that is used in the authenticated message. The \nreceiver meanwhile establishes a replay window, which \nis a list of all recently received sequence numbers. The \nreplay window is initially empty. To send a replay pro-\ntected frame, the sender increments his counter by one \nand prepends this at the front of the data to be authenti-\ncated prior to tagging. The receiver extracts the counter \nvalue from the received message and compares this to the \nreplay window. If the counter falls before the replay win-\ndow, which means it is too old to be considered valid, the \n" }, { "page_number": 144, "text": "Chapter | 7 Internet Security\n111\n receiver flags the message as a replay. The receiver does \nthe same thing if the counter is already represented in the \nreplay window data structure. If the counter is greater \nthan the bottom of the replay window and is a counter \nvalue that has not yet been received, the frame or data-\ngram is considered “ fresh ” instead of a replay. \n The process is simplest to illustrate for the MAC \nlayer. 
Over a single MAC link it is ordinarily impossible \nfor frames to be reordered, because a single device can \naccess the medium at a time and, because of the speed of \nelectrons or photons comprising the signals representing \nbits, at least some of the bits at the start of a frame are \nreceived prior to the final bits being transmitted (satellite \nlinks are an exception). If frames cannot be reordered \nby a correctly operating MAC layer, the replay window \ndata structure records the counter for the last received \nframe, and the replay detection algorithm merely has to \ndecide whether the replay counter value in a received \nframe is larger than that recorded in its replay window. \nIf the counter is less than or equal to the replay window \nvalue, the frame is a forgery; otherwise it is considered \ngenuine. 802.11i, 802.16, and 802.1ae all employ this \napproach to replay detection. This same approach can \nbe used by a message authentication scheme operating \nabove the transport layer, by protocols such as TLS and \nSSH (Secure Shell), since the transport eliminates dupli-\ncates and delivers bits in the order sent. The replay win-\ndow is more complicated at the network layer, however, \nbecause some reordering is natural, given that the net-\nwork reorders datagrams. Hence, for the network layer \nthe replay window is usually sized to account for the \nmaximum reordering expected in the “ normal ” Internet. \nIPsec uses this more complex replay window. \n The reason that this works is the following: Every \nmessage is given a unique, incrementing sequence \nnumber in the form of its counter value. The transmit-\nter computes the message authentication code tag over \nthe sequence number and the message data. Since it is \ninfeasible for a computationally bounded adversary to \ncreate a valid tag for the data with probability signifi-\ncantly greater than 1/2 n , a tag validated by the receiver \nimplies that the message, including its sequence number, \nwas created by the transmitter. The worst thing that \ncould have happened, therefore, is that the adversary has \ndelayed the message. However, if the sequence number \nfalls within the replay window, the message could not \nhave been delayed longer than reordering due to the nor-\nmal operation of forwarding and traffic shaping within \nthe Internet. \n A replay detection scheme limits an adversary’s \nopportunities to delete and to reorder messages. If a \nmessage does not arrive at its destination, its sequence \nnumber is never set in the receive window, so it can be \ndeclared a lost message. It is easy to track the percent-\nage of lost messages, and if this exceeds some thresh-\nold, then communications become unreliable, but more \nimportant, the cause of the unreliability can be investi-\ngated. Similarly, messages received outside the replay \nwindow can also be tracked, and if the percentage \nbecomes too high, messages are arriving out of order \nmore frequently than might be expected from normal \noperation of the Internet, pointing to a configuration \nproblem, an equipment failure, or an attack. Again, the \ncause of the anomaly can be investigated. Mechanisms \nlike these are often the way that attacks are discovered \nin the first place. The important lesson is that attacks and \neven faulty equipment or misconfigurations are often \ndifficult to detect without collecting reliability statistics, \nand the forgery detection mechanisms can provide some \nof the best reliability statistics available. 
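To make the tagging, verification, and replay-window logic described above concrete, the following minimal Python sketch uses HMAC-SHA256 as the message authentication code and the simple MAC-layer replay rule in which the counter must strictly increase. The frame layout, field sizes, and the single "last counter seen" window are illustrative assumptions, not the format of any particular protocol.

# Illustrative sketch: forgery and replay detection with a message
# authentication code (HMAC-SHA256) and a per-key transmit counter.
# Frame layout and window policy are assumptions for illustration.
import hmac, hashlib, struct

TAG_LEN = 16  # truncated tag length, in bytes

def tag(auth_key: bytes, seq: int, body: bytes) -> bytes:
    # Tagging operation: the checksum depends on the key, the
    # sequence number, and the message body.
    msg = struct.pack(">Q", seq) + body
    return hmac.new(auth_key, msg, hashlib.sha256).digest()[:TAG_LEN]

def send(auth_key: bytes, state: dict, body: bytes) -> bytes:
    # Sender increments its counter, prepends it, and appends the tag.
    state["tx_counter"] += 1
    seq = state["tx_counter"]
    return struct.pack(">Q", seq) + body + tag(auth_key, seq, body)

def receive(auth_key: bytes, state: dict, frame: bytes) -> bytes:
    # Verification operation: recompute the tag and compare, then
    # apply the MAC-layer replay rule (counter must strictly increase).
    seq = struct.unpack(">Q", frame[:8])[0]
    body, recv_tag = frame[8:-TAG_LEN], frame[-TAG_LEN:]
    if not hmac.compare_digest(recv_tag, tag(auth_key, seq, body)):
        raise ValueError("forgery detected: bad tag")
    if seq <= state["replay_window"]:
        raise ValueError("replay detected: stale counter")
    state["replay_window"] = seq
    return body

key = b"\x00" * 32                   # shared authentication key (example value)
sender = {"tx_counter": 0}
receiver = {"replay_window": 0}
frame = send(key, sender, b"hello")
assert receive(key, receiver, frame) == b"hello"
# Delivering the same frame a second time would now raise a replay error.

At the network layer, the single last-seen counter in this sketch would be replaced by a sliding window of recently received sequence numbers, as described above for IPsec.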
\n Just like encryption, the correctness of this analy-\nsis depends critically on the design enforcing some \nfundamental assumptions, regardless of the quality of \nthe message authentication code on which it might be \nbased. If any of the following assumptions are violated, \nthe forgery detection scheme can fail catastrophically to \naccomplish its mission. \n Independence of Authentication Keys \n This is absolutely paramount for forgery detection. If \nthe message authentication keys are not independent, an \nattacker can easily create forged message authentication \ntags based on authentication keys learned in other ways. \nThis assumption is so important that it is useful to exam-\nine in greater detail. \n The first point is that a message authentication key \nutterly fails to accomplish its mission if it is shared \namong even three parties; only two parties must know \nany particular authentication key. This is very easy to \nillustrate. Suppose A, B, and C were to share a message \nauthentication key, and suppose A creates a forgery-\nprotected message it sends to C. What can C conclude \nwhen it receives this message? C cannot conclude that \nthe message actually originated from A, even though its \naddressing indicates it did, because B could have pro-\nduced the same message and used A’s address. C cannot \neven conclude that B did not change some of the message \nin transit. Therefore, the algorithm loses all its efficacy \nfor detecting forgeries if message authentication keys are \nknown by more than two parties. They must be known \nby at least two parties or the receiver cannot verify that \nthe message and its bits originated with the sender. \n" }, { "page_number": 145, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n112\n This is much different than encryption. An encryp-\ntion/decryption key can be distributed to every member \nof a group, and as long as the key is not leaked from the \ngroup to a third party, the encryption scheme remains an \neffective read access control against parties that are not \nmembers of the group. Message authentication utterly \nfails if the key is shared beyond two parties. This is due \nto the active nature of forgery attacks and the fact that \nforgery handling, being a detection rather than a preven-\ntion scheme, already affords the adversary more latitude \nthan encryption toward fooling the good guys. \n So message authentication keys must be shared \nbetween exactly two communicating devices for forgery \ndetection schemes to be effective. As with encryption \nkeys, a message authentication key must be generated \nrandomly because brute-force searches and related key \nattacks can recover the key by observing messages tran-\nsiting the medium. \n No Reuse of Replay Counter Values with a Key \n Reusing a counter with a message authentication key \nis analogous to reusing an initialization vector with an \nencryption key. Instead of leaking data, however, replay \ncounter value reuse leads automatically to trivial forger-\nies based on replayed messages. The attacker’s algo-\nrithm is trivial: Using a packet sniffer, record each of \nthe messages protected by the same key and file them in \na database. If the attacker ever receives a key identifier \nand sequence number pair already in the database, the \ntransmitter has begun to reuse replay counter values with \na key. The attacker can then replay any message with \na higher sequence number and the same key identifier. 
The receiver will be fooled into accepting the replayed message.

An implication of this approach is that known forgery detection schemes cannot be based on static keys. Suppose, to the contrary, that we attempt to design such a scheme. One could try to checkpoint in nonvolatile memory the replay counter at the transmitter and the replay window at the receiver. This approach does not work, however, in the presence of a Dolev-Yao adversary. The adversary can capture a forgery-protected frame in flight and then delete all successive messages. At its convenience later, the adversary resends the captured message. The receiver, using its static message authentication key, will verify the tag and, based on its replay window retrieved from nonvolatile storage, verify that the message is indeed in sequence and so accept the message as valid. This experiment demonstrates that forgery detection is not entirely satisfactory, because sequence numbers do not take timeliness into account. Secure clock synchronization, however, is a difficult problem with solutions that enjoy only partial success. The construction of better schemes that account for timing remains an open research problem.

 Key Size 

 If message authentication keys must be randomly generated, they must also be of sufficient size to discourage brute-force attacks. The key space has to be large enough to make exhaustive search for the message authentication key cost prohibitive. Key sizes for message authentication comparable with those for encryption are sufficient for this task.

 Message Authentication Code Tag Size 

 We have seen many aspects that make message authentication codes somewhat more fragile than encryption schemes. Message authentication code tag size is one respect in which forgery detection can, on the contrary, effectively utilize a smaller block size than an encryption scheme. Whereas an encryption scheme based on a 128-bit block size has to replace keys every 2^48 or so blocks to avoid leaking data, a forgery detection scheme can maintain the same level of security with a message authentication code tag of only about 48 bits. The difference is that the block cipher-based encryption scheme leaks information about the encrypted data due to the birthday paradox, whereas an attacker has to create a valid forgery by exhaustive search, due to the active nature of a forgery attack. In general, to determine the size of tag needed by a message authentication code, we have only to determine the maximum number of messages sent in the lifetime of the key. If this number of messages is bounded by 2^n, the tag need only be n + 1 bits long.

 As with encryption, many find it confusing that forgery detection schemes are offered at nearly every layer of the Internet architecture. To understand this, it is again useful to ask what message forgery detection accomplishes at each layer.

 If a MAC module requires forgery detection for every frame received, physical access to the medium being used by the module's PHY layer affords an attacker no opportunity to create forgeries. This is a very strong property.
It means that the only MAC layer \nmessages attacking the receiver are either generated by \nother devices authorized to attach to the medium or else \nare forwarded by the network layer modules of author-\nized devices, because all frames received directly off the \nmedium generated by unauthorized devices will be dis-\ncarded by the forgery detection scheme. A MAC layer \n" }, { "page_number": 146, "text": "Chapter | 7 Internet Security\n113\n forgery detection scheme therefore essentially provides \na write access control of the physical medium, closing \nit to unauthorized parties. Installing a forgery detection \nscheme at any other layer will not provide this kind of \nprotection. Requiring forgery detection at the MAC layer \nis therefore desirable whenever feasible. \n A different kind of assurance is provided by for-\ngery detection at the network layer. IPsec is the protocol \ndesigned to accomplish this function. If a network layer \nmodule requires IPsec for every datagram received, this \nessentially cuts off attacks against the device hosting \nthe module to other authorized machines in the entire \nInternet; datagrams generated by unauthorized devices \nwill be dropped. With this forgery detection scheme it is \nstill possible for an attacker on the same medium to gen-\nerate frames attacking the device’s MAC layer module, \nbut attacks against higher layers become computation-\nally infeasible. Installing a forgery detection scheme at \nany other layer will not provide this kind of protection. \nRequiring forgery detection at the network layer is there-\nfore desirable whenever feasible as well. \n Applying forgery detection at the transport layer \noffers different assurances entirely. Forgery detection at \nthis level assures the receiving application that the arriv-\ning messages were generated by the peer application, not \nby some virus or Trojan-horse program that has linked \nitself between modules between protocol layers on the \nsame or different machine. This kind of assurance can-\nnot be provided by any other layer. Such a scheme at \nthe network or MAC layers only defends against mes-\nsage injection by unauthorized devices on the Internet \ngenerally or directly attached to the medium, not against \nmessages generated by unauthorized processes running \non an authorized machine. Requiring forgery detection \nat the transport layer therefore is desirable whenever it \nis feasible. \n The conclusion is that forgery detection schemes \naccomplish different desirable functions at each protocol \nlayer. The security goals that are achievable are always \narchitecturally dependent, and this sings through clearly \nwith forgery detection schemes. \n We began the discussion of forgery detection by not-\ning that encryption by itself is subject to attack. One final \nissue is how to use encryption and forgery protection \ntogether to protect the same message. Three solutions \ncould be formulated to this problem. One approach might \nbe to add forgery detection to a message first — add the \nauthentication key identifier, the replay sequence number, \nand the message authentication code tag — followed by \nencryption of the message data and forgery detection \nheaders. TLS is an example Internet protocol that takes \nthis approach. The second approach is to reverse the \norder of encryption and forgery detection: First encrypt, \nthen compute the tag over the encrypted data and the \nencryption headers. 
IPsec is an example Internet protocol \ndefined to use this approach. The last approach is to \napply both simultaneously to the plaintext data. SSH is \nan Internet protocol constructed in this manner. \n Session Startup Defenses \n If encryption and forgery detection techniques are such \npowerful security mechanisms, why aren’t they used \nuniversally for all network communications? The prob-\nlem is that not everyone is your friend; everyone has \nenemies, and in every human endeavor there are those \nwith criminal mindsets who want to prey on others. Most \npeople do not go out of their way to articulate and main-\ntain relationships with their enemies unless there is some \ncompelling reason to do so, and technology is powerless \nto change this. \n More than anything else, the keys used by encryp-\ntion and forgery detection are relationship signifiers. \nPossession of keys is useful not only because they enable \nencryption and forgery detection but because their use \nassures the remote party that messages you receive will \nremain confidential and that messages the peer receives \nfrom you actually originated from you. They enable the \naccountable maintenance of a preexisting relationship. \nIf you receive a message that is protected by a key that \nonly you and I know, and you didn’t generate the mes-\nsage yourself, it is reasonable for you to conclude that I \nsent the message to you and did so intentionally. \n If keys are signifiers of preexisting relationships, \nmuch of our networked communications cannot be \ndefended by cryptography, because we do not have \npreexisting relationships with everyone. We send and \nreceive email to and from people we have never met. \nWe buy products online from merchants we have never \nmet. None of these relationships would be possible if we \nrequired all messages to be encrypted or authenticated. \nWhat is always required is an open, unauthenticated, \nrisky channel to establish new relationships; cryptogra-\nphy can only assure us that communication from par-\nties with whom we already have relationships is indeed \noccurring with the person with whom we think we are \ncommunicating. \n A salient and central assumption for both encryption \nand forgery detection is that the keys these mechanisms \nuse are fresh and independent across sessions. A ses-\nsion is an instance of exercising a relationship to effect \ncommunication. This means that secure communications \n" }, { "page_number": 147, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n114\n require a state change, transitioning from a state in which \ntwo communicating parties are not engaged in an instance \nof communication to one in which they are. This state \nchange is session establishment . \n Session establishment is like a greeting between human \nbeings. It is designed to synchronize two entities communi-\ncating over the Internet and establish and synchronize their \nkeys, key identifiers, sequence numbers and replay win-\ndows, and, indeed, all the states to provide mutual assur-\nance that the communication is genuine and confidential. \n The techniques and data structures used to establish \na secure session are different from those used to carry \non a conversation. Our next goal is to look at some rep-\nresentative mechanisms in this area. The field is vast and \nit is impossible to do more than skim the surface briefly \nto give the reader a glimpse of the beauty and richness \nof the subject. 
\n Secure session establishment techniques typically have \nthree goals, as described in the following subsections. \n Mutual Authentication \n First, session establishment techniques seek to mutually \nauthenticate the communicating parties to each other. \n Mutually authenticate means that both parties learn the \n “ identity ” of the other. It is not possible to know what \nis proper to discuss with another party without also \nknowing the identity of the other party. If only one party \nlearns the identity of the other, it is always possible for \nan imposter to masquerade as the unknown party. \n Key Secrecy \n Second, session establishment techniques seek to estab-\nlish a session key that can be maintained as a secret \nbetween the two parties and is known to no one else. \nThe session key must be independent from all other keys \nfor all other session instances and indeed from all other \nkeys. This implies that no adversary with limited com-\nputational resources can distinguish the key from ran-\ndom. Generating such an independent session key is both \nharder and easier than it sounds; it is always possible to \ndo so if a preexisting relationship already exists between \nthe two communicating parties, and it is impossible to \ndo so reliably if a preexisting relationship does not exist. \nRelationships begat other relationships, and nonrelation-\nships are sterile with respect to the technology. \n Session State Consistency \n Finally, the parties need to establish a consistent view \nof the session state. This means that they both agree on \nthe identities of both parties; they agree on the session \nkey instance; they agree on the encryption and forgery \ndetection schemes used, along with any associated state \nsuch as sequence counters and replay windows; and they \nagree on which instance of communication this session \nrepresents. If they fail to agree on a single shared param-\neter, it is always possible for an imposter to convince \none of the parties that it is engaged in a conversation that \nis different from its peer’s conversation. \n Mutual Authentication \n There are an enormous number of ways to accomplish \nthe mutual authentication function needed to initiate a \nnew session. Here we examine two that are used in vari-\nous protocols within the Internet. \n A Symmetric Key Mutual Authentication Method \n Our old friend the message authentication code can be \nused with a static, long-lived key to create a simple and \nrobust mutual authentication scheme. Earlier we stressed \nthat the properties of message authentication are incom-\npatible with the use of a static key to provide forgery \ndetection of session-oriented messages. The incompat-\nibility is due to the use of sequence numbers for replay \ndetection. We will replace sequence numbers with unpre-\ndictable quantities in order to resocialize static keys. The \ncost of this resocialization effort will be a requirement to \nexchange extra messages. \n Suppose parties A and B want to mutually authen-\nticate. We will assume that ID A is B’s name for A, \nwhereas ID B is A’s name for B. We will also assume that \nA and B share a long-lived message authentication key \n K , and that K is known only to A and B. We will assume \nthat A initiates the authentication. 
A and B can mutually authenticate using a three-message exchange, as follows. For message 1, A generates a random number R_A and sends a message containing its identity ID_A and random number to B:

 A → B: ID_A, R_A    (1)

 The notation A → B: m means that A sends message m to B. Here the message being passed is specified as ID_A, R_A, meaning it conveys A's identity ID_A and A's random number R_A. This message asserts B's name for A, to tell B which is the right long-lived key it should use in this instance of the authentication protocol. The random number R_A plays the role of the sequence number in the session-oriented case.

 If B is willing to have a conversation with A at this time, it fetches the correct message authentication key K, generates its own random number R_B, and computes a message authentication code tag T over the message ID_B, ID_A, R_A, R_B, that is, over the message consisting of both names and both random numbers. B appends the tag to the message, which it then sends to A in response to message 1:

 B → A: ID_B, ID_A, R_A, R_B, T    (2)

 B includes A's name in the message to tell A which key to use to authenticate the message. It includes A's random number R_A in the message to signal the protocol instance to which this message responds.

 The magic begins when A validates the message authentication code tag T. Since independently generated random numbers are unpredictable, A knows that the second message could not have been produced before A sent the first, because it returns R_A to A. Since the authentication code tag T was computed over the two identities ID_B and ID_A and the two random numbers R_A and R_B using the key K known only to A and B, and since A did not create the second message itself, A knows that B must have created message 2. Hence, message 2 is a response from B to A's message 1 for this instance of the protocol. If the message were to contain some other random number than R_A, A would know the message is not a response to its message 1.

 If A verifies message 2, it responds by computing a message authentication code tag T′ over ID_A and B's random number R_B, which it includes in message 3:

 A → B: ID_A, R_B, T′    (3)

 Reasoning as before, B knows A produced message 3 in response to its message 2, because message 3 could not have been produced prior to message 2 and only A could have produced the correct tag T′. Thus, after message 3 is delivered, A and B both have been assured of each other's identity, and they also agree on the session instance, which is identified by the pair of random numbers R_A and R_B.

 A deeper analysis of the protocol reveals that message 2 must convey both identities and both random numbers protected from forgery by the tag T. This construction binds A's view of the session with B's. This binding prevents interleaving or man-in-the-middle attacks. As an example, without this binding, a third party, C, could masquerade as B to A and as A to B.

 It is worth noting that message 1 is not protected from either forgery or replay. This lack of any protections is an intrinsic part of the problem statement.
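The following short Python sketch plays out messages (1) through (3), with HMAC-SHA256 standing in for the message authentication code. The field encodings and the "|" separator are illustrative assumptions; a real protocol would define an unambiguous wire format.

# Illustrative sketch of the three-message mutual authentication
# exchange (1)-(3), with HMAC-SHA256 as the message authentication
# code. Encodings and separators are assumptions for illustration.
import hmac, hashlib, os

def mac(key: bytes, *fields: bytes) -> bytes:
    # Tag over the concatenated, separated fields.
    return hmac.new(key, b"|".join(fields), hashlib.sha256).digest()

K = os.urandom(32)          # long-lived key known only to A and B
ID_A, ID_B = b"A", b"B"

# Message 1, A -> B: ID_A, R_A
R_A = os.urandom(16)

# Message 2, B -> A: ID_B, ID_A, R_A, R_B, T
R_B = os.urandom(16)
T = mac(K, ID_B, ID_A, R_A, R_B)

# A checks that message 2 echoes its own R_A and carries a valid tag.
assert hmac.compare_digest(T, mac(K, ID_B, ID_A, R_A, R_B))

# Message 3, A -> B: ID_A, R_B, T'
T_prime = mac(K, ID_A, R_B)

# B verifies message 3; both sides now agree on the instance (R_A, R_B).
assert hmac.compare_digest(T_prime, mac(K, ID_A, R_B))

Note how the tag in message 2 covers both identities and both random numbers; as discussed above, that binding is what prevents interleaving and man-in-the-middle attacks.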
During the protocol, A and B must transition from a state where they are unsure about the other's identity and have no communication instance instantiating the long-term relationship signified by the key K to a state where they fully agree on each other's identities and a common instance of communication expressing their long-lived relationship. A makes the transition upon verifying message 2, and there are no known ways to reassure it about B until this point of the protocol. B makes the state transition once it has completed verification of message 3. The point of the protocol is to transition from a mutually suspicious state to a mutually trusted state.

 An Asymmetric Key Mutual Authentication Method 

 Authentication based on asymmetric keys is also possible. In addition to asymmetric encryption there is also an asymmetric key analog of a message authentication code called a signature scheme. Just like a message authentication code, a signature scheme consists of three operations: key generate, sign, and verify. The key generate operation outputs two parameters, a signing key S and a related verification key V. S's key holder is never supposed to reveal S to another party, whereas V is meant to be a public value. Under these assumptions the sign operation takes the signing key S and a message M as input parameters and outputs a signature s of M. The verify operation takes the verification key V, message M, and signature s as inputs and returns whether it verifies that s was created from S and M. If the signing key S is indeed known by only one party, the signature s must have been produced by that party. This is because it is infeasible for a computationally limited party to compute the signature s without S. Asymmetric signature schemes are often called public/private key schemes because S is maintained as a secret, never shared with another party, whereas the verification key is published to everyone.

 Signature schemes were invented to facilitate authentication. To accomplish this goal, the verification key must be public, and it is usually published in a certificate, which we will denote as cert(ID_A, V), where ID_A is the identity of the key holder of S, and V is the verification key corresponding to A. The certificate is issued by a well-known party called a certificate authority. The sole job of the certificate authority is to introduce one party to another. A certificate cert(ID_A, V) issued by a certificate authority is an assertion that entity A has a public verification key V that is used to prove A's identity.

 As with symmetric authentication, hundreds of different authentication protocols can be based on signature schemes. The following is one example among legion:

 A → B: cert(ID_A, V), R_A    (4)

 Here cert(ID_A, V) is A's certificate, conveying its identity ID_A and verification key V; R_A is a random number generated by A. If B is willing to begin a new session with A, it responds with the message:

 B → A: cert(ID_B, V′), R_B, R_A, sig_B(ID_A, R_B, R_A)    (5)

 R_B is a random number generated by B, V′ is B's verification key, and sig_B(ID_A, R_B, R_A) is B's signature over the message with fields ID_A, R_B, and R_A.
Including ID_A under B's signature is essential because it is B's way of asserting that A is the target of message 2. Including R_B and R_A in the information signed is also necessary to defeat man-in-the-middle attacks. A responds with a third message:

 A → B: cert(ID_A, V), R_B, sig_A(ID_B, R_B)    (6)

 A Caveat 

 Mutual authentication is necessary to establish identities. Identities are needed to decide on the access control policies to apply to a particular conversation, that is, to answer the question: Which information that the party knows is suitable for sharing in the context of this communications instance? Authentication, mutual or otherwise, has very limited utility if the communications channel is not protected against eavesdropping and forgeries.

 One of the most common mistakes made by Wi-Fi hotspot operators, for instance, is to require authentication but to disable eavesdropping and forgery protection for the subsequent Internet access via the hotspot. This is a mistake because anyone with a Wi-Fi radio transmitter can access the medium and hijack the session from a paying customer. Another way of saying this is that authentication is useful only when it is used in conjunction with a secure channel. This leads to the topic of session key establishment: the most common use of mutual authentication is to establish ephemeral session keys using the long-lived authentication keys. We will discuss session key establishment next.

 Key Establishment 

 Since it is generally infeasible for authentication to be meaningful without a subsequent secure channel, and since we know how to establish a secure channel across the Internet if we have a key, the next goal is to add key establishment to mutual authentication protocols. In this model, a mutual authentication protocol establishes an ephemeral session key as a side effect of its successful operation; this session key can then be used to construct all the encryption and authentication keys needed to establish a secure channel. All the session state, such as sequence numbers, replay windows, and key identifiers, can be initialized in conjunction with the completion of the mutual authentication protocol.

 It is usually feasible to add key establishment to an authentication protocol. Let's illustrate this with the symmetric key authentication protocol, based on a message authentication code, discussed previously. To extend the protocol to establish a key, we suppose instead that A and B share two long-lived keys K and K′. The first key K is a message authentication key as before. The second key K′ is a derivation key, the only function of which is to construct other keys within the context of the authentication protocol. This is accomplished as follows: After verifying message 2 (from line (2) previously), A computes a session key SK as

 SK ← prf(K′, R_A, R_B, ID_A, ID_B, length)    (7)

 Here prf is another cryptographic primitive called a pseudo random function. A pseudo random function is characterized by the properties that (a) its output is indistinguishable from random by any computationally limited adversary and (b) it is hard to invert, that is, given a fixed output O, it is infeasible for any computationally limited adversary to find an input I so that O ← prf(I).
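As a concrete illustration of (7), the sketch below uses HMAC-SHA256 as the pseudo random function and expands it to the requested length. Treating HMAC as a prf, the counter-based expansion, and the input encoding are assumptions made for illustration rather than the construction of any specific protocol.

# Illustrative sketch of the session key derivation in (7), using
# HMAC-SHA256 as the pseudo random function (prf). The counter-based
# expansion and the input encoding are assumptions for illustration.
import hmac, hashlib

def prf(k_prime: bytes, *fields: bytes, length: int) -> bytes:
    # Expand the prf output until 'length' bits are available.
    out = b""
    counter = 0
    while len(out) < length // 8:
        counter += 1
        data = bytes([counter]) + b"|".join(fields)
        out += hmac.new(k_prime, data, hashlib.sha256).digest()
    return out[: length // 8]

# SK <- prf(K', R_A, R_B, ID_A, ID_B, length), then split SK into an
# encryption key and a message authentication key.
K_prime = b"\x11" * 32                 # derivation key (example value)
R_A, R_B = b"\xaa" * 16, b"\xbb" * 16  # the exchanged random numbers
SK = prf(K_prime, R_A, R_B, b"A", b"B", length=512)
enc_key, mac_key = SK[:32], SK[32:]

Any primitive with the prf properties just described could be substituted here; HMAC is used only because it is widely available.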
The output SK of (7) is length bits long and can be split into two pieces to become encryption and message authentication keys. B generates the same SK when it receives message 3. An example of a pseudo random function is any block cipher, such as AES, in cipher-block chaining MAC mode. Cipher-block chaining MAC mode is just like cipher-block chaining mode, except that all but the last block of encrypted data is discarded.

 This construction meets the goal of creating an independent, ephemeral set of encryption and message authentication keys for each session. The construction creates independent keys because any two outputs of a prf appear to be independently selected at random to any adversary that is computationally limited. A knows that all the outputs are statistically distinct, because A picks the parameter R_A to the prf randomly for each instance of the protocol; similarly for B. And the communications instance identifiers R_A, R_B, along with A's and B's identities ID_A and ID_B, are interpreted as a "contract" to use SK only for this session instance and only between A and B.

 Public key versions of key establishment based on signatures and asymmetric encryption also exist, but we will close with one last public key variant based on a completely different asymmetric key principle called the Diffie-Hellman algorithm.

 The Diffie-Hellman algorithm is based on the discrete logarithm problem in finite groups. A group G is a mathematical object that is closed under an associative multiplication and has inverses for each element in G. The prototypical example of a finite group is the integers under addition modulo a prime number p.

 The idea is to begin with an element g of a finite group G that has a long period, that is, to form the sequence g^1 = g, g^2 = g · g, g^3 = g^2 · g, and so on. Since G is finite, this sequence must eventually repeat. It turns out that g = g^(n+1) for some integer n ≥ 1, and g^n = e is the group's neutral element. The element e has the property that h · e = e · h = h for every element h in G, and n is called the period of g. With such an element it is easy to compute powers of g, but it is hard to compute the logarithm of g^k: if g is chosen carefully, no polynomial time algorithm is known that can compute k from g^k. This property leads to a very elegant key agreement scheme:

 A → B: cert(ID_A, V), g^a
 B → A: cert(ID_B, V′), g^b, sig_B(g^a, g^b, ID_A)
 A → B: sig_A(g^b, g^a, ID_B)

 The session key is then computed as SK ← prf(K, g^a g^b, ID_A, ID_B), where K ← prf(0, g^ab). In this protocol, a is a random number chosen by A, b is a random number chosen by B, and 0 denotes the all-zeros key. Note that A sends g^a unprotected across the channel to B.

 The quantity g^ab is called the Diffie-Hellman key. Since B knows the random secret b, it can compute g^ab = (g^a)^b from A's public value g^a, and similarly A can compute g^ab from B's public value g^b. This construction poses no risk, because the discrete logarithm problem is intractable, so it is computationally infeasible for an attacker to determine a from g^a.
Similarly, B may send \n g b across the channel in the clear, because a third party \ncannot extract b from g b . B’s signature on message 2 pre-\nvents forgeries and assures that the response is from B. \nSince no method is known to compute g ab from g a and \n g b , only A and B will know the Diffie-Hellman key at \nthe end of the protocol. The step K ← prf (0, g ab ) extracts \nall the computational entropy from the Diffie-Hellman \nkey. The construction SK ← prf ( K , g a g b , ID A , ID B ) com-\nputes a session key, which can be split into encryption \nand message authentication keys as before. \n The major drawback of Diffie-Hellman is that it is \nsubject to man-in-the-middle attacks. The preceding pro-\ntocol uses signatures to remove this threat. B’s signature \nauthenticates B to a and also binds g a and g b together, \npreventing man-in-the-middle attacks. Similarly, A’s sig-\nnature on message 3 assures B that the session is with A. \n These examples illustrate that is practical to con-\nstruct session keys that meet the requirements for cryp-\ntography, if a preexisting long-lived relationship already \nexists. \n State Consistency \n We have already observed that the protocol specified \nin (1) through (3) achieves state consistency when the \nprotocol succeeds. Both parties agree on the identities \nand on the session instance. When a session key SK is \nderived, as in (7), both parties also agree on the key. \nDetermining which parties know which pieces of infor-\nmation after each protocol message is the essential tool \nfor a security analysis of this kind of protocol. The anal-\nysis of this protocol is typical for authentication and key \nestablishment protocols. \n 4. CONCLUSION \n This chapter examined how cryptography is used on the \nInternet to secure protocols. It reviewed the architec-\nture of the Internet protocol suite, as even what security \nmeans is a function of the underlying system architec-\nture. Next it reviewed the Dolev-Yao model, which \ndescribes the threats to which network communications \nare exposed. In particular, all levels of network protocols \nare completely exposed to eavesdropping and manipula-\ntion by an attacker, so using cryptography properly is a \nfirst-class requirement to derive any benefit from its use. \nWe learned that effective security mechanisms to protect \nsession-oriented and session establishment protocols are \ndifferent, although they can share many cryptographic \nprimitives. Cryptography can be very successful at pro-\ntecting messages on the Internet, but doing so requires \npreexisting, long-lived relationships. How to build \nsecure open communities is still an open problem; it is \nprobably intractable because a solution would imply the \nelimination of conflict between human beings who do \nnot know each other. \n" }, { "page_number": 151, "text": "This page intentionally left blank\n" }, { "page_number": 152, "text": "119\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n The Botnet Problem \n Xinyuan Wang \n George Mason University \n Daniel Ramsbrock \n George Mason University \n Chapter 8 \n A botnet is a collection of compromised Internet com-\nputers being controlled remotely by attackers for mali-\ncious and illegal purposes. The term comes from these \nprograms being called robots , or bots for short, due to \ntheir automated behavior. 
\n Bot software is highly evolved Internet malware, \nincorporating components of viruses, worms, spyware, \nand other malicious software. The person controlling \na botnet is known as the botmaster or bot-herder , and \nhe seeks to preserve his anonymity at all costs. Unlike \nprevious malware such as viruses and worms, the moti-\nvation for operating a botnet is financial. Botnets are \nextremely profitable, earning their operators hundreds \nof dollars per day. Botmasters can either rent botnet \nprocessing time to others or make direct profits by send-\ning spam, distributing spyware to aid in identity theft, \nand even extorting money from companies via the threat \nof a distributed denial-of-service (DDoS) attack. 1 It is no \nsurprise that many network security researchers believe \nthat botnets are one of the most pressing security threats \non the Internet today. \n Bots are at the center of the undernet economy. Almost every \nmajor crime problem on the Net can be traced to them. \n — Jeremy Linden, formerly of Arbor Networks 2 \n 1. INTRODUCTION \n You sit down at your computer in the morning, still \nsquinting from sleep. Your computer seems a little \nslower than usual, but you don’t think much of it. After \nchecking the news, you try to sign into eBay to check \non your auctions. Oddly enough, your password doesn’t \nseem to work. You try a few more times, thinking maybe \nyou changed it recently — but without success. \n Figuring you’ll look into it later, you sign into online \nbanking to pay some of those bills that have been piling \nup. Luckily, your favorite password still works there — so \nit must be a temporary problem with eBay. Unfortunately, \nyou are in for more bad news: The $0.00 balance on your \nchecking and savings accounts isn’t just a “ temporary \nproblem. ” Frantically clicking through the pages, you \nsee that your accounts have been completely cleaned out \nwith wire transfers to several foreign countries. \n You check your email, hoping to find some explanation \nof what is happening. Instead of answers, you have dozens \nof messages from “ network operations centers ” around \nthe world, informing you in no uncertain terms that your \ncomputer has been scanning, spamming, and sending out \nmassive amounts of traffic over the past 12 hours or so. \nShortly afterward, your Internet connection stops working \naltogether, and you receive a phone call from your serv-\nice provider. They are very sorry, they explain, but due to \nsomething called “ botnet activity ” on your computer, they \nhave temporarily disabled your account. Near panic now, \nyou demand an explanation from the network technician \non the other end. “ What exactly is a botnet? How could it \ncause so much damage overnight? ” \n Though this scenario might sound far-fetched, it is \nentirely possible; similar things have happened to thou-\nsands of people over the last few years. Once a single \nbot program is installed on a victim computer, the possi-\nbilities are nearly endless. For example, the attacker can \nget your online passwords, drain your bank accounts, \nand use your computer as a remote-controlled “ zombie ” \nto scan for other victims, send out spam emails, and even \nlaunch DDoS attacks. \n 1 T. Holz, “ A short visit to the bot zoo, ” IEEE Security and Privacy , \n3(3), 2005, pp. 76 – 79. \n 2 S. Berinato, “ Attack of the bots, ” WIRED , Issue 14.11, November \n2006, www.wired.com/wired/archive/14.11/botnet.html . 
\n" }, { "page_number": 153, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n120\n This chapter describes the botnet threat and the coun-\ntermeasures available to network security professionals. \nFirst, it provides an overview of botnets, including their \norigins, structure, and underlying motivation. Next, the \nchapter describes existing methods for defending com-\nputers and networks against botnets. Finally, it addresses \nthe most important aspect of the botnet problem: how to \nidentify and track the botmaster in order to eliminate the \nroot cause of the botnet problem. \n 2. BOTNET OVERVIEW \n Bots and botnets are the latest trend in the evolution of \nInternet malware. Their black-hat developers have built \non the experience gathered from decades of viruses, \nworms, Trojan horses, and other malware to create \nhighly sophisticated software that is difficult to detect \nand remove. Typical botnets have several hundred to \nseveral thousand members, though some botnets have \nbeen detected with over 1.5 million members. 3 As of \nJanuary 2007, Google’s Vinton Cerf estimated that up to \n150 million computers (about 25% of all Internet hosts) \ncould be infected with bot software. 4 \n Origins of Botnets \n Before botnets, the main motivation for Internet attacks \nwas fame and notoriety. By design, these attacks were \nnoisy and easily detected. High-profile examples are \nthe Melissa email worm (1999), ILOVEYOU (2000), \nCode Red (2001), Slammer (2003), and Sasser (2004). 5 , 6 \nThough the impact of these viruses and worms was \nsevere, the damage was relatively short-lived and con-\nsisted mainly of the cost of the outage plus man-hours \nrequired for cleanup. Once the infected files had been \nremoved from the victim computers and the vulnerabil-\nity patched, the attackers no longer had any control. \n By contrast, botnets are built on the very premise \nof extending the attacker’s control over his victims. To \nachieve long-term control, a bot must be stealthy during \nevery part of its lifecycle, unlike its predecessors. 2 As a \nresult, most bots have a relatively small network foot-\nprint and do not create much traffic during typical opera-\ntion. Once a bot is in place, the only required traffic \nconsists of incoming commands and outgoing responses, \nconstituting the botnet’s command and control (C & C) \nchannel. Therefore, the scenario at the beginning of the \nchapter is not typical of all botnets. Such an obvious \nattack points to either a brazen or inexperienced botmas-\nter, and there are plenty of them. \n The concept of a remote-controlled computer bot \noriginates from Internet Relay Chat (IRC), where benev-\nolent bots were first introduced to help with repetitive \nadministrative tasks such as channel and nickname man-\nagement. 1,2 One of the first implementations of such \nan IRC bot was Eggdrop, originally developed in 1993 \nand still one of the most popular IRC bots. 6, 7 Over time, \nattackers realized that IRC was in many ways a per-\nfect medium for large-scale botnet C & C. It provides an \ninstantaneous one-to-many communications channel and \ncan support very large numbers of concurrent users. 8 \n Botnet Topologies and Protocols \n In addition to the traditional IRC-based botnets, several \nother protocols and topologies have emerged recently. \nThe two main botnet topologies are centralized and \npeer-to-peer (P2P). 
Among centralized botnets, IRC \nis still the predominant protocol, 9 , 10 , 11 but this trend is \ndecreasing and several recent bots have used HTTP for \ntheir C & C channels. 9,11 Among P2P botnets, many dif-\nferent protocols exist, but the general idea is to use a \ndecentralized collection of peers and thus eliminate the \nsingle point of failure found in centralized botnets. P2P \nis becoming the most popular botnet topology because it \nhas many advantages over centralized botnets. 12 \n 3 Joris Evers, “ ‘ Bot herders ’ may have controlled 1.5 million PCs, ” \n http://news. cnet.com/Bot-herders-may-have-controlled-1.5-million-\nPCs/2100-7350_3-5906896.html \n 4 A. Greenberg, “ Spam crackdown ‘ a drop in the bucket, ’ ” Forbes , \nJune 14, 2007, www.forbes.com/security/2007/06/14/spam-arrest-fbi-\ntech-security-cx_ag_0614spam.html . \n 5 Wikipedia contributors, “ Timeline of notable computer viruses and \nworms, ” http://en.wikipedia.org/w/index.php?title \u0003 Timeline_of_nota-\nble_computer_viruses_and_worms & oldid \u0003 207972502 (accessed May \n3, 2008). \n 6 P. Barford and V. Yegneswaran, “ An inside look at botnets, ” Special \nWorkshop on Malware Detection, Advances in Information Security, \nSpringer Verlag, 2006. \n 7 Wikipedia contributors, “ Eggdrop, ” http://en.wikipedia.org/w/index.\nphp?title \u0003 Eggdrop & oldid \u0003 207430332 (accessed May 3, 2008). \n 8 E. Cooke, F. Jahanian, and D. McPherson, “ The zombie roundup: \nUnderstanding, detecting, and disturbing botnets, ” in Proc. 1st \nWorkshop on Steps to Reducing Unwanted Traffi c on the Internet \n(SRUTI), Cambridge, July 7, 2005, pp. 39 – 44. \n 9 N. Ianelli and A. Hackworth, “ Botnets as a vehicle for online crime, ” \nin Proc. 18th Annual Forum of Incident Response and Security Teams \n(FIRST), Baltimore, June 25 – 30, 2006. \n 10 M. Rajab, J. Zarfoss, F. Monrose, and A. Terzis, “ A multifaceted \napproach to understanding the botnet phenomenon, ” in Proc. of the 6th \nACM SIGCOM Internet Measurement Conference, Rio de Janeiro, \nBrazil, October 2006. \n 11 Trend Micro, “ Taxonomy of botnet threats, ” Trend Micro \nEnterprise Security Library, November 2006. \n 12 Symantec, “ Symantec internet security threat report, trends for \nJuly – December 2007, ” Volume XIII, April 2008. \n" }, { "page_number": 154, "text": "Chapter | 8 The Botnet Problem\n121\n Centralized \n Centralized botnets use a single entity (a host or a small \ncollection of hosts) to manage all bot members. The \nadvantage of a centralized topology is that it is fairly \neasy to implement and produces little overhead. A major \ndisadvantage is that the entire botnet becomes useless if \nthe central entity is removed, since bots will attempt to \nconnect to nonexistent servers. To provide redundancy \nagainst this problem, many modern botnets rely on \ndynamic DNS services and/or fast-flux DNS techniques. \nIn a fast-flux configuration, hundreds or thousands of \ncompromised hosts are used as proxies to hide the iden-\ntities of the true C & C servers. These hosts constantly \nalternate in a round-robin DNS configuration to resolve \none hostname to many different IP addresses (none of \nwhich are the true IPs of C & C servers). Only the proxies \nknow the true C & C servers, forwarding all traffic from \nthe bots to these servers. 
13 \n As we’ve described, the IRC protocol is an ideal \ncandidate for centralized botnet control, and it remains \nthe most popular among in-the-wild botmasters, 9,10,11 \nalthough it appears that will not be true much longer. \nPopular examples of IRC bots are Agobot, Spybot, and \nSdbot. 13 Variants of these three families make up most \nactive botnets today. By its nature, IRC is centralized \nand allows nearly instant communication among large \nbotnets. One of the major disadvantages is that IRC traf-\nfic is not very common on the Internet, especially in an \nenterprise setting. As a result, standard IRC traffic can \nbe easily detected, filtered, or blocked. For this reason, \nsome botmasters run their IRC servers on nonstandard \nports. Some even use customized IRC implementations, \nreplacing easily recognized commands such as JOIN and \nPRIVMSG with other text. Despite these countermeas-\nures, IRC still tends to stick out from the regular Web \nand email traffic due to uncommon port numbers. \n Recently, botmasters have started using HTTP to man-\nage their centralized botnets. The advantage of using reg-\nular Web traffic for C & C is that it must be allowed to pass \nthrough virtually all firewalls, since HTTP comprises a \nmajority of Internet traffic. Even closed firewalls that only \nprovide Web access (via a proxy service, for example) \nwill allow HTTP traffic to pass. It is possible to inspect \nthe content and attempt to filter out malicious C & C traffic, \nbut this is not feasible due to the large number of exist-\ning bots and variants. If botmasters use HTTPS (HTTP \nencrypted using SSL/TLS), then even content inspection \nbecomes useless and all traffic must be allowed to pass \nthrough the firewall. However, a disadvantage of HTTP \nis that it does not provide the instant communication and \nbuilt-in, scale-up properties of IRC: Bots must manually \npoll the central server at specific intervals. With large \nbotnets, these intervals must be large enough and distrib-\nuted well to avoid overloading the server with simultane-\nous requests. Examples of HTTP bots are Bobax 14 ,11 and \nRustock, with Rustock using a custom encryption scheme \non top of HTTP to conceal its C & C traffic. 15 \n Peer-to-Peer \n As defenses against centralized botnets have become \nmore effective, more and more botmasters are explor-\ning ways to avoid the pitfalls of relying on a central-\nized architecture and therefore a single point of failure. \nSymantec reports a “ steady decrease ” in centralized IRC \nbotnets and predicts that botmasters are now “ acceler-\nating their shift … to newer, stealthier control methods, \nusing protocols such as … peer-to-peer. ” 12 In the P2P \nmodel, no centralized server exists, and all member \nnodes are equally responsible for passing on traffic. “ If \ndone properly, [P2P] makes it near impossible to shut \ndown the botnet as a whole. It also provides anonym-\nity to the [botmaster], because they can appear as just \nanother node in the network, ” says security researcher \nJoe Stewart of Lurhq. 16 There are many protocols avail-\nable for P2P networks, each differing in the way nodes \nfirst join the network and the role they later play in pass-\ning traffic along. Some popular protocols are BitTorrent, \nWASTE, and Kademlia. 13 Many of these protocols were \nfirst developed for benign uses, such as P2P file sharing. \n One of the first malicious P2P bots was Sinit, released \nin September 2003. 
It uses random scanning to find \npeers, rather than relying on one of the established P2P \nbootstrap protocols. 13 As a result, Sinit often has trouble \nfinding peers, which results in overall poor connectiv-\nity. 17 Due to the large amount of scanning traffic, this bot \nis easily detected by intrusion detection systems (IDSs). 18 \n 13 J. Grizzard, V. Sharma, C. Nunnery, B. Kang, and D. Dagon, \n “ Peer-to-peer botnets: Overview and case study, ” in Proc. First \nWorkshop on Hot Topics in Understanding Botnets (HotBots), \nCambridge, April 2007. \n 14 J. Stewart, “ Bobax Trojan analysis, ” SecureWorks, May 17, 2004, \n http://secureworks.com/research/threats/bobax . \n 15 K. Chiang and L. Lloyd, “ A case study of the Rustock Rootkit and \nSpam Bot, ” in Proc. First Workshop on Hot Topics in Understanding \nBotnets (HotBots), Cambridge, April 10, 2007. \n 16 R. Lemos, “ Bot software looks to improve peerage, ” SecurityFocus , \nMay 2, 2006, www.securityfocus.com/news/11390/ . \n 17 P. Wang, S. Sparks, and C. Zou, “ An advanced hybrid peer-to-\npeer botnet, ” in Proc. First Workshop on Hot Topics in Understanding \nBotnets (HotBots), Cambridge, April 10, 2007. \n 18 J. Stewart, “ Sinit P2P Trojan analysis, ” SecureWorks , December 8, \n2004, www.secureworks.com/research/threats/sinit/ . \n" }, { "page_number": 155, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n122\n Another advanced bot using the P2P approach is \nNugache, released in April 2006. 13 It initially connects \nto a list of 22 predefined peers to join the P2P network, \nthen downloads a list of active peer nodes from there. This \nimplies that if the 22 “ seed ” hosts can be shut down, no \nnew bots will be able to join the network, but existing \nnodes can still function. 19 Nugache encrypts all communi-\ncations, making it harder for IDSs to detect and increasing \nthe difficulty of manual analysis by researchers. 16 Nugache \nis seen as one of the first more sophisticated P2P bots, pav-\ning the way for future enhancements by botnet designers. \n The most famous P2P bot so far is Peacomm, more \ncommonly known as the Storm Worm. It started spread-\ning in January 2007 and continues to have a strong \npresence. 20 To communicate with peers, it uses the \nOvernet protocol, based on the Kademlia P2P protocol. \nFor bootstrapping, it uses a fixed list of peers (146 in \none observed instance) distributed along with the bot. \nOnce the bot has joined Overnet, the botmaster can eas-\nily update the binary and add components to extend its \nfunctionality. Often the bot is configured to automati-\ncally retrieve updates and additional components, such \nas an SMTP server for spamming, an email address \nharvesting tool, and a DoS module. Like Nugache, all \nof Peacomm’s communications are encrypted, making \nit extremely hard to observe C & C traffic or inject com-\nmands appearing to come from the botmaster. Unlike \ncentralized botnets relying on a dynamic DNS provider, \nPeacomm uses its own P2P network as a distributed \nDNS system that has no single point of failure. The fixed \nlist of peers is a potential weakness, although it would be \nchallenging to take all these nodes offline. Additionally, \nthe attackers can always set up new nodes and include \nan updated peer list with the bot, resulting in an “ arms \nrace ” to shut down malicious nodes. 13 \n 3. TYPICAL BOT LIFE CYCLE \n Regardless of the topology being used, the typical life \ncycle of a bot is similar: \n 1. 
Creation. First, the botmaster develops his bot soft-\nware, often reusing existing code and adding custom \nfeatures. He might use a test network to perform dry \nruns before deploying the bot in the wild. \n 2. Infection. There are many possibilities for infecting \nvictim computers, including the following four. Once \na victim machine becomes infected with a bot, it is \nknown as a zombie . \n ● Software vulnerabilities. The attacker exploits \na vulnerability in a running service to automati-\ncally gain access and install his software without \nany user interaction. This was the method used \nby most worms, including the infamous Code \nRed and Sasser worms. 5 \n ● Drive-by download . The attacker hosts his file on \na Web server and entices people to visit the site. \nWhen the user loads a certain page, the software \nis automatically installed without user interac-\ntion, usually by exploiting browser bugs, miscon-\nfigurations, or unsecured ActiveX controls. \n ● Trojan horse . The attacker bundles his malicious \nsoftware with seemingly benign and useful soft-\nware, such as screen savers, antivirus scanners, or \ngames. The user is fully aware of the installation \nprocess, but he does not know about the hidden \nbot functionality. \n ● Email attachment : Although this method has \nbecome less popular lately due to rising user \nawareness, it is still around. The attacker sends an \nattachment that will automatically install the bot \nsoftware when the user opens it, usually without \nany interaction. This was the primary infection \nvector of the ILOVEYOU email worm from \n2000. 5 The recent Storm Worm successfully used \nenticing email messages with executable attach-\nments to lure its victims. 20 \n 3. Rallying . After infection, the bot starts up for the \nfirst time and attempts to contact its C & C server(s) \nin a process known as rallying . In a centralized \nbotnet, this could be an IRC or HTTP server, for \nexample. In a P2P botnet, the bots perform the boot-\nstrapping protocol required to locate other peers and \njoin the network. Most bots are very fault-tolerant, \nhaving multiple lists of backup servers to attempt if \nthe primary ones become unavailable. Some C & C \nservers are configured to immediately send some \ninitial commands to the bot (without botmaster inter-\nvention). In an IRC botnet, this is typically done by \nincluding the commands in the C & C channel’s topic. \n 4. Waiting. Having joined the C & C network, the bot \nwaits for commands from the botmaster. During this \ntime, very little (if any) traffic passes between the \nvictim and the C & C servers. In an IRC botnet, this \ntraffic would mainly consist of periodic keep-alive \nmessages from the server. \n 19 R. Schoof and Ralph Koning, “ Detecting peer-to-peer botnets, ” \nunpublished paper, University of Amsterdam, February 4, 2007, http://\nstaff.science.uva.nl/~delaat/sne-2006-2007/p17/report.pdf . \n 20 Wikipedia contributors, “ Storm worm, ” http://en.wikipedia.org/w/\nindex.php?title \u0003 Storm_Worm & oldid \u0003 207916428 accessed May 4, \n2008). \n" }, { "page_number": 156, "text": "Chapter | 8 The Botnet Problem\n123\n 5. Executing. Once the bot receives a command from \nthe botmaster, it executes it and returns any results to \nthe botmaster via the C & C network. The supported \ncommands are only limited by the botmaster’s imagi-\nnation and technical skills. 
Common commands are \nin line with the major uses of botnets: scanning for \nnew victims, sending spam, sending DoS floods, set-\nting up traffic redirection, and many more. \n Following execution of a command, the bot returns \nto the waiting state to await further instructions. If the \nvictim computer is rebooted or loses its connection to \nthe C & C network, the bot resumes in the rallying state. \nAssuming it can reach its C & C network, it will then con-\ntinue in the waiting state until further commands arrive. \n Figure 8.1 shows the detailed infection sequence in a \ntypical IRC-based botnet: \n 1. An existing botnet member computer launches a \nscan, then discovers and exploits a vulnerable host. \n 2. Following the exploit, the vulnerable host is made to \ndownload and install a copy of the bot software, con-\nstituting an infection. \n 3. When the bot starts up on the vulnerable host, it \nenters the rallying state: It performs a DNS lookup to \ndetermine the current IP of its C & C server. \n 4. The new bot joins the botnet’s IRC channel on the \nC & C server for the first time, now in the waiting state. \n 5. The botmaster sends his commands to the C & C \nserver on the botnet’s IRC channel. \n 6. The C & C server forwards the commands to all bots, \nwhich now enter the executing state. \n 4. THE BOTNET BUSINESS MODEL \n Unlike the viruses and worms of the past, botnets are \nmotivated by financial profit. Organized crime groups \noften use them as a source of income, either by hiring \n “ freelance ” botmasters or by having their own members \ncreate botnets. As a result, network security professionals \nare up against motivated, well-financed organizations that \ncan often hire some of the best minds in computers and \nnetwork security. This is especially true in countries such \nas Russia, Romania, and other Eastern European nations \nwhere there is an abundance of IT talent at the high \nschool and university level but legitimate IT job prospects \nare very limited. In such an environment, criminal organi-\nzations easily recruit recent graduates by offering far bet-\nter opportunities than the legitimate job market. 21 , 22 , 23 , 24 \nOne infamous example of such a crime organization \nis the Russian Business Network (RBN), a Russian \nInternet service provider (ISP) that openly supports \ncriminal activity. 21, 25 They are responsible for the Storm \nWorm (Peacomm), 25 the March 2007 DDoS attacks on \nEstonia, 25 and a high-profile attack on the Bank of India \nin August 2007, 26 along with many other attacks. \n It might not be immediately obvious how a collec-\ntion of computers can be used to cause havoc and pro-\nduce large profits. The main point is that botnets provide \n anonymous and distributed access to the Internet. The \nanonymity makes the attackers untraceable, and a botnet’s \ndistributed nature makes it extremely hard to shut down. \nAs a result, botnets are perfect vehicles for criminal activ-\nities on the Internet. Some of the main profit-producing \nmethods are explained here, 27 but criminals are always \ndevising new and creative ways to profit from botnets: \n ● Spam. Spammers send millions of emails advertising \nphony or overpriced products, phishing for financial \n FIGURE 8.1 Infection sequence of a typical centralized IRC-based \nbotnet. \n 21 D. Bizeul, “ Russian business network study, ” unpublished paper, \nNovember 20, 2007, www.bizeul.org/fi les/RBN_study.pdf . \n 22 A. E. 
Cha, “ Internet dreams turn to crime, ” Washington Post , May \n18, 2003, www.washingtonpost.com/ac2/wp-dyn/A2619-2003May17 . \n 23 B. I. Koerner, “ From Russia with l ø pht, ” Legal Affairs , May – June \n2002, http://legalaffairs.org/issues/May-June-2002/feature_koerner_may\njun2002.msp . \n 24 M. Delio, “ Inside Russia’s hacking culture, ” WIRED , March 12, \n2001, www.wired.com/culture/lifestyle/news/2001/03/42346 . \n 25 Wikipedia contributors, “ Russian business network, ” http://\nen.wikipedia.org/w/index.php?title \u0003 Russian_Business_Network & \noldid \u0003 209665215 (accessed May 3, 2008). \n 26 L. Tung, “ Infamous Russian ISP behind Bank of India hack ” ZDNet , \nSeptember 4, 2007, http://news.zdnet.co.uk/security/0,1000000189,\n39289057,00.htm?r \u0003 2 . \n 27 P. B ä cher, T. Holz, M. K ö tter, and G. Wicherski, “ Know your enemy: \nTracking botnets, ” March 13, 2005, see www.honeynet.org/papers/\nbots/ . \n" }, { "page_number": 157, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n124\ndata and login information, or running advance-fee \nschemes such as the Nigerian 419 scam. 28 Even if \nonly a small percentage of recipients respond to this \nspam, the payoff is considerable for the spammer. \nIt is estimated that up to 90% of all spam originates \nfrom botnets. 2 \n ● DDoS and extortion. Having amassed a large \nnumber of bots, the attacker contacts an organization \nand threatens to launch a massive DDoS attack, \nshutting down its Web site for several hours or \neven days. Another variation on this method is \nto find vulnerabilities, use them steal financial or \nconfidential data, and then demand money for the \n “ safe return ” of the data and to keep it from being \ncirculated in the underground economy. 23 Often, \ncompanies would rather pay off the attacker to avoid \ncostly downtime, lost sales, and the lasting damage \nto its reputation that would result from a DDoS \nattack or data breach. \n ● Identity theft. Once a bot has a foothold on a \nvictim’s machine, it usually has complete control. \nFor example, the attacker can install keyloggers \nto record login and password information, search \nthe hard drive for valuable data, or alter the DNS \nconfiguration to redirect victims to look-alike \nWeb sites and collect personal information, known \nas pharming. 29 Using the harvested personal \ninformation, the attacker can make fraudulent credit \ncard charges, clean out the victim’s bank account, \nand apply for credit in the victim’s name, among \nmany other things. \n ● Click fraud. In this scenario, bots are used to \nrepeatedly click Web advertising links, generating \nper-click revenue for the attacker. 2 This represents \nfraud because only the clicks of human users with \na legitimate interest are valuable to advertisers. The \nbots will not buy the product or service as a result of \nclicking the advertisement. \n These illegal activities are extremely profitable. \nFor example, a 2006 study by the Germany Honeynet \nProject estimated that a botmaster can make about $430 \nper day just from per-install advertising software. 30 A \n20-year-old California botmaster indicted in February \n2006 earned $100,000 in advertising revenue from his \nbotnet operations. 31 However, both of these cases pale in \ncomparison to the estimated $20 million worth of dam-\nage caused by an international ring of computer crimi-\nnals known as the A-Team. 
32 \n Due to these very profitable uses of botnets, many bot-\nmasters make money simply by creating botnets and then \nrenting out processing power and bandwidth to spammers, \nextortionists, and identity thieves. Despite a recent string \nof high-profile botnet arrests, these are merely a drop in \nthe bucket. 4 Overall, botmasters still have a fairly low \nchance of getting caught due to a lack of effective trace-\nback techniques. The relatively low risk combined with \nhigh yield makes the botnet business very appealing as a \nfundraising method for criminal enterprises, especially in \ncountries with weak computer crime enforcement. \n 5. BOTNET DEFENSE \n When botnets emerged, the response was similar to pre-\nvious Internet malware: Antivirus vendors created sig-\nnatures and removal techniques for each new instance \nof the bot. This approach initially worked well at the \nhost level, but researchers soon started exploring more \nadvanced methods for eliminating more than one bot at a \ntime. After all, a botnet with tens of thousands of mem-\nbers would be very tedious to combat one bot at a time. \n This section describes the current defenses against \ncentralized botnets, moving from the host level to the \nnetwork level, then to the C & C server, and finally to the \nbotmaster himself. \n Detecting and Removing Individual Bots \n Removing individual bots does not usually have a notice-\nable impact on the overall botnet, but it is a crucial first \nstep in botnet defense. The basic antivirus approach using \nsignature-based detection is still effective with many \nbots, but some are starting to use polymorphism, which \ncreates unique instances of the bot code and evades sig-\nnature-based detection. For example, Agobot is known to \nhave thousands of variants, and it includes built-in sup-\nport for polymorphism to change its signature at will. 33 \n 28 Wikipedia contributors, “ E-mail spam, ” http://en.wikipedia.org/w/\nindex.php?title \u0003 E-mail_spam & oldid \u0003 209902571 (accessed May 3, \n2008). \n 29 Wikipedia contributors, “ Pharming, ” http://en.wikipedia.org/w/\nindex.php?title \u0003 Pharming & oldid \u0003 196469141 (accessed May 3, 2008). \n 30 R. Naraine, “ Money bots: Hackers cash in on hijacked PCs, ” eWeek , \nSeptember 8, 2006, www.eweek.com/article2/0,1759,2013924,00.asp . \n 31 P. F. Roberts, “ DOJ indicts hacker for hospital botnet \nattack, ” eWeek , February 10, 2006, www.eweek.com/article2/0,\n1759,1925456,00.asp . \n 32 T. Claburn, “ New Zealander ‘ AKILL ’ pleads guilty to botnet \ncharges, ” Information Week , April 3, 2008, www.informationweek.\ncom/news/security/cybercrime/showArticle.jhtml?articleID \u0003 2070\n01573 . \n 33 Wikipedia contributors, “ Agobot (computer worm), ” http://\nen.wikipedia.org/w/index.php?title \u0003 Agobot_%28computer_worm%\n29 & oldid \u0003 201957526 (accessed May 3, 2008). \n" }, { "page_number": 158, "text": "Chapter | 8 The Botnet Problem\n125\n To deal with these more sophisticated bots and all \nother polymorphic malware, detection must be done \nusing behavioral analysis and heuristics. Researchers \nStinson and Mitchell have developed a taint-based \napproach called BotSwat that marks all data originating \nfrom the network. If this data is used as input for a sys-\ntem call, there is a high probability that it is bot-related \nbehavior, since user input typically comes from the key-\nboard or mouse on most end-user systems. 
34 \n Detecting C & C Traffic \n To mitigate the botnet problem on a larger scale, \nresearchers turned their attention to network-based \ndetection of the botnet’s C & C traffic. This method \nallows organizations or even ISPs to detect the presence \nof bots on their entire network, rather than having to \ncheck each machine individually. \n One approach is to examine network traffic for \ncertain known patterns that occur in botnet C & C traf-\nfic. This is, in effect, a network-deployed version of \nsignature-based detection, where signatures have to \nbe collected for each bot before detection is possible. \nResearchers Goebel and Holz implemented this method \nin their Rishi tool, which evaluates IRC nicknames \nfor likely botnet membership based on a list of known \nbotnet naming schemes. As with all signature-based \napproaches, it often leads to an “ arms race ” where the \nattackers frequently change their malware and the net-\nwork security community tries to keep up by creating \nsignatures for each new instance. 35 \n Rather than relying on a limited set of signatures, \nit is also possible to use the IDS technique of anomaly \ndetection to identify unencrypted IRC botnet traffic. This \nmethod was successfully implemented by researchers \nBinkley and Singh at Portland State University, and as \na result they reported a significant increase in bot detec-\ntion on the university network. 36 \n Another IDS-based detection technique called \nBotHunter was proposed by Gu et al. in 2007. Their \napproach is based on IDS dialog correlation tech-\nniques: It deploys three separate network monitors at \nthe network perimeter, each detecting a specific stage \nof bot infection. By correlating these events, BotHunter \ncan reconstruct the traffic dialog between the infected \nmachine and the outside Internet. From this dialog, the \nengine determines whether a bot infection has taken \nplace with a high accuracy rate. 37 \n Moving beyond the scope of a single network/organi-\nzation, traffic from centralized botnets can be detected at \nthe ISP level based only on transport layer flow statistics. \nThis approach was developed by Karasaridis et al., and it \nsolves many of the problems of packet-level inspection. \nIt is passive, highly scalable, and only uses flow sum-\nmary data (limiting privacy issues). Additionally, it can \ndetermine the size of a botnet without joining and can \neven detect botnets using encrypted C & C. The approach \nexploits the underlying principle of centralized botnets: \nEach bot has to contact the C & C server, producing \ndetectable patterns in network traffic flows. 38 \n Beyond the ISP level, a heuristic method for Internet-\nwide bot detection was proposed by Ramachandran et al. \nin 2006. In this scheme, query patterns of DNS black-\nhole lists (DNSBLs) are used to create a list of possible \nbot-infected IP addresses. It relies on the fact that botmas-\nters need to periodically check whether their spam-send-\ning bots have been added to a DNSBL and have therefore \nbecome useless. The query patterns of botmasters to a \nDNSBL are very different from those of legitimate mail \nservers, allowing detection. 39 One major limitation is that \nthis approach focuses mainly on the sending of spam. It \nwould most likely not detect bots engaged in other ille-\ngal activities, such as DDoS attacks or click fraud, since \nthese do not require DNSBL lookups. 
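To make the signature style of detection concrete, the following Python sketch scores IRC nicknames against a handful of naming-scheme patterns, in the spirit of what Rishi does. The patterns, weights, and threshold shown here are invented purely for illustration; Rishi's actual signature list and scoring logic are more extensive and are not reproduced here.

import re

# Hypothetical examples of "known botnet naming schemes"; each rule pairs
# a compiled regex with a weight added to the nickname's suspicion score.
SIGNATURE_RULES = [
    (re.compile(r"^\[\d+\|[A-Z]{2,3}\|"), 3),            # e.g. [0|DEU|88341]
    (re.compile(r"^[A-Z]{3}\|?\d{5,}$"), 3),             # e.g. USA|3259840
    (re.compile(r"(XP|W2K|VISTA|2K3)\d{3,}", re.I), 2),  # OS tag plus counter
    (re.compile(r"\d{6,}"), 1),                          # long digit run
]

def score_nickname(nick: str) -> int:
    """Return a heuristic suspicion score for one IRC nickname."""
    return sum(weight for pattern, weight in SIGNATURE_RULES
               if pattern.search(nick))

def flag_suspicious(nicks, threshold=3):
    """Yield (nick, score) for nicknames at or above the threshold."""
    for nick in nicks:
        score = score_nickname(nick)
        if score >= threshold:
            yield nick, score

if __name__ == "__main__":
    observed = ["[0|DEU|88341]", "USA|3259840", "alice_", "XP4417candy", "bob"]
    for nick, score in flag_suspicious(observed):
        print(f"possible bot nickname: {nick} (score {score})")

The same arms-race caveat applies to this sketch as to any signature approach: as soon as botmasters change their naming scheme, the pattern list has to be updated.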
\n Detecting and Neutralizing \nthe C & C Servers \n Though detecting C & C traffic and eliminating all bots \non a given local network is a step in the right direction, \nit still doesn’t allow the takedown of an entire botnet at \nonce. To achieve this goal in a centralized botnet, access \n 34 E. Stinson and J. Mitchell, “ Characterizing bots ’ remote con-\ntrol behavior, ” in Proc. 4th International Conference on Detection \nof Intrusions & Malware and Vulnerability Assessment (DIMVA), \nLucerne, Switzerland, July 12 – 13, 2007. \n 35 J. Goebel and T. Holz, “ Rishi: Identify bot contaminated hosts by \nIRC nickname evaluation, ” in Proc. First Workshop on Hot Topics in \nUnderstanding Botnets (HotBots), Cambridge, April 10, 2007. \n 36 J. Binkley and S. Singh, “ An algorithm for anomaly-based botnet \ndetection, ” in Proc. 2nd Workshop on Steps to Reducing Unwanted \nTraffi c on the Internet (SRUTI), San Jose, July 7, 2006, pp. 43 – 48. \n 37 G. Gu, P. Porras, V. Yegneswaran, M. Fong, and W. Lee, \n “ BotHunter: Detecting malware infection through IDS-driven dialog \ncorrelation, ” in Proc. 16th USENIX Security Symposium, Boston, \nAugust 2007. \n 38 A. Karasaridis, B. Rexroad, and D. Hoeflin, “ Wide-scale botnet \ndetection and characterization, ” in Proc. First Workshop on Hot Topics \nin Understanding Botnets (HotBots), Cambridge, MA, April 10, \n2007. \n 39 A. Ramachandran, N. Feamster, and D. Dagon, “ Revealing bot-\nnet membership using DNSBL counter-intelligence, ” in Proc. 2nd \nWorkshop on Steps to Reducing Unwanted Traffi c on the Internet \n(SRUTI), San Jose, CA, July 7, 2006, pp. 49 – 54. \n" }, { "page_number": 159, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n126\nto the C & C servers must be removed. This approach \nassumes that the C & C servers consist of only a few \nhosts that are accessed directly. If hundreds or thousands \nof hosts are used in a fast-flux proxy configuration, it \nbecomes extremely challenging to locate and neutralize \nthe true C & C servers. \n In work similar to BotHunter, researchers Gu et al. \ndeveloped BotSniffer in 2008. This approach repre-\nsents several improvements, notably that BotSniffer can \nhandle encrypted traffic, since it no longer relies only \non content inspection to correlate messages. A major \nadvantage of this approach is that it requires no advance \nknowledge of the bot’s signature or the identity of C & C \nservers. By analyzing network traces, BotSniffer detects \nthe spatial-temporal correlation among C & C traffic \nbelonging to the same botnet. It can therefore detect both \nthe bot members and the C & C server(s) with a low false \npositive rate. 40 \n Most of the approaches mentioned under “ Detecting \nC & C Traffic ” can also be used to detect the C & C serv-\ners, with the exception of the DNSBL approach. 39 \nHowever, their focus is mainly on detection and removal \nof individual bots. None of these approaches mentions \ntargeting the C & C servers to eliminate an entire botnet. \n One of the few projects that has explored the feasi-\nbility of C & C server takedown is the work of Freiling \net al. in 2005. 41 Although their focus is on DDoS preven-\ntion, they describe the method that is generally used in \nthe wild to remove C & C servers when they are detected. \nFirst, the bot binary is either reverse-engineered or run \nin a sandbox to observe its behavior, specifically the \nhostnames of the C & C servers. 
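One simple way to harvest those hostnames is to watch the sample's DNS activity during the sandbox run. The sketch below is only an illustration of that idea, not the procedure from the work cited above; it assumes the sandbox writes each lookup to a plain text log as "<timestamp> <hostname> <resolved IP>" (a hypothetical format, so the parsing would be adapted to the sandbox in use) and reports hostnames the sample keeps re-resolving, along with how many distinct addresses each returned, since rendezvous names with churning addresses are typical of C & C hosting.

from collections import defaultdict

def candidate_cnc_hosts(log_lines, min_queries=3):
    """Pick likely C&C hostnames out of a sandbox DNS log.

    Each log line is assumed to look like:
        <timestamp> <queried-hostname> <resolved-ip>
    """
    queries = defaultdict(int)    # hostname -> number of lookups
    addresses = defaultdict(set)  # hostname -> distinct IPs seen

    for line in log_lines:
        parts = line.split()
        if len(parts) != 3:
            continue              # skip malformed lines
        _ts, host, ip = parts
        queries[host] += 1
        addresses[host].add(ip)

    # Names the sample keeps re-resolving are candidate rendezvous points;
    # many distinct IPs behind one name also hints at fast-flux hosting.
    report = [(host, count, len(addresses[host]))
              for host, count in queries.items() if count >= min_queries]
    return sorted(report, key=lambda r: (-r[1], -r[2]))

if __name__ == "__main__":
    sample_log = [
        "1715 cc.example-dyndns.net 203.0.113.10",
        "1716 cc.example-dyndns.net 198.51.100.7",
        "1717 cc.example-dyndns.net 192.0.2.44",
        "1718 timeserver.example.org 192.0.2.1",
    ]
    for host, hits, ips in candidate_cnc_hosts(sample_log):
        print(f"{host}: {hits} lookups, {ips} distinct IPs")

An analyst would still confirm the candidates manually before contacting a provider.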
Using this information, \nthe proper dynamic DNS providers can be notified to \nremove the DNS entries for the C & C servers, preventing \nany bots from contacting them and thus severing contact \nbetween the botmaster and his botnet. Dagon et al. used \na similar approach in 2006 to obtain experiment data for \nmodeling botnet propagation, redirecting the victim’s \nconnections from the true C & C server to their sinkhole \nhost. 42 Even though effective, the manual analysis and \ncontact with the DNS operator is a slow process. It can \ntake up to several days until all C & C servers are located \nand neutralized. However, this process is essentially the \nbest available approach for shutting down entire botnets \nin the wild. As we mentioned, this technique becomes \nmuch harder when fast-flux proxies are used to conceal \nthe real C & C servers or a P2P topology is in place. \n Attacking Encrypted C & C Channels \n Though some of the approaches can detect encrypted \nC & C traffic, the presence of encryption makes bot-\nnet research and analysis much harder. The first step in \ndealing with these advanced botnets is to penetrate the \nencryption that protects the C & C channels. \n A popular approach for adding encryption to an \nexisting protocol is to run it on top of SSL/TLS; to \nsecure HTTP traffic, ecommerce Web sites run HTTP \nover SSL/TLS, known as HTTPS. Many encryption \nschemes that support key exchange (including SSL/TLS) \nare susceptible to man-in-the-middle (MITM) attacks, \nwhereby a third party can impersonate the other two par-\nties to each other. Such an attack is possible only when \nno authentication takes place prior to the key exchange, \nbut this is a surprisingly common occurrence due to poor \nconfiguration. \n The premise of an MITM attack is that the client \ndoes not verify that it’s talking to the real server, and \nvice versa. When the MITM receives a connection from \nthe client, it immediately creates a separate connection to \nthe server (under a different encryption key) and passes \non the client’s request. When the server responds, the \nMITM decrypts the response, logs and possibly alters \nthe content, then passes it on to the client reencrypted \nwith the proper key. Neither the client or the server \nnotice that anything is wrong, because they are commu-\nnicating with each other over an encrypted connection, \nas expected. The important difference is that unknown \nto either party, the traffic is being decrypted and \nreencrypted by the MITM in transit, allowing him to \nobserve and alter the traffic. \n In the context of bots, two main attacks on encrypted \nC & C channels are possible: (1) “ gray-box ” analysis, \nwhereby the bot communicates with a local machine \nimpersonating the C & C server, and (2) a full MITM \nattack, in which the bot communicates with the true \nC & C server. Figure 8.2 shows a possible setup for both \nattacks, using the DeleGate proxy 43 for the conversion to \nand from SSL/TLS. \n 40 G. Gu, J. Zhang, and W. Lee, “ BotSniffer: Detecting botnet com-\nmand and control channels in network traffi c, ” in Proc. 15th Network \nand Distributed System Security Symposium (NDSS), San Diego, \nFebruary 2008. \n 41 F. Freiling, T. Holz, and G. Wicherski, “ Botnet tracking: Exploring \na root-cause methodology to prevent denial-of-service attacks, ” in \nProc. 10th European Symposium on Research in Computer Security \n(ESORICS), Milan, Italy, September 12 – 14, 2005. \n 42 D. Dagon, C. Zou, and W. 
Lee, “ Modeling botnet propagation \nusing time zones, ” in Proc. 13th Network and Distributed System \nSecurity Symposium (NDSS), February 2006. \n 43 “ DeleGate multi-purpose application gateway, ” www.delegate.org/\ndelegate/ (accessed May 4, 2008). \n" }, { "page_number": 160, "text": "Chapter | 8 The Botnet Problem\n127\n The first attack is valuable to determine the authen-\ntication information required to join the live botnet: the \naddress of the C & C server, the IRC channel name (if \napplicable), plus any required passwords. However, it \ndoes not allow the observer to see the interaction with \nthe larger botnet, specifically the botmaster. The sec-\nond attack reveals the full interaction with the botnet, \nincluding all botmaster commands, the botmaster password \nused to control the bots, and possibly the IP addresses of \nother bot members (depending on the configuration of \nthe C & C server). Figures 8.3 – 8.5 show the screenshots \nof the full MITM attack on a copy of Agobot configured \nto connect to its C & C server via SSL/TLS. Specifically, \n Figure 8.3 shows the botmaster’s IRC window, with his \ncommands and the bot’s responses. Figure 8.4 shows \nthe encrypted SSL/TLS trace, and Figure 8.5 shows the \ndecrypted plaintext that was observed at the DeleGate \nproxy. The botmaster password botmasterPASS is clearly \nvisible, along with the required username, botmaster . \n Armed with the botmaster username and password, \nthe observer could literally take over the botnet. He \nPlaintext\nRogue\nC&C\nServer\nLog\nReal\nC&C\nServer\nMan-in-the-Middle\nPlaintext\nDG1\nDG\nBot\nBot\n(1)\n(2)\nDG: DeleGate\nDG2\nSSL\nSSL\nSSL\n FIGURE 8.2 Setups for man-in-the-middle attacks on encrypted C & C \nchannels. \n FIGURE 8.3 Screenshot showing the botmaster’s IRC window. \n FIGURE 8.4 Screenshot showing the SSL/TLS-encrypted network traffic. \n" }, { "page_number": 161, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n128\ncould log in as the botmaster, then issue a command \nsuch as Agobot’s .bot.remove, causing all bots to dis-\nconnect from the botnet and permanently remove them-\nselves from the infected computers. Unfortunately, there \nare legal issues with this approach because it constitutes \nunauthorized access to all the botnet computers, despite \nthe fact that it is in fact a benign command to remove the \nbot software. \n Locating and Identifying the Botmaster \n Shutting down an entire botnet at once is a significant \nachievement, especially when the botnet numbers in the \ntens of thousands of members. However, there is noth-\ning stopping the botmaster from simply deploying new \nbots to infect the millions of vulnerable hosts on the \nInternet, creating a new botnet in a matter of hours. In \nfact, most of the machines belonging to the shut-down \nbotnet are likely to become infected again because the \nvulnerabilities and any attacker-installed backdoors often \nremain active, despite the elimination of the C & C serv-\ners. Botnet-hunting expert Gadi Evron agrees: “ When \nwe disable a command-and-control server, the botnet is \nimmediately recreated on another host. We’re not hurt-\ning them anymore, ” he said in a 2006 interview. 44 \n The only permanent solution of the botnet problem is \nto go after the root cause: the botmasters. Unfortunately, \nmost botmasters are very good at concealing their identi-\nties and locations, since their livelihood depends on it. 
\nTracking the botmaster to her true physical location is a \ncomplex problem that is described in detail in the next \nsection. So far, there is no published work that would \nallow automated botmaster traceback on the Internet, \nand it remains an open problem. \n 6. BOTMASTER TRACEBACK \n The botnet field is full of challenging problems: obfus-\ncated binaries, encrypted C & C channels, fast-flux \nproxies protecting central C & C servers, customized \ncommunication protocols, and many more (see Figure \n8.6 ). Arguably the most challenging task is locating the \nbotmaster. Most botmasters take precautions on multiple \n FIGURE 8.6 Botnet C & C traffic laundering. \n 44 R. Naraine, “ Is the botnet battle already lost? ” eWeek , October 16, \n2006, www.eweek.com/article2/0,1895,2029720,00.asp . \n FIGURE 8.5 Screenshot showing decrypted plaintext from the DeleGate proxy. \n" }, { "page_number": 162, "text": "Chapter | 8 The Botnet Problem\n129\nlevels to ensure that their connections cannot be traced \nto their true locations. \n The reason for the botmaster’s extreme caution is that \na successful trace would have disastrous consequences. \nHe could be arrested, his computer equipment could be \nseized and scrutinized in detail, and he could be sen-\ntenced to an extended prison term. Additionally, authori-\nties would likely learn the identities of his associates, \neither from questioning him or by searching his comput-\ners. As a result, he would never again be able to operate \nin the Internet underground and could even face violent \nrevenge from his former associates when he is released. \n In the United States, authorities have recently started \nto actively pursue botmasters, resulting in several \narrests and convictions. In November 2005, 20-year-old \nJeanson James Ancheta of California was charged with \nbotnet-related computer offenses. 45 He pleaded guilty in \nJanuary 2006 and could face up to 25 years in prison. 46 \nIn a similar case, 20-year-old Christopher Maxwell was \nindicted on federal computer charges. He is accused of \nusing his botnet to attack computers at several universi-\nties and a Seattle hospital, where bot infections severely \ndisrupted operations. 31 \n In particular, the FBI’s Operation Bot Roast has \nresulted in several high-profile arrests, both in the United \nStates and abroad. 47 The biggest success was the arrest \nof 18-year-old New Zealand native Owen Thor Walker, \nwho was a member of a large international computer \ncrime ring known as the A-Team. This group is reported \nto have infected up to 1.3 million computers with bot \nsoftware and caused about $20 million in economic \ndamage. Despite this success, Walker was only a minor \nplayer, and the criminals in control of the A-Team are \nstill at large. 32 \n Unfortunately, botmaster arrests are not very com-\nmon. The cases described here represent only several \nindividuals; thousands of botmasters around the world \nare still operating with impunity. They use sophisticated \ntechniques to hide their true identities and locations, \nand they often operate in countries with weak computer \ncrime enforcement. The lack of international coordina-\ntion, both on the Internet and in law enforcement, makes \nit hard to trace botmasters and even harder to hold them \naccountable to the law. 22 \n Traceback Challenges \n One defining characteristic of the botmaster is that he \noriginates the botnet C & C traffic. Therefore, one way to \nfind the botmaster is to track the botnet C & C traffic. 
To \nhide himself, the botmaster wants to disguise his link to \nthe C & C traffic via various traffic-laundering techniques \nthat make tracking C & C traffic more difficult. For \nexample, a botmaster can route his C & C traffic through \na number of intermediate hosts, various protocols, and \nlow-latency anonymous networks to make it extremely \ndifficult to trace. To further conceal his activities, a bot-\nmaster can also encrypt his traffic to and from the C & C \nservers. Finally, a botmaster only needs to be online \nbriefly and send small amounts of traffic to interact with \nhis botnet, reducing the chances of live traceback. Figure \n8.6 illustrates some of the C & C traffic-laundering tech-\nniques a botmaster can use. \n Stepping Stones \n The intermediate hosts used for traffic laundering are \nknown as stepping stones . The attacker sets them up in \na chain, leading from the botmaster’s true location to the \nC & C server. Stepping stones can be SSH servers, prox-\nies (such as SOCKS), IRC bouncers (BNCs), virtual pri-\nvate network (VPN) servers, or any number of network \nredirection services. They usually run on compromised \nhosts, which are under the attacker’s control and lack \naudit/logging mechanisms to trace traffic. As a result, \nmanual traceback is tedious and time-consuming, requir-\ning the cooperation of dozens of organizations whose \nnetworks might be involved in the trace. \n The major challenge posed by stepping stones is that \nall routing information from the previous hop (IP head-\ners, TCP headers, and the like) is stripped from the data \nbefore it is sent out on a new, separate connection. Only \nthe content of the packet (the application layer data) is \npreserved, which renders many existing tracing schemes \nuseless. An example of a technique that relies on rout-\ning header information is probabilistic packet marking \n(PPM). This approach was introduced by Savage et al. \nin 2000, embedding tracing information in an unused IP \nheader field. 48 Two years later, Goodrich expanded this \napproach, introducing “ randomize-and-link ” for better \nscalability. 49 Another technique for IP-level traceback \n 45 P. F. Roberts, “ California man charged with botnet offenses, ” eWeek , \nNovember 3, 2005, www.eweek.com/article2/0,1759,1881621,00.asp . \n 46 P. F. Roberts, “ Botnet operator pleads guilty, ” eWeek , January 24, \n2006, www.eweek.com/article2/0,1759,1914833,00.asp . \n 47 S. Nichols, “ FBI ‘ bot roast ’ scores string of arrests ” vnunet.\ncom , December 3, 2007, www.vnunet.com/vnunet/news/2204829/\nbot-roast-scores-string-arrests . \n 48 S. Savage, D. Wetherall, A. Karlin, and T. Anderson, “ Practical \nnetwork support for IP traceback, ” in Proc. ACM SIGCOMM 2000, \nSept. 2000, pp. 295 – 306. \n 49 M. T. Goodrich, “ Effi cient packet marking for large-scale \nIP traceback, ” in Proc. 9th ACM Conference on Computer and \nCommunications Security (CCS 2002), October 2002, pp. 117 – 126. \n" }, { "page_number": 163, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n130\nis the log/hash-based scheme introduced by Snoeren \net al. 50 and enhanced by Li et al. 51 These techniques \nwere very useful in combating the fast-spreading worms \nof the early 2000s, which did not use stepping stones. \nHowever, these approaches do not work when stepping \nstones are present, since IP header information is lost. 
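A toy simulation shows the intuition behind packet marking and why stepping stones defeat it. The sketch below implements only the simplest node-sampling variant (the schemes cited above use the more robust edge-sampling and "randomize-and-link" encodings): every router on the path overwrites a single mark field with probability p, and the victim ranks routers by how often their mark survives, which recovers the path order because marks written far from the victim are overwritten more often. The router names, path, and marking probability are made up for the example.

import random
from collections import Counter

def send_packet(path, p):
    """Forward one packet along `path` (attacker side first); each router
    overwrites the single mark field with probability p."""
    mark = None
    for router in path:
        if random.random() < p:
            mark = router
    return mark

def reconstruct(marks):
    """Victim side: rank routers by how often their mark survived.
    Routers nearer the victim overwrite farther ones, so higher counts
    mean closer to the victim."""
    counts = Counter(m for m in marks if m is not None)
    return [router for router, _ in counts.most_common()]

if __name__ == "__main__":
    random.seed(1)
    # Hypothetical attack path, listed from the attacker toward the victim.
    path = ["R5", "R4", "R3", "R2", "R1"]
    marks = [send_packet(path, p=0.2) for _ in range(20000)]
    # Typically prints the path in victim-to-attacker order: R1 ... R5.
    print("victim->attacker order:", reconstruct(marks))

A stepping stone breaks this chain completely: it terminates the inbound connection and opens a new one, so whatever marks accumulated on the first leg are discarded along with the original IP headers.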
\n Multiple Protocols \n Another effective and efficient method to disguise the \nbotmaster is to launder the botnet C & C traffic across \nother protocols. Such protocol laundering can be \nachieved by either protocol tunneling or protocol trans-\nlation . For example, a sophisticated botmaster could \nroute its command and control traffic through SSH (or \neven HTTP) tunnels to reach the command and control \ncenter. The botmaster could also use some intermediate \nhost X as a stepping stone, use some real-time commu-\nnication protocols other than IRC between the botmas-\nter host and host X , and use IRC between the host X and \nthe IRC server. In this case, host X performs the protocol \ntranslation at the application layer and serves as a con-\nduit of the botnet C & C channel. One protocol that is \nparticularly suitable for laundering the botnet command \nand control is instant messaging (IM), which supports \nreal-time text-based communication between two or \nmore people. \n Low-Latency Anonymous Network \n Besides laundering the botnet C & C across stepping \nstones and different protocols, a sophisticated botmaster \ncould anonymize its C & C traffic by routing it through \nsome low-latency anonymous communication systems. \nFor example, Tor — the second generation of onion rout-\ning — uses an overlay network of onion routers to pro-\nvide anonymous outgoing connections and anonymous \nhidden services. The botmaster could use Tor as a vir-\ntual tunnel to anonymize his TCP-based C & C traffic to \nthe IRC server of the botnet. At the same time, the IRC \nserver of the botnet could utilize Tor’s hidden services \nto anonymize the IRC server of the botnet in such a way \nthat its network location is unknown to the bots and yet \nit could communicate with all the bots. \n Encryption \n All or part of the stepping stone chain can be encrypted \nto protect it against content inspection, which could \nreveal information about the botnet and botmaster. This \ncan be done using a number of methods, including SSH \ntunneling, SSL/TLS-enabled BNCs, and IPsec tun-\nneling. Using encryption defeats all content-based trac-\ning approaches, so the tracer must rely on other network \nflow characteristics, such as packet size or timing, to \ncorrelate flows to each other. \n Low-Traffic Volume \n Since the botmaster only has to connect briefly to issue \ncommands and retrieve results from his botnet, a low vol-\nume of traffic flows from any given bot to the botmaster. \nDuring a typical session, only a few dozen packets from \neach bot can be sent to the botmaster. Tracing approaches \nthat rely on analysis of packet size or timing will most \nlikely be ineffective because they typically require a \nlarge amount of traffic (several hundred packets) to cor-\nrelate flows with high statistical confidence. Examples of \nsuch tracing approaches 52 , 53 , 54 all use timing information \nto embed a traceable watermark. These approaches can \nhandle stepping stones, encryption, and even low-latency \nanonymizing network, but they cannot be directly used \nfor botmaster traceback due to the low traffic volume. \n Traceback Beyond the Internet \n Even if all three technical challenges can be solved and \neven if all Internet-connected organizations worldwide \ncooperate to monitor traffic, there are additional trace-\nback challenges beyond the reach of the Internet (see \n Figure 8.7 ). 
Any IP-based traceback method assumes \nthat the true source IP belongs to the computer the \nattacker is using and that this machine can be physically \nlocated. However, in many scenario this is not true — for \nexample, (1) Internet-connected mobile phone networks, \n(2) open wireless (Wi-Fi) networks, and (3) public com-\nputers, such as those at libraries and Internet caf é s. \n 50 A. Snoeren, C. Patridge, L. A. Sanchez, C. E. Jones, F. Tchakountio, \nS. T. Kent, and W. T. Strayer, “ Hash-based IP traceback, ” in Proc. ACM \nSIGCOMM 2001, September 2001, pp. 3 – 14. \n 51 J. Li, M. Sung, J. Xu, and L. Li, “ Large-scale IP traceback in high-\nspeed internet: Practical techniques and theoretical foundation, ” in Proc. \n2004 IEEE Symposium on Security and Privacy, IEEE, 2004. \n 52 X. Wang, S. Chen, and S. Jajodia, “ Network fl ow watermarking \nattack on low-latency anonymous communication systems, ” in Proc. \n2007 IEEE Symposium on Security and Privacy, May 2007. \n 53 X. Wang, S. Chen, and S. Jajodia, “ Tracking anonymous, peer-\nto-peer VoIP calls on the internet, ” in Proc. 12th ACM Conference on \nComputer and Communications Security (CCS 2005), October 2005. \n 54 X. Wang and D. Reeves, “ Robust correlation of encrypted attack \ntraffi c through stepping stones by manipulation of interpacket delays, ” \nin Proc. 10th ACM Conference on Computer and Communications \nSecurity (CCS 2003), October 2003, pp. 20 – 29. \n" }, { "page_number": 164, "text": "Chapter | 8 The Botnet Problem\n131\n Most modern cell phones support text-messaging \nservices such as Short Message Service (SMS), and \nmany smart phones also have full-featured IM soft-\nware. As a result, the botmaster can use a mobile device \nto control her botnet from any location with cell phone \nreception. To enable her cell phone to communicate \nwith the C & C server, a botmaster needs to use a proto-\ncol translation service or a special IRC client for mobile \nphones. She can run the translation service on a com-\npromised host, an additional stepping stone. For an IRC \nbotnet, such a service would receive the incoming SMS \nor IM message, then repackage it as an IRC message and \nsend it on to the C & C server (possibly via more step-\nping stones), as shown in Figure 8.7 . To eliminate the \nneed for protocol translation, the botmaster can run a \nnative IRC client on a smart phone with Internet access. \nExamples of such clients are the Java-based WLIrc 55 \nand jmIrc 56 open source projects. In Figure 8.8 , a Nokia \nsmartphone is shown running MSN Messenger, control-\nling an Agobot zombie via MSN-IRC protocol transla-\ntion. On the screen, a new bot has just been infected and \nhas joined the IRC channel following the botmaster’s \n .scan.dcom command. \n When a botnet is being controlled from a mobile \ndevice, even a perfect IP traceback solution would only \nreach as far as the gateway host that bridges the Internet \nand the carrier’s mobile network. From there, the tracer \ncan ask the carrier to complete the trace and disclose the \nname and even the current location of the cell phone’s \nowner. However, there are several problems with this \napproach. First, this part of the trace again requires lots \nof manual work and cooperation of yet another organiza-\ntion, introducing further delays and making a real-time \ntrace unlikely. 
Second, the carrier won’t be able to deter-\nmine the name of the subscriber if he is using a prepaid \n 55 “ WLIrc wireless IRC client for mobile phones, ” http://wirelessirc.\nsourceforge.net/ (accessed May 3, 2008). \n 56 “ jmIrc: Java mobile IRC-client (J2ME), ” http://jmirc.sourceforge.\nnet/ (accessed May 3, 2008). \n FIGURE 8.7 Using a cell phone to evade Internet-based traceback. \n FIGURE 8.8 Using a Nokia smartphone to control an Agobot-based \nbotnet. (Photo courtesy of Ruishan Zhang.) \n" }, { "page_number": 165, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n132\ncell phone. Third, the tracer could obtain an approximate \nphysical location based on cell site triangulation. Even if \nhe can do this in real time, it might not be very useful \nif the botmaster is in a crowded public place. Short of \ndetaining all people in the area and checking their cell \nphones, police won’t be able to pinpoint the botmaster. \n A similar situation arises when the botmaster uses \nan unsecured Wi-Fi connection. This could either be a \npublic access point or a poorly configured one that is \nintended to be private. With a strong antenna, the bot-\nmaster can be located up to several thousand feet away. \nIn a typical downtown area, such a radius can contain \nthousands of people and just as many computers. Again, \nshort of searching everyone in the vicinity, the police \nwill be unable to find the botmaster. \n Finally, many places provide public Internet access \nwithout any logging of the users ’ identities. Prime exam-\nples are public libraries, Internet caf é s, and even the busi-\nness centers at most hotels. In this scenario, a real-time \ntrace would actually find the botmaster, since he would \nbe sitting at the machine in question. However, even if \nthe police are late by only several minutes, there might \nno longer be any record of who last used the computer. \nPhysical evidence such as fingerprints, hair, and skin \ncells would be of little use, since many people use these \ncomputers each day. Unless a camera system is in place \nand it captured a clear picture of the suspect on his way \nto/from the computer, the police again will have no leads. \n This section illustrates a few common scenarios \nwhere even a perfect IP traceback solution would fail \nto locate the botmaster. Clearly, much work remains on \ndeveloping automated, integrated traceback solutions that \nwork across various types of networks and protocols. \n 7. SUMMARY \n Botnets are one of the biggest threats to the Internet \ntoday, and they are linked to most forms of Internet \ncrime. Most spam, DDoS attacks, spyware, click fraud, \nand other attacks originate from botnets and the shad-\nowy organizations behind them. Running a botnet is \nimmensely profitable, as several recent high-profile \narrests have shown. Currently, many botnets still rely \non a centralized IRC C & C structure, but more and more \nbotmasters are using P2P protocols to provide resilience \nand avoid a single point of failure. A recent large-scale \nexample of a P2P botnet is the Storm Worm, widely cov-\nered in the media. \n A number of botnet countermeasures exist, but most \nare focused on bot detection and removal at the host and \nnetwork level. 
Some approaches exist for Internet-wide \ndetection and disruption of entire botnets, but we still \nlack effective techniques for combating the root of the \nproblem: the botmasters who conceal their identities and \nlocations behind chains of stepping-stone proxies. \n The three biggest challenges in botmaster traceback \nare stepping stones, encryption, and the low traffic vol-\nume. Even if these problems can be solved with a tech-\nnical solution, the trace must be able to continue beyond \nthe reach of the Internet. Mobile phone networks, open \nwireless access points, and public computers all provide \nan additional layer of anonymity for the botmasters. \n Short of a perfect solution, even a partial trace-\nback technique could serve as a very effective deter-\nrent for botmasters. With each botmaster that is located \nand arrested, many botnets will be eliminated at once. \nAdditionally, other botmasters could decide that the \nrisks outweigh the benefits when they see more and \nmore of their colleagues getting caught. Currently, the \neconomic equation is very simple: Botnets can generate \nlarge profits with relatively low risk of getting caught. \nA botmaster traceback solution, even if imperfect, would \ndrastically change this equation and convince more bot-\nmasters that it simply is not worth the risk of spending \nthe next 10 – 20 years in prison. \n" }, { "page_number": 166, "text": "133\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Intranet Security \n Bill Mansoor \n Information Systems Audit and Control Association (ISACA) \n Chapter 9 \n Headline drama like these in the mainstream media are \nembarrassing nightmares to top brass in any large corpo-\nration. These events have a lasting impact on a company’s \nbottom line because the company reputation and customer \ntrust take a direct hit. Once events like these transpire, \ncustomers and current and potential investors never look \nat the company in the same trusting light again, regard-\nless of remediation measures. The smart thing, then, is \nto avoid this kind of limelight. The onus of preventing \nsuch embarrassing security gaffes falls squarely on the \nshoulders of IT security chiefs (CISOs and security offic-\ners), who are sometimes hobbled by unclear mandates \nfrom government regulators and lack of sufficient budg-\neting to tackle the mandates. \n However, federal governments across the world are \nnot taking breaches of personal data lightly (see side bar, \n “ TJX: Data Breach with 45 Million Data Records Stolen ” ). \nIn view of a massive plague of publicized data thefts in the \npast decade, recent mandates such as the Health Insurance \nPortability and Accountability Act (HIPAA), Sarbanes-\nOxley, and the Payment Card Industry-Data Security \nStandard (PCI-DSS) Act within the United States now have \nteeth. These go so far as to spell out stiff fines and personal \njail sentences for CEOs who neglect data breach issues. \n As seen in the TJX case, intranet data breaches can \nbe a serious issue, impacting a company’s goodwill in \n 1 Jake, Tapper, and Kirit, Radia, “ State Department contract employees \nfi red, another disciplined for looking at passport fi le, ” ABCnews.com, \nMarch 21, 2008, http://abcnews.go.com/Politics/story?id \u0003 4492773 & \npage \u0003 1 . \n 2 Laptop security blog, Absolute Software, http://blog.absolute.com/\ncategory/real-theft-reports/ . 
\n 3 John Leyden, “ eBayed VPN kit hands over access to council net-\nwork ” , theregister.co.uk, September 29, 2008, www.theregister.co.uk/\n2008/09/29/second_hand_vpn_security_breach . \n 4 Bob, Coffi eld, “ Second criminal conviction under HIPAA, ” Health \nCare Law Blog, March 14, 2006, http://healthcarebloglaw.blogspot.\ncom/2006/03/second-criminal-conviction-under-hipaa.html \n 5 “ TJX identity theft saga continues: 11 charged with pilfering mil-\nlions of credit cards, ” Networkworld.com magazine, August 5, 2008, \n www.networkworld.com/community/node/30741?nwwpkg \u0003 breaches?\nap1 \u0003 rcb . \n 6 “ The TJX Companies, Inc. updates information on computer systems \nintrusion, ” February 21, 2007, www.tjx.com/Intrusion_Release_email.pdf . \n Intranet Security as News in the Media \n – “ State Department Contract Employees Fired, \nAnother Disciplined for Looking at Passport File ” 1 \n – “ Laptop stolen with a million customer data \nrecords ” 2 \n – “ eBayed VPN kit hands over access to council \nnetwork ” 3 \n – “ (Employee) caught selling personal and medical \ninformation about . . . FBI agent to a confidential \nsource . . . for $500. ” 4 \n – “ Data thieves gain access to TJX through unsecured \nwireless access point ” 5 \n TJX: Data Breach with 45 Million Data Records \nStolen \n The largest-scale data breach in history occurred in \nearly 2007 at TJX, the parent company for the TJ Maxx, \nMarshalls, and HomeGoods retail chains. \n In the largest identity-theft case ever investigated by \nthe U.S. Department of Justice, 11 people were con-\nvicted of wire fraud in the case. The primary suspect \nwas found to perpetrate the intrusion by wardriving and \ntaking advantage of an unsecured Wi-Fi access point to \nget in and set up a “ sniffer ” software instance to capture \ncredit-card information from a database. 12 \n Though the intrusion was earlier believed to have taken \nplace from May 2006 to January 2007, TJX later found that \nit took place as early as July 2005. The data compromised \nincluded portions of the credit- and debit-card transac-\ntions for approximately 45 million customers. 6 \n" }, { "page_number": 167, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n134\nthe open marketplace as well as spawning class-action \nlawsuits. 7 \n Gone are the days when intranet security was a super-\nficial exercise; security inside the firewall was all but \nnonexistent. There was a feeling of implicit trust in the \ninternal user. After all, if you hired that person, trained \nhim for years, how could you not trust him? \n In the new millennium, the Internet has come of age, and \nso have its users. The last largely computer-agnostic gen-\neration has exited the user scene; their occupational shoes \nhave been filled with the “ X and Y ” generations. Many of \nthese young people have grown up with the Internet, often \nfamiliar with it since elementary school. It is not uncommon \ntoday to find young college students who started their pro-\ngramming interests in the fifth or sixth grade. \n With such a level of computer-savvy in users, the game \nof intranet security has changed (see side bar, “ Network \nBreach Readiness: Many Are Still Complacent ” ). \nResourceful as ever, these new users have gotten used to \nthe idea of being hyperconnected to the Internet using \nmobile technology such as personal digital assistants \n(PDAs) and smart phones and firewalled barriers. 
For \na corporate intranet that uses older ideas of using access \ncontrol as the cornerstone of data security, such mobile \naccess to the Internet at work needs careful analysis and \ncontrol. The idea of building a virtual moat around your \nwell-constructed castle (investing in a firewall and hoping \nto call it an intranet) is gone. Hyperconnected “ knowledge \nworkers ” with laptops, PDAs and USB keys that have \nwhole operating systems built in have made sure of it. \n If we could reuse the familiar vehicle ad tagline of the \n1980s, we would say that the new intranet is not “ your \nfather’s intranet anymore. ” The intranet as just a simple \nplace to share files and to list a few policies and procedures \nhas ceased to be. The types of changes can be summed \nup in the following list of features, which shows that the \nintranet has become a combined portal as well as a public \ndashboard. Some of the features can include: \n ● A searchable corporate personnel directory of phone \nnumbers by department. Often the list is searchable \nonly if the exact name is known. \n ● Expanded activity guides and a corporate calendar \nwith links for various company divisions. \n ● Several RSS feeds for news according to divisions \nsuch as IT, HR, Finance, Accounting, and \nPurchasing. \n ● Company blogs (weblogs) by top brass that talk \nabout the current direction for the company in \nreaction to recent events, a sort of “ mission statement \nof the month. ” \n ● Intranets frequently feature a search engine for \nsearching company information, often helped by a \nsearch appliance from Google. Microsoft also has its \nown search software on offer that targets corporate \nintranets. \n ● One or several “ wiki ” repositories for company \nintellectual property, some of it of a mission-critical \nnature. Usually granular permissions are applied for \naccess here. One example could be court documents \nfor a legal firm with rigorous security access applied. \n ● A section describing company financials and other \nmission-critical indicators. This is often a separate \nWeb page linked to the main intranet page. \n ● A “ live ” section with IT alerts regarding specific \ndowntimes, outages, and other critical time-sensitive \ncompany notifications. Often embedded within the \nportal, this is displayed in a “ ticker-tape ” fashion or \nlike an RSS-type dynamic display. \n Of course, this list is not exhaustive; some intranets \nhave other unique features not listed here. But in any case, \nintranets these days do a lot more than simply list corpo-\nrate phone numbers. \n Recently, knowledge management systems have pre-\nsented another challenge to intranet security postures. \nCompanies that count knowledge as a prime protected asset \n(virtually all companies these days) have started deploying \n “ mashable ” applications that combine social networking \n(such as Facebook and LinkedIn), texting, and microblog-\nging (such as Twitter) features to encourage employees to \n “ wikify ” their knowledge and information within intranets. \n 8 “ Ponemon Institute announces result of survey assessing the business \nimpact of a data security breach, ” May 15, 2007, www.ponemon.org/\npress/Ponemon_Survey_Results_Scott_and_Scott_FINAL1.pdf . \n 7 “ TJX class action lawsuit settlement site, ” The TJX Companies, Inc., \nand Fifth Third Bancorp, Case No. 07-10162, www.tjxsettlement.com/ . 
\n Network Breach Readiness: Many Are Still \nComplacent \n The level of readiness for breaches among IT shops \nacross the country is still far from optimal. The Ponemon \nInstitute, a security think tank, surveyed some industry \npersonnel and came up with some startling revelations. \nHopefully these will change in the future: \n ● Eighty-five percent of industry respondents reported \nthat they had experienced a data breach. \n ● Of those responding, 43% had no incident response \nplan in place, and 82% did not consult legal counsel \nbefore responding to the incident. \n ● Following a breach, 46% of respondents still had not \nimplemented encryption on portable devices (lap-\ntops, PDAs) with company data stored on them. 8 \n" }, { "page_number": 168, "text": "Chapter | 9 Intranet Security\n135\nOne of the bigger vendors in this space, Socialtext, has \nintroduced a mashable wiki app that operates like a corpo-\nrate dashboard for intranets. 9 , 10 \n Socialtext has individual widgets, one of which, \n “ Socialtext signals, ” is a microblogging engine. In the cor-\nporate context, microblogging entails sending short SMS \nmessages to apprise colleagues of recent developments in \nthe daily routine. Examples could be short messages on \nprogress on any major project milestone — for example, \njoining up major airplane assemblies or getting Food and \nDrug Administration (FDA) testing approval for a special \nexperimental drug. \n These emerging scenarios present special challenges \nto security personnel guarding the borders of an intranet. \nThe border as it once existed has ceased to be. One cannot \nblock stored knowledge from leaving the intranet when a \nmajority of corporate mobile users are accessing intranet \nwikis from anywhere using inexpensive mini-notebooks \nthat are given away with cellphone contracts. 11 \n If we consider the impact of national and international \nprivacy mandates on these situations, the situation is com-\npounded further for C-level executives in multinational \ncompanies who have to come up with responses to pri-\nvacy mandates in each country in which the company does \nbusiness. The privacy mandates regarding private customer \ndata have always been more stringent in Europe than in \nNorth America, which is a consideration for doing busi-\nness in Europe. \n It is hard enough to block entertainment-related \nFlash video traffic from time-wasting Internet abuse \nwithout blocking a video of last week’s corporate meet-\ning at headquarters. Only letting in traffic on an excep-\ntion basis becomes untenable or impractical because of \na high level of personnel involvement needed for every \nongoing security change. Simply blocking YouTube.com \nor Vimeo.com is not sufficient. Video, which has myriad \nlegitimate work uses nowadays, is hosted on all sorts of \ncontent-serving (caching and streaming) sites worldwide, \nwhich makes it well near impossible to block using Web \nfilters. The evolution of the Internet Content Adaptation \nProtocol (ICAP), which standardizes Web site categories \nfor content-filtering purposes, is under way. However, \nICAP still does not solve the problem of the dissolving \nnetworking “ periphery. ” 12 \n Guarding movable and dynamic data — which may be \nmoving in and out of the perimeter without notice, flout-\ning every possible mandate — is a key feature of today’s \nintranet. 
The dynamic nature of data has rendered the \ntraditional confidentiality, integrity, and availability \n(CIA) architecture somewhat less relevant. The chang-\ning nature of data security necessitates some specialized \nsecurity considerations: \n ● Intranet security policies and procedures (P & Ps) are \nthe first step toward a legal regulatory framework. \nThe P & Ps needed on any of the security controls \nlisted below should be compliant with federal and \nstate mandates (such as HIPAA, Sarbanes-Oxley, the \nEuropean Directive 95/46/EC on the protection of \npersonal data, and PCI-DSS, among others). These \nP & Ps have to be signed off by top management and \nplaced on the intranet for review by employees. There \nshould be sufficient teeth in all procedural sections to \nenforce the policy, explicitly spelling out sanctions \nand other consequences of noncompliance, leading up \nto discharge. \n ● To be factual, none of these government mandates \nspell out details on implementing any security con-\ntrols. That is the vague nature of federal and inter-\nnational mandates. Interpretation of the security \ncontrols is better left after the fact to an entity such \nas the National Institute of Standards and Technology \n(NIST) in the United States or the Geneva-based \nInternational Organization for Standardization (ISO). \nThese organizations have extensive research and pub-\nlication guidance for any specific security initiative. \nMost of NIST’s documents are offered as free down-\nloads from its Web site. 13 ISO security standards such \nas 27002~27005 are also available for a nominal fee \nfrom the ISO site. \n Policies and procedures, once finalized, need to be \nautomated as much as possible (one example is manda-\ntory password changes every three months). Automating \npolicy compliance takes the error-prone human factor \nout of the equation (see side bar, “ Access Control in the \nEra of Social Networking ” ). There are numerous soft-\nware tools available to help accomplish security policy \nautomation. \n 9 Mowery, James, “ Socialtext melds media and collaboration ” , \ncmswire.com, October 8, 2008, www.cmswire.com/cms/enterprise-20/\nsocialtext-melds-media-and-collaboration-003270.php . \n 10 Rob, Hof, “Socialtext 3.0: Will wikis fi nally fi nd their place in busi-\nness?” Businessweek.com magazine, September 30, 2008, www.business-\nweek.com/the_thread/techbeat/archives/2008/09/socialtext_30_i.html . \n 11 Hickey, Matt, “ MSI’s 3.5G Wind 120 coming in November, offer \nsubsidized by Taiwanese Telecom, ” Crave.com, October 20, 2008, http://\nnews.cnet.com/8301-17938_105-10070911-1.html?tag \u0003 mncol;title . \n 12 Network Appliance, Inc., RFC Standards white paper for Internet \nContent Adaptation Protocol (ICAP), July 30, 2001, www.content-\nnetworking.com/references.html . \n 13 National Institute of Standards and Technology, Computer Security \nResource Center, http://csrc.nist.gov/ . \n" }, { "page_number": 169, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n136\n 1. PLUGGING THE GAPS: NAC AND \nACCESS CONTROL \n The first priority of an information security officer in \nmost organizations is to ensure that there is a relevant \ncorporate policy on access controls. Simple on the sur-\nface, the subject of access control is often complicated \nby the variety of ways the intranet is connected to the \nexternal world. 
\n Remote users coming in through traditional or SSL \n(browser-based) virtual private networks (VPNs), con-\ntrol over use of USB keys, printouts, and CD-ROMs all \nrequire that a comprehensive endpoint security solution be \nimplemented. \n The past couple of years have seen large-scale adop-\ntion of network access control (NAC) products in the mid-\nlevel and larger IT shops to manage endpoint security. \nEndpoint security ensures that whomever is plugging into \nor accessing any hardware anywhere within the intranet \nhas to comply with the minimum baseline corporate secu-\nrity policy standards. This can include add-on access cre-\ndentials but goes far beyond access. Often these solutions \nensure that traveling corporate laptops are compliant with \na minimum patching level, scans, and antivirus definition \nlevels before being allowed to connect to the intranet. \n 14 “ Juniper and Microsoft hook up for NAC work, ” May 22, 2007, \nPHYSORG.com, www.physorg.com/news99063542.html . \n 15 Bocek, Kevin, “ What does a data breach cost? ” SCmagazine.com, \nJuly 2, 2007, www.scmagazineus.com/What-does-a-data-breach-cost/\narticle/35131 . \n Access Control in the Era of Social Networking \n In an age in which younger users have grown up with \nsocial networking sites as part of their digital lives, cor-\nporate intranet sites are finding it increasingly difficult \nto block them from using these sites at work. Depending \non the company, some are embracing social network-\ning as part of their corporate culture; others, especially \ngovernment entities, are actively blocking these sites. \nDetractors mention as concerns wasted bandwidth, lost \nproductivity, and the possibility of infections with spy-\nware and worms. \n However, blocking these sites can be difficult because \nmost social networking and video sites such as Vimeo \nand YouTube can use port 80 to vector Flash videos into \nan intranet — which is wide open for HTTP access. Flash \nvideos have the potential to provide a convenient Trojan \nhorse for malware to get into the intranet. \n To block social networking sites, one needs to block \neither the Social Networking category or block the spe-\ncific URLs (such as YouTube.com) for these sites in the \nWeb-filtering proxy appliance. Flash videos are rarely \ndownloaded from YouTube itself. More often a redirected \ncaching site is used to send in the video. The caching \nsites also need to be blocked; this is categorized under \nContent Servers. \n The NAC appliances that enforce these policies often \nrequire that a NAC fat client is installed on every PC and \nlaptop. This rule can be enforced during logon using a \nlogon script. The client can also be a part of the standard \nOS image for deploying new PCs and laptops. \n Microsoft has built a NAC-type framework into \nsome versions of its client OSs (Vista and XP SP3) to \nease compliance with its NAC server product called \nMS Network Policy Server, which closely works with \nits Windows 2008 Server product (see side bar, “ The \nCost of a Data Breach ” ). The company has been able \nto convince quite a few industry networking heavy-\nweights (notably Cisco and Juniper) to adopt its NAP \nstandard. 14 \n Essentially the technology has three parts: a policy-\nenforceable client, a decision point, and an enforcement \npoint. The client could be an XP SP3 or Vista client \n(either a roaming user or guest user) trying to connect \nto the company intranet. 
The decision point in this case \nwould be the Network Policy Server product, checking \nto see whether the client requesting access meets the \nminimum baseline to allow it to connect. If it does not, \nthe decision point product would pass this data on to the \nenforcement point, a network access product such as a \nrouter or switch, which would then be able to cut off \naccess. \n The scenario would repeat itself at every connection \nattempt, allowing the network’s health to be maintained \n The Cost of a Data Breach \n ● As of July 2007, the average breach cost per incident \nwas $4.8 million. \n ● This works out to $182 per exposed record. \n ● It represents an increase of more than 30% from 2005. \n ● Thirty-five percent of these breaches involved the loss \nor theft of a laptop or other portable device. \n ● Seventy percent were due to a mistake or malicious \nintent by an organization’s own staff. \n ● Since 2005 almost 150 million individuals ’ identifia-\nble information has been compromised due to a data \nsecurity breach. \n ● Nineteen percent of consumers notified of a data \nbreach discontinued their relationship with the busi-\nness, and a further 40% considered doing so. 15 \n" }, { "page_number": 170, "text": "Chapter | 9 Intranet Security\n137\non an ongoing basis. Microsoft’s NAP page has more \ndetails and animation to explain this process. 16 \n Access control in general terms is a relationship triad \namong internal users, intranet resources, and the actions \ninternal users can take on those resources. The idea is to \ngive users only the least amount of access they require to \nperform their job. The tools used to ensure this in Windows \nshops utilize Active Directory for Windows logon script-\ning and Windows user profiles. Granular classification is \nneeded for users, actions, and resources to form a logical \nand comprehensive access control policy that addresses \nwho gets to connect to what, yet keeping the intranet safe \nfrom unauthorized access or data-security breaches. Quite \na few off-the-shelf solutions geared toward this market \noften combine inventory control and access control under \na “ desktop life-cycle ” planning umbrella. \n Typically, security administrators start with a “ Deny –\n All ” policy as a baseline before slowly building in the \naccess permissions. As users migrate from one department \nto another, are promoted, or leave the company, in large \norganizations this job can involve one person by herself. \nThis person often has a very close working relationship \nwith Purchasing, Helpdesk, and HR, getting coordination \nand information from these departments on users who \nhave separated from the organization and computers that \nhave been surplused, deleting and modifying user accounts \nand assignments of PCs and laptops. \n Helpdesk software usually has an inventory control \ncomponent that is readily available to Helpdesk person-\nnel to update and/or pull up to access details on computer \nassignments and user status. Optimal use of form automa-\ntion can ensure that these details occur (such as deleting a \nuser on the day of separation) to avoid any possibility of \nan unwelcome data breach. \n 2. MEASURING RISK: AUDITS \n Audits are another cornerstone of a comprehensive intra-\nnet security policy. To start an audit, an administrator \nshould know and list what he is protecting as well as \nknowing the relevant threats and vulnerabilities to those \nresources. 
Assets that need protection can be classified as either tangible or intangible. Tangible assets are, of course, removable media (USB keys), PCs, laptops, PDAs, Web servers, networking equipment, DVR security cameras, and employees' physical access cards. Intangible assets can include company intellectual property such as corporate email and wikis, user passwords, and, especially for HIPAA and Sarbanes-Oxley mandates, personally identifiable health and financial information, which the company could be legally liable to protect.

Threats can include theft of USB keys, laptops, PDAs, and PCs from company premises, resulting in a data breach (for tangible assets) and weak passwords and unhardened operating systems in servers (for intangible assets).

Once a correlated listing of assets and associated threats and vulnerabilities has been made, we have to measure the impact of a breach, which is known as risk. The common rule of thumb to measure risk is:

Risk = Value of asset × Threat × Vulnerability

It is obvious that an Internet-facing Web server faces greater risk and requires priority patching and virus scanning because the vulnerability and threat components are high in that case (these servers routinely get sniffed and scanned over the Internet by hackers looking to find holes in their armor). However, this formula can standardize the priority list so that the actual audit procedure (typically carried out weekly or monthly by a vulnerability-scanning device) is standardized by risk level. Vulnerability-scanning appliances usually scan server farms and networking appliances only because these are high-value targets within the network for hackers who are looking for either unhardened server configurations or network switches with default factory passwords left on by mistake. To illustrate the situation, look at Figure 9.1, which shows an SQL injection attack on a corporate database. 17

The value of an asset is subjective and can be assessed only by the IT personnel in that organization (see sidebar, "Questions for a Nontechnical Audit of Intranet Security"). If the IT staff has an ITIL (Information Technology Infrastructure Library) process under way, the value of an asset will often already have been classified and can be used. Otherwise, a small spreadsheet can be created with classes of various tangible and intangible assets (as part of a hardware/software cataloguing exercise) and values assigned that way.

16 NAP program details, Microsoft.com, www.microsoft.com/windowsserver2008/en/us/nap-features.aspx.
17 "Web application security — check your site for web application vulnerabilities," www.acunetix.com/websitesecurity/webapp-security.htm.

Questions for a Nontechnical Audit of Intranet Security
Is all access (especially to high-value assets) logged?
In case of laptop theft, is encryption enabled so that the records will be useless to the thief?
Are passwords verifiably strong enough to comply with the security policy? Are they changed frequently and held to strong encryption standards?

3. GUARDIAN AT THE GATE: AUTHENTICATION AND ENCRYPTION

To most lay users, authentication in its most basic form is two-factor authentication — meaning a username and a password.
Although adding further factors (such as \nadditional autogenerated personal identification numbers \n[PINs] and/or biometrics) makes authentication stronger \nby magnitudes, one can do a lot with just the password \nwithin a two-factor situation. Password strength is deter-\nmined by how hard the password is to crack using a \npassword-cracker application that uses repetitive tries \nusing common words (sometimes from a stored diction-\nary) to match the password. Some factors will prevent \nthe password from being cracked easily and make it a \nstronger password: \n ● Password length (more than eight characters) \n ● Use of mixed case (both uppercase and lowercase) \n ● Use of alphanumeric characters (letters as well as \nnumbers) \n ● Use of special characters (such as !, ?, %, and #) \n The ACL in a Windows AD environment can be \ncustomized to demand up to all four factors in the \n FIGURE 9.1 SQL injection attack. Source: © acunetix.com \n Are all tangible assets (PCs, laptops, PDAs, Web servers, \nnetworking equipment) tagged with asset tags? \n Is the process for surplusing obsolete IT assets secure \n(meaning, are disks wiped for personally identifiable \ndata before surplusing happens)? \n Is email and Web usage logged? \n Are peer-to-peer (P2P) and instant messaging (IM) usage \ncontrolled? \n Based on the answers you get (or don’t), you can start \nthe security audit procedure by finding answers to these \nquestions. \n" }, { "page_number": 172, "text": "Chapter | 9 Intranet Security\n139\n Basic Ways to Prevent Wi-Fi Intrusions in Corporate Intranets \n 1. Reset and customize the default Service Set Identifier \n(SSID) or Extended Service Set Identifier (ESSID) for the \naccess point device before installation. \n 2. Change the default admin password. \n 3. Install a RADIUS server, which checks for laptop user \ncredentials from an Active Directory database (ACL) \nfrom the same network before giving access to the wire-\nless laptop. See Figures 9.2 and 9.3 for illustrated expla-\nnations of the process. \n 4. Enable WPA or WPA2 encryption, not WEP, which is \neasily cracked. \n 5. Periodically try to wardrive around your campus and \ntry to sniff (and disable) nonsecured network-connected \nrogue access points set up by naïve users. \n 6. Document the wireless network by using one of the \nleading wireless network management software pack-\nages made for that purpose. \nsetting or renewal of a password, which will render the \npassword strong. \n Prior to a few years ago, the complexity of a pass-\nword (the last three items in the preceding list) was \nfavored as a measure of strength in passwords. However, \nthe latest preference as of this writing is to use uncom-\nmon passwords — joined-together sentences to form \npassphrases that are quite long but don’t have much in \nthe way of complexity. Password authentication ( “ what \nyou know ” ) as two-factor authentication is not as secure \nas adding a third factor to the equation (a dynamic token \npassword). Common types of third-factor authentication \ninclude biometrics (fingerprint scan, palm scan, or retina \nscan — in other words, “ what you are ” ) and token-type \nauthentication (software or hardware PIN – generating \ntokens — that is, “ what you have ” ). \n Proximity or magnetic swipe cards and tokens have \nseen common use for physical premises-access authen-\ntication in high-security buildings (such as financial and \nR & D companies) but not for network or hardware access \nwithin IT. 
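The four strength factors just listed lend themselves to automated checking. Below is a minimal, illustrative Python sketch of such a check; the thresholds and the particular set of special characters are assumptions chosen for the example rather than settings from any specific product.

```python
import re

def password_strength_issues(password):
    """Check a candidate password against the four factors listed above.

    Returns the list of unmet factors; an empty list means all four are satisfied.
    """
    issues = []
    if len(password) <= 8:
        issues.append("length: use more than eight characters")
    if not (re.search(r"[a-z]", password) and re.search(r"[A-Z]", password)):
        issues.append("mixed case: use both uppercase and lowercase letters")
    if not (re.search(r"[A-Za-z]", password) and re.search(r"\d", password)):
        issues.append("alphanumeric: use letters as well as numbers")
    if not re.search(r"[!?%#@$^&*]", password):
        issues.append("special characters: include symbols such as !, ?, % or #")
    return issues

print(password_strength_issues("Summer09"))               # too short, no special character
print(password_strength_issues("correct!Horse9battery"))  # satisfies all four factors -> []
```

In practice the directory service itself (for example, an Active Directory password policy) enforces such rules at password-change time; a standalone check like this is mainly useful for audits and user awareness training.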
\n When remote or teleworker employees connect to the \nintranet via VPN tunnels or Web-based SSL VPNs (the \noutward extension of the intranet once called an extranet ), \nthe connection needs to be encrypted with strong 3DES \nor AES type encryption to comply with patient data and \nfinancial data privacy mandates. The standard authentica-\ntion setup is usually a username and a password, with an \nadditional hardware token-generated random PIN entered \ninto a third box. Until lately, RSA as a company was one \nof the bigger players in the hardware-token field; it inci-\ndentally also invented the RSA algorithm for public-key \nencryption. \n As of this writing, hardware tokens cost under $30 per \nuser in quantities of greater than a couple hundred pieces, \ncompared to about a $100 only a decade ago. Most \nvendors offer free lifetime replacements for hardware \ntokens. Instead of a separate hardware token, some \ninexpensive software token generators can be installed \nwithin PC clients, smart phones, and BlackBerry devices. \nTokens are probably the most cost-effective enhancement \nto security today. \n 4. WIRELESS NETWORK SECURITY \n Employees using the convenience of wireless to log into \nthe corporate network (usually via laptop) need to have \ntheir laptops configured with strong encryption to pre-\nvent data breaches. The first-generation encryption type \nknown as Wireless Equivalent Privacy (WEP) was easily \ndeciphered (cracked) using common hacking tools and \nis no longer widely used. The latest standard in wireless \nauthentication is WPA or WPA2 (802.11i), which offer \nstronger encryption compared to WEP. Though wireless \ncards in laptops can offer all the previously noted choices, \nthey should be configured with WPA or WPA2 if possible. \n There are quite a few hobbyists roaming corporate \nareas looking for open wireless access points (transmit-\nters) equipped with powerful Wi-Fi antennas and wardriv-\ning software, a common package being Netstumbler. \nWardriving was originally meant to log the presence \nof open Wi-Fi access points on Web sites (see side bar, \n “ Basic Ways to Prevent Wi-Fi Intrusions in Corporate \nIntranets ” ), but there is no guarantee that actual access \nand use ( piggybacking , in hacker terms) won’t occur, curi-\nosity being human nature. If there is a profit motive, as in \nthe TJX example, access to corporate networks will take \nplace, although the risk of getting caught and resulting \nrisk of criminal prosecution will be high. Furthermore, \ninstalling a RADIUS server is a must to check access \nauthentication for roaming laptops. \n" }, { "page_number": 173, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n140\n Note: Contrary to common belief, turning off SSID \nbroadcast won’t help unless you’re talking about a home \naccess point situation. Hackers have an extensive suite of \ntools with which to sniff SSIDs for lucrative corporate tar-\ngets, which will be broadcast anyway when connecting in \nclear text (unlike the real traffic, which will be encrypted). \nRADIUS\nAuthentication Server\nActive Directory Server\nSwitch\nSwitch\nWireless Access Point\nAuthenticating Laptop\nWireless Traffic\n FIGURE 9.2 Wireless EAP authentication using Active Directory and authentication servers. 
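Building on the wardriving and rogue access point discussion above, the sketch below compares the output of a periodic wireless sweep against an inventory of sanctioned access points and flags anything unknown or weakly encrypted. The scan records, field names, and authorized BSSID list are hypothetical; real scan data would come from a wireless survey tool rather than a hard-coded list.

```python
# Hypothetical output of a wireless survey tool: one record per access point seen.
scan_results = [
    {"ssid": "CORP-WLAN", "bssid": "00:1a:2b:3c:4d:5e", "encryption": "WPA2"},
    {"ssid": "linksys",   "bssid": "00:11:22:33:44:55", "encryption": "OPEN"},
    {"ssid": "CORP-WLAN", "bssid": "66:77:88:99:aa:bb", "encryption": "WEP"},
]

# Inventory of sanctioned access points, keyed by BSSID (the MAC of the radio).
authorized_bssids = {"00:1a:2b:3c:4d:5e"}

def flag_suspect_access_points(scan_results, authorized_bssids):
    """Flag access points that are unknown to the inventory or use weak or no encryption."""
    suspects = []
    for ap in scan_results:
        reasons = []
        if ap["bssid"].lower() not in authorized_bssids:
            reasons.append("not in authorized inventory (possible rogue AP)")
        if ap["encryption"] in ("OPEN", "WEP"):
            reasons.append("weak or missing encryption (should be WPA or WPA2)")
        if reasons:
            suspects.append((ap["ssid"], ap["bssid"], reasons))
    return suspects

for ssid, bssid, reasons in flag_suspect_access_points(scan_results, authorized_bssids):
    print(ssid, bssid, "->", "; ".join(reasons))
```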
\nTime elapsed\nLaptop\nWireless Access Point\nRADIUS Authentication Server\nYes\nNo\nVerify\nEAP Session/Encryption key\nsent/RADIUS\nEAP Session/Encryption key sent\nResponse passed on to laptop\nClient challenge from laptop\nFailure message sent to laptop\nSuccess message sent to laptop\nEAP Identity Response\nEAP Identity Challenge\nEAP over LAN (EAPOL) start\nWireless Association Response\nWireless Association Request\nEncrypted data exchange\nResponse sent over RADIUS\nChallenge sent over RADIUS\nRADIUS Deny\nEAP Success\nEAP Identity Response over RADIUS\n FIGURE 9.3 High-level wireless Extensible Authentication Protocol (EAP) workflow. \n" }, { "page_number": 174, "text": "Chapter | 9 Intranet Security\n141\n 5. SHIELDING THE WIRE: NETWORK \nPROTECTION \n Firewalls are, of course, the primary barrier to a network. \nTypically rule based, firewalls prevent unwarranted traffic \nfrom getting into the intranet from the Internet. These days \nfirewalls also do some stateful inspections within pack-\nets to peer a little into the header contents of an incom-\ning packet, to check validity — that is, to check whether a \nstreaming video packet is really what it says it is, and not \nmalware masquerading as streaming video. \n Intrusion prevention systems (IPSs) are a newer type of \ninline network appliance that uses heuristic analysis (based \non a weekly updated signature engine) to find patterns of \nmalware identity and behavior and to block malware from \nentering the periphery of the intranet. The IPS and the intru-\nsion detection system (IDS), however, operate differently. \n IDSs are typically not sitting inline; they sniff traffic \noccurring anywhere in the network, cache extensively, and \ncan correlate events to find malware. The downside of IDSs \nis that unless their filters are extensively modified, they gen-\nerate copious amounts of false positives — so much so that \n “ real ” threats become impossible to sift out of all the noise. \n IPSs, in contrast, work inline and inspect packets rap-\nidly to match packet signatures. The packets pass through \nmany hundreds of parallel filters, each containing match-\ning rules for a different type of malware threat. Most \nvendors publish new sets of malware signatures for their \nappliances every week. However, signatures for common \nworms and injection exploits such as SQL-slammer, \nCode-red, and NIMDA are sometimes hardcoded into \nthe application-specific integrated chip (ASIC) that con-\ntrols the processing for the filters. Hardware-enhancing \na filter helps avert massive-scale attacks more efficiently \nbecause it is performed in hardware, which is more rapid \nand efficient compared to software signature matching. \nIncredible numbers of malicious packets can be dropped \nfrom the wire using the former method. \n The buffers in an enterprise-class IPS are smaller \ncompared to those in IDSs and are quite fast — akin to \na high-speed switch to preclude latency (often as low as \n200 microseconds during the highest load). A top-of-the-\nline midsize IPS box’s total processing threshold for all \ninput and output segments can exceed 5 gigabits per sec-\nond using parallel processing. 18 \n However, to avoid overtaxing CPUs and for efficien-\ncy’s sake, IPSs usually block only a very limited number \nof important threats out of the thousands of malware signa-\ntures listed. 
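To make the signature-matching idea concrete, here is a deliberately simplified Python sketch of the core mechanism: payloads are checked against a set of byte patterns and dropped on any match. Real IPS filters are stateful, protocol-aware, and, as noted above, often implemented in hardware; the "signatures" below are harmless placeholders invented for the example.

```python
# Toy byte-pattern "signatures"; real filters are far more elaborate and these
# strings are placeholders, not actual exploit payloads.
SIGNATURES = {
    "sql-injection-probe": b"' OR 1=1--",
    "suspicious-shell":    b"/bin/sh -i",
}

def match_signatures(payload, signatures=SIGNATURES):
    """Return the names of all signatures whose pattern occurs in the payload."""
    return [name for name, pattern in signatures.items() if pattern in payload]

def inline_decision(payload):
    """Mimic an inline IPS: drop the packet on any match, otherwise forward it."""
    hits = match_signatures(payload)
    return ("DROP", hits) if hits else ("FORWARD", [])

print(inline_decision(b"GET /page?id=1' OR 1=1-- HTTP/1.1"))  # ('DROP', ['sql-injection-probe'])
print(inline_decision(b"GET /index.html HTTP/1.1"))           # ('FORWARD', [])
```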
Tuning IPSs can be tricky — just enough block-\ning to silence the false positive noise but making sure all \ncritical filters are activated to block important threats. \n The most important factors in designing a critical data \ninfrastructure are resiliency, robustness, and redundancy \nregarding the operation of inline appliances. Whether one \nis talking about firewalls or inline IPSs, redundancy is \nparamount (see side bar, “ Types of Redundancy for Inline \nSecurity Appliances ” ). Intranet robustness is a primary \nconcern where data has to available on a 24/7 basis. \n 18 IPS specifi cation datasheet. “ TippingPoint® intrusion prevention \nsystem (IPS) technical specifi cations, ” www.tippingpoint.com/pdf/\nresources/datasheets/400918-007_IPStechspecs.pdf . \n Types of Redundancy for Inline Security Appliances \n 1. Security appliances usually have dual power supplies \n(often hot-swappable) and are designed to be connected \nto two separate UPS devices, thereby minimizing chances \nof a failure within the appliance itself. The hot-swap \ncapability minimizes replacement time for power \nsupplies. \n 2. We can configure most of these appliances to either shut \ndown the connection or fall back to a level-two switch (in \ncase of hardware failure). If reverting to a fallback state, \nmost IPSs become basically a bump in the wire and, \ndepending on the type of traffic, can be configured to \nfail open so that traffic remains uninterrupted. Also, inex-\npensive, small third-party switchboxes are available to \nenable this failsafe high-availability option for a single IPS \nbox. The idea is to keep traffic flow active regardless of \nattacks. \n 3. IPS or firewall devices can be placed in dual-redundant \nfailover mode, either in active-active (load-sharing) or \nactive-passive (primary-secondary) mode. The devices com-\nmonly use a protocol called Virtual Router Redundancy \nProtocol (VRRP) where the secondary pings the primary \nevery second to check live status and assumes leadership \nto start processing traffic in case pings are not returned from \nthe primary. The switchover is instantaneous and transpar-\nent to most network users. Prior to the switchover, all data \nand connection settings are fully synchronized at identical \nstates between both boxes to ensure failsafe switchover. \n 4. Inline IPS appliances are relatively immune to attacks \nbecause they have highly hardened Linus/Unix operat-\ning systems and are designed from the ground up to be \nrobust and low-maintenance appliances (logs usually \nclear themselves by default). \n" }, { "page_number": 175, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n142\n Most security appliances come with syslog report-\ning (event and alert logs sent usually via port 514 UDP) \nand email notification (set to alert beyond a customizable \nthreshold) as standard. The syslog reporting can be for-\nwarded to a security events management (SEM) appli-\nance, which consolidates syslogs into a central threat \nconsole for benefit of event correlation and forwards \nwarning emails to administrators based on preset thresh-\nold criteria. Moreover, most firewalls and IPSs can be \nconfigured to forward their own notification email to \nadministrators in case of an impending threat scenario. 
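As a rough illustration of the threshold-based alerting just described, the following Python sketch listens for syslog-style UDP messages, counts events per source, and raises a notification once a source crosses a configurable threshold. The port number, threshold, and notification step are assumptions for the example; a production collector would listen on UDP 514, parse real syslog formats, and hand off to the SEM or email infrastructure already in place.

```python
import socket
from collections import Counter

ALERT_THRESHOLD = 50   # events from one source before notifying an administrator
LISTEN_PORT = 5514     # syslog normally uses UDP 514; a high port avoids needing root here

def run_collector():
    """Very small syslog-style collector: count messages per sender and flag noisy sources."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", LISTEN_PORT))
    counts = Counter()
    while True:
        data, (source_ip, _) = sock.recvfrom(4096)
        counts[source_ip] += 1
        print(f"{source_ip}: {data[:120]!r}")
        if counts[source_ip] == ALERT_THRESHOLD:
            # In a real SEM setup this step would send email via smtplib or open a ticket.
            print(f"ALERT: {source_ip} has sent {ALERT_THRESHOLD} events; notify the on-call admin")

if __name__ == "__main__":
    run_collector()
```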
\n For those special circumstances where a wireless-\ntype LAN connection is the primary one (whether \nmicrowave beam, laser beam, or satellite-type connec-\ntion), redundancy can be ensured by a secondary con-\nnection of equal or smaller capacity. For example, in \ncertain northern Alaska towns where digging trenches \ninto the hardened icy permafrost is expensive and rig-\nging wire across the tundra is impractical due to the \nextreme cold, the primary network connections between \ntowns are always via microwave link, often operating in \ndual redundant mode. \n 6. WEAKEST LINK IN SECURITY: USER \nTRAINING \n Intranet security awareness is best communicated to \nusers in two primary ways — during new employee ori-\nentation and by ongoing targeted training for users in \nvarious departments, with specific user audiences in \nmind. \n A formal security training policy should be drafted \nand signed off by management, with well-defined scopes, \nroles, and responsibilities of various individuals, such as \nthe CIO and the information security officer, and posted \non the intranet. New recruits should be given a copy of \nall security policies to sign off on before they are granted \nuser access. The training policy should also spell out the \nHR, Compliance, and PR departments ’ roles in the train-\ning program. \n Training can be given using the PowerPoint Seminar \nmethod in large gatherings before monthly “ all-hands ” \ndepartmental meetings and also via an emailed Web link \nto a Flash video format presentation. The latter can also \nbe configured to have an interactive quiz at the end, which \nshould pique audience interest on the subject and help \nthem remember relevant issues. \n As far as topics to be included in the training, any \napplicable federal or industry mandate such as HIPAA, \nSOX, PCI-DSS, or ISO 27002 should be discussed \nextensively first, followed by discussions on tackling \nsocial engineering, spyware, viruses, and so on. \n The topics of data theft and corporate data breaches \nare frequently in the news. This subject can be exten-\nsively discussed with emphasis on how to protect per-\nsonally identifiable information in a corporate setting. \nPassword policy and access control topics are always \ngood things to discuss; users at a minimum need to be \nreminded to sign off their workstations before going on \nbreak. \n 7. DOCUMENTING THE NETWORK: \nCHANGE MANAGEMENT \n Controlling the IT infrastructure configuration of a large \norganization is more about change control than other \nthings. Often the change control guidance comes from \ndocuments such as the ITIL series of guidebooks. \n After a baseline configuration is documented, \nchange control — a deliberate and methodical process \nthat ensures that any changes made to the baseline IT \nconfiguration of the organization (such as changes to \nnetwork design, AD design, and so on) — is extensively \ndocumented and authorized only after prior approval. \nThis is done to ensure that unannounced or unplanned \nchanges are not allowed to hamper the day-to-day effi-\nciency and business functions of the overall intranet \ninfrastructure. \n In most government entities, even very small changes \nare made to go through change management (CM); how-\never, management can provide leeway to managers to \napprove a certain minimal level of ad hoc change that has \nno potential to disrupt operations. 
In most organizations \nwhere mandates are a day-to-day affair, no ad hoc change \nis allowed unless it goes through supervisory-level change \nmanagement meetings. \n The goal of change management is largely to comply \nwith mandates — but for some organizations, waiting for \na weekly meeting can slow things significantly. If justi-\nfied, an emergency CM meeting can be called to approve \na time-sensitive change. \n Practically speaking, the change management pro-\ncess works like this; A formal change management \ndocument is filled out (usually a multitab online Excel \nspreadsheet) and forwarded to the change management \nombudsman (maybe a project management person). See \nthe side bar “ Change Management Spreadsheet Details \nto Submit to a CM Meeting ” for some CM form details. \n The document must have supervisory approval from \nthe requestor’s supervisor before proceeding to the \n" }, { "page_number": 176, "text": "Chapter | 9 Intranet Security\n143\nombudsman. The ombudsman posts this change docu-\nment on a section of the intranet for all other supervi-\nsors and managers within the CM committee to review \nin advance. Done this way, the change management \ncommittee, meeting in its weekly or biweekly change \napproval meetings, can voice reservations or ask clari-\nfication questions of the change-initiating person, who \nis usually present to explain the change. At the end of \nthe deliberations the decision is then voted on to either \napprove, deny, modify, or delay the change (sometimes \nwith preconditions). \n If approved, the configuration change is then made \n(usually within the following week). The post-mortem \nsection of the change can then be updated to note any \nissues that occurred during the change (such as a roll-\nback after change reversal and the causes). \n In recent years, some organizations have started to \noperate the change management collaborative process \nusing social networking tools at work. This allows dispa-\nrate flows of information, such as emails, departmental \nwikis, and file-share documents, to belong to a unified \nthread for future reference. \n 8. REHEARSE THE INEVITABLE: DISASTER \nRECOVERY \n Possible disaster scenarios can range from the mundane \nto the biblical in proportion. In intranet or general IT \nterms, successfully recovering from a disaster can mean \nresuming critical IT support functions for mission-critical \nbusiness functions. Whether such recovery is smooth \nand hassle-free depends on how prior disaster-recovery \nplanning occurs and how this plan is tested to address all \nrelevant shortcomings adequately. \n The first task when planning for disaster recovery \n(DR) is to assess the business impact of a certain type of \ndisaster on the functioning of an intranet using business \nimpact analysis (BIA). BIA involves certain metrics; \nagain, off-the shelf software tools are available to assist \nwith this effort. The scenario could be a natural hurricane-\ninduced power outage or a human-induced critical appli-\ncation crash. In any one of these scenarios, one needs \nto assess the type of impact in time, productivity, and \nfinancial terms. \n BIAs can take into consideration the breadth of \nimpact. For example, if the power outage is caused by a \nhurricane or an earthquake, support from generator ven-\ndors or the electricity utility could be hard to get because \nof the large demands for their services. BIAs also need \nto take into account historical and local weather priori-\nties. 
Though there could be possibilities of hurricanes \noccurring in California or earthquakes occurring along \nthe Gulf Coast of Florida, for most practical purposes \nthe chances of those disasters occurring in those locales \nare pretty remote. Historical data can be helpful for pri-\noritizing contingencies. \n Once the business impacts are assessed to categorize \ncritical systems, a disaster recovery (DR) plan can be \norganized and tested. The criteria for recovery have two \ntypes of metrics: a recovery point objective (RPO) and a \nrecovery time objective (RTO). \n In the DR plan, the RPO refers to how far back or \n “ back to what point in time ” that backup data has to be \nrecovered. This timeframe generally dictates how often \ntape backups are taken, which can again depend on the \ncriticality of the data. The most common scenario for \nmedium-sized IT shops is daily incremental backups \nand a weekly full backup on tape. Tapes are sometimes \nchanged automatically by the tape backup appliances. \n One important thing to remember is to rotate tapes \n(that is, put them on a life-cycle plan by marking them \nfor expiry) to make sure that tapes have complete data \nintegrity during a restore. Most tape manufacturers have \nmarking schemes for this task. Although tapes are still \nrelatively expensive, the extra amount spent on always \nhaving fresh tapes ensures that there are no nasty sur-\nprises at the time of a crucial data recovery. \n RTO refers to how long it takes to restore backed up \nor recovered data to its original state for resuming nor-\nmal business processes. The critical factor here is cost. It \n Change Management Spreadsheet Details to Submit \nto a CM Meeting \n ● Name and organizational details of the change-\nrequestor \n ● Actual change details, such as the time and duration \nof the change \n ● Any possible impacts (high, low, medium) to signifi-\ncant user groups or critical functions \n ● The amount of advance notice needed for impacted \nusers via email (typically two working days) \n ● Evidence that the change has been tested in advance \n ● Signature and approval of the supervisor and her \nsupervisor (manager) \n ● Whether and how rollback is possible \n ● Post-change, a “ post-mortem tab ” has to confirm \nwhether the change process was successful and any \nrevealing comments or notes for the conclusion \n ● One of the tabs can be an “ attachment tab ” contain-\ning embedded Visio diagrams or word documentation \nembedded within the Excel sheet to aid discussion \n" }, { "page_number": 177, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n144\nwill cost much more to restore data within an hour using \nan online backup process or to resume operations using \na hotsite rather than a five-hour restore using stored tape \nbackups. If business process resumption is critical, cost \nbecomes less a factor. \n DR also has to take into account resumption of com-\nmunication channels. If network and telephone links \naren’t up, having a timely tape restore does little good \nto resume business functions. Extended campus network \nlinks are often dependent on leased lines from major \nvendors such as Verizon and AT & T, so having a trusted \nvendor relationship with agreed-on SLA standards is a \nrequirement. \n Depending on budgets, one can configure DR to \nhappen almost instantly, if so desired, but that is a far \nmore costly option. 
Most shops with “ normal ” data-\nflows are okay with business being resumed within the \nspan of about three to fours hours or even a full work-\ning day after a major disaster. Balancing costs with busi-\nness expectations is the primary factor in the DR game. \nSpending inordinately for a rare disaster that might never \nhappen is a waste of resources. It is fiscally imprudent \n(not to mention futile) to try to prepare for every contin-\ngency possible. \n Once the DR plan is more or less finalized, a DR \ncommittee can be set up under an experienced DR pro-\nfessional to orchestrate the routine training of users and \nmanagers to simulate disasters on a frequent basis. In \nmost shops this means management meeting every two \nmonths to simulate a DR “ war room ” (command center) \nsituation and employees going through a mandatory \ninteractive six-month disaster recovery training, listing \nthe DR personnel to contact. \n Within the command center, roles are preassigned, \nand each member of the team carries out his or her role \nas though it were a real emergency or disaster. DR coor-\ndination is frequently modeled after the U.S. Federal \nEmergency Management Agency (FEMA) guidelines, \nan active entity that has training and certification tracks \nfor DR management professionals. \n There are scheduled simulated “ generator shut-\ndowns ” in most shops on a biweekly or monthly basis \nto see how the systems actually function. The systems \ncan include uninterrupible power supplies (UPSs), emer-\ngency lighting, email and cell phone notification meth-\nods, and alarm enunciators and sirens. Since electronics \nitems in a server room are sensitive to moisture damage, \ngas-based Halon fire-extinguishing systems are used. \nThese Halon systems also have a provision to test them \n(often twice a year) to determine their readiness. The \nvendor will be happy to be on retainer for these tests, \nwhich can be made part of the purchasing agreement as \na service-level agreement (SLA). If equipment is tested \non a regular basis, shortcomings and major hardware \nmaintenance issues with major DR systems can be eas-\nily identified, documented, and redressed. \n In a severe disaster situation, priorities need to be \nexercised on what to salvage first. Clearly, trying to \nrecover employee records, payroll records, and critical \nbusiness mission data such as customer databases will \ntake precedence. Anything irreplaceable or not easily \nreplaceable needs priority attention. \n We can divide the levels of redundancies and back-\nups to a few progressive segments. The level of backup \nsophistication would of course be dependent on (1) \ncriticality and (2) time-to-recovery criteria of the data \ninvolved. \n At the very basic level, we can opt not to back up \nany data or not even have procedures to recover data, \nwhich means that data recovery would be a failure. \nUnderstandably, this is not a common scenario. \n More typical is contracting with an archival company \nof a local warehouse within a 20-mile periphery. Tapes \nare backed up onsite and stored offsite, with the archi-\nval company picking up the tapes from your facility on a \ndaily basis. The time to recover is dependent on retriev-\ning the tapes from archival storage, getting them onsite, \nand starting a restore. The advantages here are lower \ncost. However, the time needed to transport tapes and \nrecover them might not be acceptable, depending on the \ntype of data and the recovery scenario. 
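To put the RPO and RTO terms into a small worked example, the Python sketch below compares a few recovery options using invented figures. The downtime cost, RTO values, and yearly readiness costs are placeholders only, since the real numbers come out of the business impact analysis for a particular shop.

```python
# Illustrative numbers only; real figures come from the business impact analysis (BIA).
backup_interval_hours = 24        # daily incremental backups => worst-case RPO of about 24 hours
downtime_cost_per_hour = 10_000   # hypothetical cost of the business function being down

recovery_options = {
    # option: (RTO in hours, assumed yearly cost to keep the option ready)
    "offsite tape restore":  (5.0, 20_000),
    "online backup restore": (1.0, 80_000),
    "hotsite failover":      (0.25, 250_000),
}

print(f"Worst-case RPO with daily backups: about {backup_interval_hours} hours of lost data")
for option, (rto, yearly_cost) in recovery_options.items():
    outage_cost = rto * downtime_cost_per_hour
    print(f"{option:22s} RTO={rto:>5.2f} h, one-outage downtime cost ~${outage_cost:,.0f}, "
          f"yearly readiness cost ~${yearly_cost:,}")
```

The point of such a comparison is the one made in the text: the shorter the acceptable RTO, the more the standing readiness costs, so the spend has to be justified by the criticality of the data.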
\n Often a “ coldsite ” or “ hotsite ” is added to the intranet \nbackup scenario. A coldsite is a smaller and scaled-down \ncopy of the existing intranet data center that has only \nthe most essential pared-down equipment supplied and \ntested for recovery but not in a perpetually ready state \n(powered down as in “ cold, ” with no live connection). \nThese coldsites can house the basics, such as a Web \nserver, domain name servers, and SQL databases, to get \nan informational site started up in very short order. \n A hotsite is the same thing as a coldsite except that \nin this case the servers are always running and the \nInternet and intranet connections are “ live ” and ready \nto be switched over much more quickly than on a cold-\nsite. These are just two examples of how the business \nresumption and recovery times can be shortened. \n Recovery can be made very rapidly if the hotsite is \nlinked to the regular data center using fast leased-line \nlinks (such as a DS3 connection). Backups synched in \nreal time with identical RAID disks at the hotsite over \nredundant high-speed data links afford the shortest \nrecovery time. \n" }, { "page_number": 178, "text": "Chapter | 9 Intranet Security\n145\n In larger intranet shops based in defense-contractor \ncompanies, there are sometimes requirements for even \nfaster data recovery with far more rigid standards for \ndata integrity. To-the-second real-time data synchroniza-\ntion in addition to hardware synchronization ensures that \nduplicate sites thousands of miles away can be up and \nrunning within a matter of seconds — even faster than a \nhotsite. Such extreme redundancy is typically needed for \ncritical national databases (that is, air traffic control or \ncustoms databases that are accessed 24/7, for example). \n At the highest level of recovery performance, most \nlarge database vendors offer “ zero data loss ” solutions, \nwith a variety of cloned databases synchronized across \nthe country that automatically failover and recover in an \ninstantaneous fashion to preserve a consistent status —\n often free from human intervention. Oracle’s version is \ncalled Data Guard; most mainframe vendors offer a sim-\nilar product varying in tiers and features offered. \n The philosophy here is simple: The more dollars you \nspend, the more readiness you can buy. However, the \nexpense has to be justified by the level of criticality for \nthe availability of the data. \n 9. CONTROLLING HAZARDS: PHYSICAL \nAND ENVIRONMENTAL PROTECTION \n Physical access and environmental hazards are very rel-\nevant to security within the intranet. People are the pri-\nmary weak link in security (as previously discussed), \nand controlling the activity and movement of authorized \npersonnel and preventing access to unauthorized person-\nnel fall within the purview of these security controls. \n This important area of intranet security must first be \nformalized within a management-sanctioned and pub-\nlished P & P. \n Physical access to data center facilities (as well as \nIT working facilities) is typically controlled using card \nreaders. These were scanning types in the last two dec-\nades but are increasingly being converted to near-field or \nproximity-type access card systems. Some high-security \nfacilities (such as bank data centers) use smartcards, \nwhich use encryption keys stored within the cards for \nmatching keys. \n Some important and common-sense topics should \nbe discussed within the subject of physical access. 
First, \ndisbursal of cards needs to be a deliberate and high-\nsecurity affair requiring the signatures of at least two \nsupervisory-level people who can be responsible for the \nauthenticity and actual need for access credentials for a \nperson to specific areas. \n Access-card permissions need to be highly granular. \nAn administrative person will probably never need to \nbe in server room, so that person’s access to the server \nroom should be blocked. Areas should be categorized \nand catalogued by sensitivity and access permissions \ngranted accordingly. \n Physical data transmission access points to the \nintranet have to be monitored via digital video recording \n(DVR) and closed-circuit cameras if possible. Physical \nelectronic eavesdropping can occur to unmonitored net-\nwork access points in both wireline and wireless ways. \nThere have been known instances of thieves intercepting \nLAN communication from unshielded Ethernet cable \n(usually hidden above the plenum or false ceiling for \nlonger runs). All a data thief needs is to place a TAP box \nand a miniature (Wi-Fi) wireless transmitter at entry or \nexit points to the intranet to copy and transmit all com-\nmunications. At the time of this writing, these transmit-\nters are the size of a USB key. The miniaturization of \nelectronics has made data theft possible for part-time \nthieves. Spy-store sites give determined data thieves \nplenty of workable options at relatively little cost. \n Using a DVR solution to monitor and store access \nlogs to sensitive areas and correlating them to the times-\ntamps on the physical access logs can help forensic \ninvestigations in case of a physical data breach, malfea-\nsance, or theft. It is important to remember that DVR \nrecords typically rotate and are erased every week. One \nperson has to be in charge of the DVR so records are \nsaved to optical disks weekly before they are erased. \nDVR tools need some tending to because their sophis-\ntication level often does not come up to par with other \nnetwork tools. \n Written or PC-based sign-in logs must be kept at \nthe front reception desk, with timestamps. Visitor cards \nshould have limited access to private and/or secured \nareas. Visitors must provide official identification, log \ntimes coming in and going out, and names of persons to \nbe visited and the reason for their visit. If possible, visi-\ntors should be escorted to and from the specific person \nto be visited, to minimize the chances of subversion or \nsabotage. \n Entries to courthouses and other special facilities \nhave metal detectors but these may not be needed for \nevery facility. The same goes for bollards and concrete \nentry barriers to prevent car bombings. In most gov-\nernment facilities where security is paramount, even \nphysical entry points to parking garages have special \npersonnel (usually deputed from the local sheriff’s \ndepartment) to check under cars for hidden explosive \ndevices. \n" }, { "page_number": 179, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n146\n Contractor laptops must be registered and physi-\ncally checked in by field support personnel, and if these \nlaptops are going to be plugged into the local network, \nthe laptops need to be virus-scanned by data-security \npersonnel and checked for unauthorized utilities or sus-\npicious software (such as hacking utilities, Napster, or \nother P2P threats). 
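The correlation of physical access logs with DVR footage mentioned earlier in this section can be sketched in a few lines of Python. The record layouts, timestamps, and camera names below are invented for the illustration; actual exports from a card-access system and a DVR index would differ.

```python
from datetime import datetime, timedelta

# Hypothetical exports: badge events from the card-access system and a DVR clip index.
badge_events = [
    ("2009-03-14 02:17", "jdoe",   "server room"),
    ("2009-03-14 09:05", "asmith", "server room"),
]
dvr_clips = [  # (clip start, clip end, camera name)
    ("2009-03-14 02:00", "2009-03-14 03:00", "cam-serverroom-door"),
    ("2009-03-14 08:00", "2009-03-14 09:00", "cam-serverroom-door"),
]

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

def clips_for_event(event_time, clips, slack=timedelta(minutes=5)):
    """Return DVR clips whose recording window covers the badge swipe, plus a small slack."""
    return [c for c in clips
            if parse(c[0]) - slack <= event_time <= parse(c[1]) + slack]

for ts, person, door in badge_events:
    matches = clips_for_event(parse(ts), dvr_clips)
    cams = ", ".join(c[2] for c in matches) or "no footage indexed"
    print(f"{ts} {person} -> {door}: review {cams}")
```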
\n Supply of emergency power to the data center and \nthe servers has to be robust to protect the intranet from \ncorruption due to power failures. Redundancy has to be \nexercised all the way from the utility connection to the \nservers themselves. This means there has to be more than \none power connection to the data center (from more than \none substation/transformer, if it is a larger data center). \nThere has to be provision of alternate power supply (a \nready generator to supply some, if not all, power require-\nments) in case of failure of utility power to the facility. \n Power supplied to the servers has to come from \nmore than one single UPS because most servers have \ntwo removable power inputs. Data center racks typically \nhave two UPSs on the bottom supplying power to two \nseparate power strips on both sides of the rack for this \nredundancy purpose (for seamless switchover). In case \nof a power failure, the UPSs instantly take over the sup-\nply of power and start beeping, alerting personnel to \ngracefully shut down servers. UPSs usually have reserve \npower for brief periods (less than 10 minutes) until the \ngenerator kicks in, relieving the UPS of the large burden \nof the server power loads. Generators come on trailers \nor are skid-mounted and are designed to run as long as \nthere is fuel available in the tank, which can be about \nthree to five days, depending on the model and capacity \nto generate (in thousands of kilowatts). \n Increasingly, expensive polluting batteries have made \nUPSs in larger datacenters fall out of favor compared \nto flywheel power supplies, which is a cleaner, battery-\nless technology to supply interim power. Maintenance of \nthis technology is half as costly as UPS and it offers the \nsame functionality. 19 \n There has to be provision for rechargeable emer-\ngency luminaires within the server room as well as all \nareas occupied by administrators, so entry and exit are \nnot hampered during a power failure. \n Provision for fire detection and firefighting must also be \nmade. As mentioned previously, Halon gas fire-suppression \nsystems are appropriate for server rooms because sprinklers \nwill inevitably damage expensive servers if the servers are \nstill turned on during sprinkler activation. \n Sensors have to be placed close to the ground to \ndetect moisture from plumbing disasters and result-\nant flooding. Master shutoff valve locations for water \nhave to be marked and identified and personnel trained \non performing shutoffs periodically. Complete environ-\nmental control packages with cameras geared toward \ndetecting any type of temperature, moisture, and sound \nabnormality are offered by many vendors. These sensors \nare connected to monitoring workstations using Ethernet \nLAN cabling. Reporting can occur through emails if \ncustomizable thresholds are met or exceeded. \n 10. KNOW YOUR USERS: PERSONNEL \nSECURITY \n Users working within intranet-related infrastructures have \nto be known and trusted. Often data contained within \nthe intranet is highly sensitive, such as new product \ndesigns and financial or market-intelligence data gathered \nafter much research and at great expense. \n Assigning personnel to sensitive areas in IT entails \nattaching security categories and parameters to the posi-\ntions, especially within IT. Attaching security parameters to \na position is akin to attaching tags to a photograph or blog. 
\nSome parameters will be more important than others, but \nall describe the item to some extent. The categories and \nparameters listed on the personnel access form should cor-\nrelate to access permissions to sensitive installations such \nas server rooms. Access permissions should be compliant \nto the organizational security policy in force at the time. \n Personnel, especially those who will be handling sen-\nsitive customer data or individually identifiable health \nrecords, should be screened before hiring to ensure that they \ndo not have felonies or misdemeanors on their records. \n During transfers and terminations, all sensitive access \ntools should be reassessed and reassigned (or deassigned, in \ncase of termination) for logical and physical access. Access \ntools can include such items as encryption tokens, company \ncell phones, laptops or PDAs, card keys, metal keys, entry \npasses, and any other company identification provided for \nemployment. For people who are leaving the organization, \nan exit interview should be taken. System access should be \nterminated on the hour after former personnel have ceased \nto be employees of the company. \n 11. PROTECTING DATA FLOW: \nINFORMATION AND SYSTEM INTEGRITY \n Information integrity protects information and data flows \nwhile they are in movement to and from the users ’ \n 19 Flywheel energy storage, Wikipedia.com, http://en.wikipedia.org/\nwiki/Flywheel_energy_storage . \n" }, { "page_number": 180, "text": "Chapter | 9 Intranet Security\n147\ndesktops to the intranet. System integrity measures pro-\ntect the systems that process the information (usually \nservers such as email or file servers). The processes to \nprotect information can include antivirus tools, IPS and \nIDS tools, Web-filtering tools, and email encryption \ntools. \n Antivirus tools are the most common security \ntools available to protect servers and users ’ desktops. \nTypically, enterprise-level antivirus software from larger \nvendors such as Symantec or McAfee will contain a con-\nsole listing all machines on the network and will enable \nthe administrators to see graphically (color or icon dif-\nferentiation) which machines need virus remediation \nor updates. All machines will have a software client \ninstalled that does some scanning and reporting of the \nindividual machines to the console. To save bandwidth, \nthe management server that contains the console will be \nupdated with the latest virus (and spyware) definition \nfrom the vendor. Then it is the management console’s \njob to slowly update the software client in each compu-\nter with the latest definitions. Sometimes the client itself \nwill need an update, and the console allows this to be \ndone remotely. \n IDS used to detect malware within the network from \nthe traffic and communication malware used. There are \ncertain patterns of behavior attached to each type of \nmalware, and those signatures are what IDSs are used \nto match. IDSs are mostly defunct nowadays. The major \nproblems with IDSs were that (1) IDSs used to produce \ntoo many false positives, which made sifting out actual \nthreats a huge, frustrating exercise, and (2) IDSs had no \nteeth, that is, their functionality was limited to reporting \nand raising alarms. IDS devices could not stop malware \nfrom spreading because they could not block it. 
\n Compared to IDSs, IPSs have seen much wider \nadoption across corporate intranets because IPS devices \nsit inline processing traffic at the periphery and they can \nblock traffic or malware, depending on a much more \nsophisticated heuristic algorithm than IDS devices. \nAlthough IPS are all mostly signature based, there are \nalready experimental IPS devices that can stop threats, \nnot on signature, but based only on suspicious or anoma-\nlous behavior. This is good news because the numbers of \n “ zero-day ” threats are on the increase, and their signa-\ntures are mostly unknown to the security vendors at the \ntime of infection. \n Web-filtering tools have gotten more sophisticated \nas well. Ten years ago Web filters could only block traf-\nfic to specific sites if the URL matched. Nowadays most \nWeb filter vendors have large research arms that try to \ncategorize specific Web sites under certain categories. \nSome vendors have realized the enormity of this task \nand have allowed the general public to contribute to \nthis effort. The Web site www.trustedsource.org is an \nexample; a person can go in and submit a single or mul-\ntiple URLs for categorization. If they’re examined and \napproved, the site category will then be added to the ven-\ndor’s next signature update for their Web filter solution. \n Web filters not only match URLs, they do a fair bit of \npacket-examining too these days — just to make sure that \na JPEG frame is indeed a JPEG frame and not a worm in \ndisguise. The categories of Web sites blocked by a typical \nmidsized intranet vary, but some surefire blocked catego-\nries would be pornography, erotic sites, discrimination/\nhate, weapons/illegal activities, and dating/relationships. \n Web filters are not just there to enforce the moral \nvalues of management. These categories — if not blocked \nat work — openly enable an employee to offend another \nemployee (especially pornography or discrimina-\ntory sites) and are fertile grounds for a liability lawsuit \nagainst the employer. \n Finally, email encryption has been in the news \nbecause of the various mandates such as Sarbanes-Oxley \nand HIPAA. Both mandates specifically mention email \nor communication encryption to encrypt personally iden-\ntifiable financial or patient medical data while in transit. \nLately the state of California (among other states) has \nadopted a resolution to discontinue fund disbursements \nto any California health organization that does not use \nemail encryption as a matter of practice. This has caught \nquite a few California companies and local government \nentities unaware because email encryption software is \nrelatively hard to implement. The toughest challenge yet \nis to train users to get used to the tool. \n Email encryption works by entering a set of creden-\ntials to access the email rather than just getting email \npushed to the user, as within the email client Outlook. \n 12. SECURITY ASSESSMENTS \n A security assessment (usually done on a yearly basis \nfor most midsized shops) not only uncovers various mis-\nconfigured items on the network and server-side sections \nof IT operations, it serves as a convenient blueprint for \nIT to activate necessary changes and get credibility for \nbudgetary assistance from the accounting folks. \n Typically most consultants take two to four weeks to \nconduct a security assessment (depending on the size of \nthe intranet) and they primarily use open-source vulner-\nability scanners such as Nessus. 
GFI LANguard, Retina, \nand Core Impact are other examples of commercial \nvulnerability-testing tools. Sometimes testers also use \n" }, { "page_number": 181, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n148\nother proprietary suites of tools (special open-source tools \nlike the Metasploit Framework or Fragrouter) to conduct \n “ payload-bearing attack exploits, ” thereby evading the \nfirewall and the IPS to gain entry. In the case of intranet \nWeb servers, cross-site scripting attacks can occur (see \nsidebar, “ Types of Scans Conducted on Servers and \nNetwork Appliances During a Security Assessment ” ).\n Types of Scans Conducted on Servers and Network \nAppliances During a Security Assessment \n ● Firewalls and IPS devices configuration \n ● Regular and SSL VPN configuration \n ● Web server hardening (most critical; available as guides \nfrom vendors such as Microsoft) \n ● DMZ configuration \n ● Email vulnerabilities \n ● DNS server anomalies \n ● Database servers (hardening levels) \n ● Network design and access control vulnerabilities \n ● Internal PC health such as patching levels and \nincidence of spyware, malware, and so on \n The results of these penetration tests are usually \ncompiled as two separate items: (1) as a full-fledged \ntechnical report for IT and (2) as a high-level executive \nsummary meant for and delivered to top management to \ndiscuss strategy with IT after the engagement. \n 13. RISK ASSESSMENTS \n Risk is defined as the probability of loss. In IT terms \nwe’re talking about compromising data CIA (confiden-\ntiality, integrity, or availability). Risk management is a \nway to manage the probability of threats causing an \nimpact. Measuring risks using a risk assessment exer-\ncise is the first step toward managing or mitigating a \nrisk. Risk assessments can identify network threats, their \nprobabilities, and their impacts. The reduction of risk \ncan be achieved by reducing any of these three factors. \n Regarding intranet risks and threats, we’re talk-\ning about anything from threats such as unpatched PCs \ngetting viruses and spyware (with hidden keylogging \nsoftware) to network-borne denial-of-service attacks \nand even large, publicly embarrassing Web vandal-\nism threats, such as someone being able to deface the \nmain page of the company Web site. The last is a very \nhigh-impact threat but mostly perceived to be a remote \nprobability — unless, of course, the company has experi-\nenced this before. The awareness among vendors as well \nas users regarding security is at an all-time high due to \nsecurity being a high-profile news item. \n Any security threat assessment needs to explore and \nlist exploitable vulnerabilities and gaps. Many mid-\nsized IT shops run specific vulnerability assessment \n(VA) tools in-house on a monthly basis. eEye’s Retina \nNetwork Security Scanner and Foundstone’s scanning \ntools appliance are two examples of VA tools that can be \nfound in use at larger IT shops. These tools are consoli-\ndated on ready-to-run appliances that are usually man-\naged through remote browser-based consoles. Once the \ngaps are identified and quantified, steps can be taken to \ngradually mitigate these vulnerabilities, minimizing the \nimpact of threats. \n In intranet risk assessments, we identify primarily \nWeb server and database threats residing within the \nintranet, but we should also be mindful about the periph-\nery to guard against breaches through the firewall or IPS. \n 14. 
CONCLUSION \n It is true that the level of Internet hyperconnectiv-\nity among generation X and Y users has mushroomed \nlately, and the network periphery that we used to take \nfor granted as a security shield has been diminished, \nto a large extent, because of the explosive growth of \nsocial networking and the resulting connectivity boom. \nHowever, with the various new types of incoming appli-\ncation traffic (VoIP, SIP, and XML traffic) to their net-\nworks, security administrators need to stay on their toes \nand deal with these new protocols by implementing \nnewer tools and technology. One recent example of new \ntechnology is the application-level firewall for connect-\ning outside vendors to intranets (also known as an XML \nfirewall, placed within a DMZ) that protects the intranet \nfrom malformed XML and SOAP message exploits com-\ning from outside sourced applications. 20 \n In conclusion, we can say that with the myriad secu-\nrity issues facing intranets today, most IT shops are \nstill well equipped to defend themselves if they assess \nrisks and, most important, train their employees regard-\ning data security practices on an ongoing basis. The \nproblems with threat mitigation remain largely a matter \nof meeting gaps in procedural controls rather than tech-\nnical measures. Trained and security-aware employees \nare the biggest deterrent to data thefts and security \nbreaches. \n 20 Latest standard (version 1.1) for SOAP message security standard\nfrom OASIS, a consortium for Web Services Security, www.oasis-\nopen.org/committees/download.php/16790/wss-v1.1-spec-os-\nSOAPMessageSecurity.pdf . \n" }, { "page_number": 182, "text": "149\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Local Area Network Security \n Dr. Pramod Pandya \n California State University \n Chapter 10 \n Securing available resources on any corporate or aca-\ndemic data network is of paramount importance because \nmost of these networks connect to the Internet for com-\nmercial or research activities. Therefore, the network is \nunder attack from hackers on a continual basis, so net-\nwork security technologies are ever evolving and playing \ncatch-up with hackers. Around 20 years ago the number \nof potential users was small and the scope of any activity \non the network was limited to local networks only. As the \nInternet expanded in its reach across national boundaries \nand as the number of users increased, potential risk to the \nnetwork grew exponentially. Over the past 10 years ecom-\nmerce-related activities such as online shopping, bank-\ning, stock trading, and social networking have permeated \nextensively, creating a dilemma for both service providers \nand their potential clients, as to who is a trusted service \nprovider and a trusted client on the network. Of course, \nthis being a daunting task for security professionals, they \nhave needed to design security policies appropriate for \nboth the servers and their clients. The security policy must \nbe a factor in clients ’ level of access to the resources. So, \nin whom do we place trust, and how much trust? Current \nnetwork designs implement three levels of trust: most \ntrusted, less trusted, and least trusted. Figure 10.1 reflects \nthese levels of trust, as described here: \n ● The most trusted users belong to the intranet . These \nusers have to authenticate to a centralize administrator \nto access the resources on the network. 
● The less trusted users may originate from the intranet as well as from external users who are authenticated to access resources such as email and Web services.
● The least trusted users are the unauthenticated users; most of them are simply browsing the resources on the Internet with no malice intended. Of course, some are scanning the resources with the intent to break in and steal data.

These are the objectives of network security:

● Confidentiality. Only authorized users have access to the network.
● Integrity. Data cannot be modified by unauthorized users.
● Access. Security must be designed so that authorized users have uninterrupted access to data.

FIGURE 10.1 The DMZ.

Finally, the responsibility for the design and implementation of network security rests with the chief information officer (CIO) of the enterprise network. The CIO has a pool of network administrators and legal advisers to help with this task. The network administrators define the placing of the network access controls, and the legal advisers underline the consequences and liabilities in the event of network security breaches. We have seen cases of customer records such as credit-card numbers, Social Security numbers, and personal information being stolen. The frequency of these reports has been on the increase in the past years, and consequently this has led to a discussion of the merits of encrypting stored data. One of the most frequently cited legal requirements on the part of any business, whether small or big, is the protection of consumer data under the Health Insurance Portability and Accountability Act (HIPAA), which restricts disclosure of health-related data and personal information.

1. IDENTIFY NETWORK THREATS

Network security threats fall into one of two categories: (1) disruptive or (2) unauthorized access.

Disruptive

Most LANs are designed as collapsed backbone networks using a layer-2 or layer-3 switch. If a switch or a router were to fail due to a power failure, a segment or the entire network may cease to function until the power is restored. In some cases, the network failure may be due to a virus attack on the secondary storage, leading to loss of data.

Unauthorized Access

This access type can be internal (an employee) or external (an intruder): a person who attempts to break into resources such as database, file, email, or Web servers that they have no permission to access. Banks, financial institutions, major corporations, and major retail businesses employ data networks to process customer transactions and store customer information and any other relevant data. Before the birth of the Internet Age, interinstitutional transactions were secure because the networks were not accessible to intruders or the general public. In the past 10 years, access to the Internet has become almost universal; hence institutional data networks have become the target of frequent intruder attacks aimed at stealing customer records. One frequently reads in the news about data network security being compromised by hackers, leading to loss of credit card and debit card numbers, Social Security numbers, drivers' license numbers, and other sensitive information such as purchasing profiles.
Over the years, although \nnetwork security has increased, the frequency of attacks \non the networks has also increased because the tools to \nbreach the network security have become freely available \non the Internet. In 1988 the U.S. Department of Defense \nestablished the Computer Emergency Response Team \n(CERT), whose mission is to work with the Internet com-\nmunity to prevent and respond to computer and network \nsecurity breaches. Since the Internet is widely used for \ncommercial activities by all kinds of businesses, the fed-\neral government has enacted stiffer penalties for hackers. \n 2. ESTABLISH NETWORK ACCESS \nCONTROLS \n In this section we outline steps necessary to secure net-\nworks through network controls. These network controls \nare either software or hardware based and are imple-\nmented in a hierarchical structure to reflect the network \norganization. This hierarchy is superimposed on the net-\nwork from the network’s perimeter to the access level \nper user of the network resources. The functions of the \nnetwork control are to detect an unauthorized access, \nto prevent network security from being breached, and \nfinally, to respond to a breach — thus the three categories \nof detect, prevent, and respond. \n The role of prevention control is to stop unauthorized \naccess to any resource available on the network. This \ncould be implemented as simply as a password required \nto authenticate the user to access the resource on the \nnetwork. For an authorized user this password can grant \nlogin to the network to access the services of a database, \nfile, Web, print, or email server. The network adminis-\ntrator would need a password to access the switch or a \nrouter. The prevention control in this case is software \nbased. An analog of hardware-based control would be, \nfor example, if the resources such as server computers, \nswitches, and routers are locked in a network access \ncontrol room. \n The role of the detection control is to monitor the \nactivities on the network and identify an event or a set of \nevents that could breach network security. Such an event \nmay be a virus, spyware, or adware attack. The detec-\ntion control software must, besides registering the attack, \ngenerate or trigger an alarm to notify of an unusual event \nso that a corrective action can be taken immediately, \nwithout compromising security. \n The role of the response control is to take corrective \naction whenever network security is breached so that the \nsame kind of breach is detected and any further damage \nis prevented. \n" }, { "page_number": 184, "text": "Chapter | 10 Local Area Network Security\n151\n 3. RISK ASSESSMENT \n During the initial design phase of a network, the net-\nwork architects assess the types of risks to the network \nas well as the costs of recovering from attacks for all \nthe resources that have been compromised. These cost \nfactors can be realized using well-established account-\ning procedures such as cost/benefit analysis, return on \ninvestment (ROI), and total cost of ownership (TCO). \nThese risks could range from natural disaster to an \nattack by a hacker. Therefore, you need to develop lev-\nels of risks to various threats to the network. You need \nto design some sort of spreadsheet that lists risks versus \nthreats as well as responses to those identified threats. \nOf course, the spreadsheet would also mark the placing \nof the network access controls to secure the network. \n 4. 
LISTING NETWORK RESOURCES \n We need to identify the assets (resources) that are avail-\nable on the corporate network. Of course, this list could \nbe long, and it would make no sense to protect all the \nresources, except for those that are mission-critical to the \nbusiness functions. Table 10.1 identifies mission-critical \ncomponents of any enterprise network. You will observe \nthat these mission-critical components need to be priori-\ntized, since they do not all provide the same functions. \nSome resources provide controlled access to a network; \nother resources carry sensitive corporate data. Hence the \nthreats posed to these resources do not carry the same \ndegree of vulnerabilities to the network. Therefore, the \nnetwork access control has to be articulated and applied \nto each of the components listed, in varying degrees. \nFor example, threats to DNS server pose a different set \nof problems from threats to the database servers. In the \nnext section we itemize the threats to these resources \nand specific network access controls. \n 5. THREATS \n We need to identify the threats posed to the network from \ninternal users as opposed to those from external users. \nThe reason for such a distinction is that the internal users \nare easily traceable, compared to the external users. If a \nthreat to the data network is successful, and it could lead \nto loss or theft of data or denial of access to the services \noffered by the network, it would lead to monetary loss for \nthe corporation. Once we have identified these threats, we \ncan rank them from most probable to least probable and \ndesign network security policy to reflect that ranking. \n From Table 10.2 , we observe that most frequent \nthreats to the network are from viruses, and we have \nseen a rapid explosion in antivirus, antispamware, and \nspyware and adware software. Hijacking of resources \nsuch as domain name services, Web services, and perim-\neter routers would lead to what’s most famously known \nas denial of service (DoS) or distributed denial of serv-\nice (DDoS). Power failures can always be complemented \nby standby power supplies that could keep the essential \nresources from crashing. Natural disasters such as fire, \nfloods, or earthquakes can be most difficult to plan for; \ntherefore we see a tremendous growth in data protection \nand backup service provider businesses. \n 6. SECURITY POLICIES \n The fundamental goals of security policy are to allow unin-\nterrupted access to the network resources for authenticated \n TABLE 10.1 Mission-Critical Components of Any Enterprise Network \n Threats \n Fire, Flood, Earthquake \n Power Failure \n Spam \n Virus \n Spyware, Adware \n Hijacking \n Resources \n \n \n \n \n \n \n Perimeter Router \n \n \n \n \n \n \n DNS Server \n \n \n \n \n \n \n WEB Server \n \n \n \n \n \n \n Email Server \n \n \n \n \n \n \n Core Switches \n \n \n \n \n \n \n Databases \n \n \n \n \n \n \n" }, { "page_number": 185, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n152\n 7. THE INCIDENT-HANDLING PROCESS \n The incident-handling process is the most important \ntask of a security policy for the reason that you would \nnot want to shut down the network in case of a network \nsecurity breach. The purpose of the network is to share \nthe resources; therefore an efficient procedure must be \ndeveloped to respond to a breach. 
If news of the network \nsecurity breach becomes public, the corporation’s busi-\nness practices could become compromised, thus result-\ning in compromise of its business operations. Therefore \nset procedures must be developed jointly with the busi-\nness operations manager and the chief information \nofficer. This calls for a modular design of the enterprise \nnetwork so that its segments can be shut down in an \norderly way, without causing panic. \n Toward this end, we need a set of tools to monitor \nactivities on the network — we need an intrusion detec-\ntion and prevention system. These pieces of software will \nmonitor network activities and log and report an activity \nthat does not conform to usual or acceptable standards as \ndefined by the software. Once an activity is detected and \nlogged, response is activated. It is not merely sufficient \nto respond to an incident; the network administrator also \nhas to activate tools to trace back to the source of this \nbreach. This is critical so that the network administrator \ncan update the security procedures to make sure that this \nparticular incident does not take place. \n 8. SECURE DESIGN THROUGH \nNETWORK ACCESS CONTROLS \n A network is as secure as its weakest link in the over-\nall design. To secure it, we need to identify the entry \nand exit points into the network. Since most data net-\nworks have computational nodes to process data and \nstorage nodes to store data, stored data may need to be \nencrypted so that if network security is breached, stolen \ndata may still remain confidential unless the encryption \nis broken. As we hear of cases of stolen data from either \nhacked networks or stolen nodes, encrypting data while \nit’s being stored appears to be necessary to secure data. \n The entry point to any network is a perimeter router, \nwhich sits between the external firewall and the Internet; \nthis model is applicable to most enterprise networks that \nengage in some sort of ecommerce activities. Hence our \nfirst network access control is to define security policy \non the perimeter router by configuring the appropriate \nparameters on the router. The perimeter router will filter \ntraffic based on the range of IP addresses. \nusers and to deny access to unauthenticated users. Of \ncourse, this is always a balancing act between the users ’ \ndemands on network resources and the evolutionary \nnature of information technology. The user community \nwould prefer open access, whereas the network admin-\nistrator insists on restricted and monitored access to the \nnetwork. \n The hacker is, in the final analysis, the arbitrator of \nthe network security policy, since it is always the unau-\nthorized user who discovers the potential flaw in the \nsoftware. Hence, any network is as secure as the last \nattack that breached its security. It would be totally unre-\nalistic to expect a secured network at all times, once it \nis built and secured. Therefore, network security design \nand its implementation represent the ultimate battle of \nthe minds between the chief information security officer \n(CISO) and the devil, the hacker. We can summarize that \nthe network security policy can be as simple as to allow \naccess to resources, or it can be several hundred pages \nlong, detailing the levels of access and punishment if a \nbreach is discovered. Most corporate network users now \nhave to sign onto the usage policy of the network and \nare reminded that security breaches are a punishable \noffence. 
\n The critical functions of a good security policy are: \n ● Appoint a security administrator who is conversant \nwith users ’ demands and on a continual basis is pre-\npared to accommodate the user community’s needs. \n ● Set up a hierarchical security policy to reflect the \ncorporate structure. \n ● Define ethical Internet access capabilities. \n ● Evolve the remote access policy. \n ● Provide a set of incident-handling procedures. \n TABLE 10.2 The Most Frequent Threats to the \nNetwork Are from Viruses \n Rank \n Threat\n 1 \n Virus \n 2 \n Spam \n 3 \n Spyware, Adware \n 4 \n Hijacking \n 5 \n Power Failure \n 6 \n Fire, Flood, Earthquake \n" }, { "page_number": 186, "text": "Chapter | 10 Local Area Network Security\n153\n Next in the line of defense is the external firewall that \nfilters traffic based on the state of the network connection. \nAdditionally, the firewall could also check the contents of \nthe traffic packet against the nature of the Transmission \nControl Protocol (TCP) connection requested. Following \nthe firewall we have the so-called demilitarized zone, or \nDMZ, where we would place the following servers: Web, \nDNS, and email. We could harden these servers so that \npotential threatening traffic can be identified and appro-\npriate incident response generated. \n The DMZ is placed between two firewalls, so our \nlast line of defense is the next firewall that would inspect \nthe traffic and possibly filter out the potential threat. The \nnodes that are placed on the intranet can be protected by \ncommercially available antivirus software. Last but not \nleast, we could install on the network an intrusion detec-\ntion and prevention system that will generate real-time \nresponse to a threat. \n Next we address each of the network control access \npoints. The traditional network design includes an access \nlayer, a distribution layer, and the core layer. In the case of \na local area network (LAN) we will use the access and dis-\ntribution layers; the core layer would simply be our perim-\neter router that we discussed earlier in this section. Thus \nthe LAN will consist of a number of segments reflecting \nthe organizational structure. The segments could sit behind \ntheir firewall to protect one another as well, in case of net-\nwork breach; segments under attack can be isolated, thus \npreventing a cascade-style attack on the network. \n 9. IDS DEFINED \n An intrusion detection system, or IDS, can be both soft-\nware and hardware based. IDSs listen to all the activities \ntaking place on both the computer (node on a network) \nand the network itself. One could think of an IDS as \nlike traffic police, whose function is to monitor the data \npacket traffic and detect and identify those data packets \nthat match predefined unusual pattern of activities. An \nIDS can also be programmed to teach itself from its past \nactivities to refine the rules for unusual activities. This \nshould not come as a surprise, since the hackers also get \nsmarter over time. \n As we stated, the IDS collects information from a \nvariety of system and network resources, but in actual-\nity it captures packets of data as defined by the TCP/IP \nprotocol stack. In this sense IDS is both a sniffer and \nanalyzer software. IDS in its sniffer role would either \ncapture all the data packets or select ones as specified \nby the configuration script. 
This configuration script is a set of rules that tell the analyzer what to look for in a captured data packet and then, per those rules, make an educated guess and generate an alert. Of course, this could lead to four possible outcomes with regard to intrusion detection: false positive, false negative, true positive, or true negative. We address this topic in more detail later in the chapter.

An IDS performs a variety of functions:

● Monitor and analyze user and system activities
● Verify the integrity of data files
● Audit system configuration files
● Recognize patterns of activity that reflect known attacks
● Perform statistical analysis of undefined activity patterns

An IDS is capable of distinguishing different types of network traffic, such as an HTTP request over port 80 from some other application such as SMTP being run over port 80. We see here that an IDS understands which TCP/IP applications run over which preassigned port numbers, and therefore falsifying port numbers would be trivially detectable. This is a very easy illustration, but there are more complex attacks that are not that easy to identify, and we shall cover them later in this chapter.

The objective of intrusion detection software packages is to make possible the complex and sometimes virtually impossible task of managing system security. With this in mind, it is worth mentioning two industrial-grade IDS software packages: Snort (NIDS), which runs on both Linux and Windows, and GFI LANguard S.E.L.M., a host intrusion detection system (HIDS), which runs on Windows only. Commercial-grade IDS software is designed with user-friendly interfaces that make it easy to configure the scripts that lay down the rules for intrusion detection.

Next let's examine some critical functions of an IDS:

● Can add a greater degree of flexibility to the security infrastructure of the network
● Can monitor the functionality of routers, including firewalls, key servers, and critical switches
● Can help in resolving audit trails, often exposing problems before they lead to loss of data
● Can trace user activity from the network point of entry to the point of exit
● Can report on file integrity checks
● Can detect whether a system has been reconfigured by an attack
● Can recognize a potential attack and generate an alert
● Can make possible security management of a network by nonexpert staff

10. NIDS: SCOPE AND LIMITATIONS

Network-based IDS (NIDS) sensors scan network packets at the router or host level, auditing data packets and logging any suspicious packets to a log file. Figure 10.2 is an example of a NIDS. The data packets are captured by a sniffer program, which is part of the IDS software package. The node on which the IDS software is enabled runs in promiscuous mode. In promiscuous mode, the NIDS node captures all the data packets on the network, as defined by the configuration script. NIDSs have become a critical component of network security management as the number of nodes on the Internet has grown exponentially over the last few years. Some of the common malicious attacks on networks are:

● IP address spoofing
● MAC address spoofing
● ARP cache poisoning
● DNS name corruption

11.
A PRACTICAL ILLUSTRATION OF NIDS \n In this section, we illustrate the use of Snort as an example \nof a NIDS. The signature files are kept in the directory \nsignatures under the directory .doc. Signature files are used \nto match defined signature against a pattern of bytes in the \ndata packets, to identify a potential attack. Files marked as \nrules in the rules directory are used to trigger an alarm and \nwrite to the file alert.ids. Snort is installed on a node with \nIP address 192.168.1.22. The security auditing software \nNmap is installed on a node with IP address 192.168.1.20. \nNmap software is capable of generating ping sweeps, TCP \nSYN (half-open) scanning, TCP connect() scanning, and \nmuch more. Figure 10.2 has a node labeled NIDS (behind \nthe Linksys router) on which Snort would be installed. \nOne of the workstations would run Nmap software. \n UDP Attacks \n A UDP attack is generated from a node with IP address \n192.168.1.20 to a node with IP address 192.168.1.22. \nSnort is used to detect a possible attack. Snort’s detect \nengine uses one of the files in DOS under directory rules \nto generate the alert file alert.ids. We display a partial \nlisting (see Listing 10.1 ) of the alert.ids file. \n Listing 10.2 shows a partial listing of DOS rules file. \nThe rules stated in the DOS rules file are used to generate \nthe alert.ids file. \n FIGURE 10.2 An example of a network-based intrusion detection system. \n" }, { "page_number": 188, "text": "Chapter | 10 Local Area Network Security\n155\n TCP SYN (Half-Open) Scanning \n This technique is often referred to as half-open scanning \nbecause you don’t open a full TCP connection. You send \na SYN packet, as though you were going to open a real \nconnection, and wait for a response. A SYN | ACK indi-\ncates that the port is listening. An RST is indicative of a \nnonlistener. If a SYN | ACK is received, you immediately \nsend an RST to tear down the connection (actually, the \nkernel does this for you). The primary advantage of this \n[**] [1:0:0] DOS Teardrop attack [**]\n[Priority: 0] \n01/26-11:37:10.667833 192.168.1.20:1631 -> 192.168.1.22:21\nUDP TTL:128 TOS:0x0 ID:60940 IpLen:20 DgmLen:69\nLen: 41\n[**] [1:0:0] DOS Teardrop attack [**]\n[Priority: 0] \n01/26-11:37:10.668460 192.168.1.20:1631 -> 192.168.1.22:21\nUDP TTL:128 TOS:0x0 ID:60940 IpLen:20 DgmLen:69\nLen: 41\n[**] [1:0:0] DOS Teardrop attack [**]\n[Priority: 0] \n01/26-11:37:11.667926 192.168.1.20:1631 -> 192.168.1.22:21\nUDP TTL:128 TOS:0x0 ID:60941 IpLen:20 DgmLen:69\nLen: 41\n[**] [1:0:0] DOS Teardrop attack [**]\n[Priority: 0] \n01/26-11:37:11.669424 192.168.1.20:1631 -> 192.168.1.22:21\nUDP TTL:128 TOS:0x0 ID:60941 IpLen:20 DgmLen:69\nLen: 41\n[**] [1:0:0] DOS Teardrop attack [**]\n[Priority: 0] \n01/26-11:37:12.669316 192.168.1.20:1631 -> 192.168.1.22:21\nUDP TTL:128 TOS:0x0 ID:60942 IpLen:20 DgmLen:69\nLen: 41\n LISTING 10.1 An alert.ids file. \n# (C) Copyright 2001, Martin Roesch, Brian Caswell, et al. 
All rights reserved.\n# $Id: dos.rules,v 1.30.2.1 2004/01/20 21:31:38 jh8 Exp $\n#----------\n# DOS RULES\n#----------\nalert ip $EXTERNAL_NET any -> $HOME_NET any (msg:\"DOS Jolt attack\";\nfragbits: M; dsize:408; reference:cve,CAN-1999-0345;\nclasstype:attempted-dos; sid:268; rev:2;)\nalert udp $EXTERNAL_NET any -> $HOME_NET any (msg:\"DOS Teardrop attack\";\nid:242; fragbits:M; reference:cve,CAN-1999-0015;\nreference:url,www.cert.org/advisories/CA-1997-28.html;\nreference:bugtraq,124; classtype:attempted-dos; sid:270; rev:2;)\nalert udp any 19 <> any 7 (msg:\"DOS UDP echo+chargen bomb\";\nreference:cve,CAN-1999-0635; reference:cve,CVE-1999-0103;\nclasstype:attempted-dos; sid:271; rev:3;)\nalert ip $EXTERNAL_NET any -> $HOME_NET any (msg:\"DOS IGMP dos attack\";\ncontent:\"|02 00|\"; depth: 2; ip_proto: 2; fragbits: M+;\nreference:cve,CVE-1999-0918; classtype:attempted-dos; sid:272; rev:2;)\nalert ip $EXTERNAL_NET any -> $HOME_NET any (msg:\"DOS IGMP dos attack\";\ncontent:\"|00 00|\"; depth:2; ip_proto:2; fragbits:M+; reference:cve,CVE-\n1999-0918; classtype:attempted-dos; sid:273; rev:2;)\nalert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:\"DOS ath\";\ncontent:\"+++ath\"; nocase; itype: 8; reference:cve,CAN-1999-1228;\nreference:arachnids,264; classtype:attempted-dos; sid:274; rev:2;)\n LISTING 10.2 The DOS rules file. \n" }, { "page_number": 189, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n156\nscanning technique is that fewer sites will log it! SYN \nscanning is the -s option of Nmap. \n A SYN attack is generated using Nmap software \nfrom a node with IP address 192.168.1.20 to a node \nwith IP address 192.168.1.22. Snort is used to detect \nfor a possible attack. Snort’s detect engine uses scan \nand ICMP rules files under directory rules to generate \nthe alert file alert.ids. A partial listing of alert.ids file is \nshown in Listing 10.3 . \n A partial listing of the scan rules appears in Listing \n10.4 . \n Listing 10.5 contains a partial listing of the ICMP \nrules. \n The following points must be noted about NIDS: \n ● One NIDS is installed per LAN (Ethernet) segment. \n ● Place NIDS on the auxiliary port on the switch and then \nlink all the ports on the switch to that auxiliary port. \n ● When the network is saturated with traffic, the NIDS \nmight drop packets and thus create a potential “ hole. ” \n ● If the data packets are encrypted, the usefulness of an \nIDS is questionable. \n Some Not-So-Robust Features of NIDS \n Network security is a complex issue with myriad pos-\nsibilities and difficulties. In networks, security is also a \nweakest link phenomenon, since it takes vulnerability on \none node to allow a hacker to gain access to a network \nand thus create chaos on the network. Therefore IDS \nproducts are vulnerable. \n An IDS cannot compensate for weak identification \nand authentication. Hence you must rely on other means \nof identification and authentication of users. This is best \nimplemented by token-based or biometric schemes and \none-time passwords. \n An IDS cannot conduct investigations of attacks with-\nout human intervention. Therefore when an incident does \noccur, steps must be defined to handle the incident. The \nincident must be followed up to determine the respon-\nsible party, then the vulnerability that allowed the prob-\nlem to occur should be diagnosed and corrected. 
You \nwill observe that an IDS is not capable of identifying the \nattacker, only the IP address of the node that served as \nthe hacker’s point of entry. \n[**] [1:469:1] ICMP PING NMAP [**]\n[Classification: Attempted Information Leak] [Priority: 2] \n01/24-19:28:24.774381 192.168.1.20 -> 192.168.1.22\nICMP TTL:44 TOS:0x0 ID:29746 IpLen:20 DgmLen:28\nType:8 Code:0 ID:35844 Seq:45940 ECHO\n[Xref => http://www.whitehats.com/info/IDS162]\n[**] [1:469:1] ICMP PING NMAP [**]\n[Classification: Attempted Information Leak] [Priority: 2] \n01/24-19:28:24.775879 192.168.1.20 -> 192.168.1.22\nICMP TTL:44 TOS:0x0 ID:29746 IpLen:20 DgmLen:28\nType:8 Code:0 ID:35844 Seq:45940 ECHO\n[Xref => http://www.whitehats.com/info/IDS162]\n[**] [1:620:6] SCAN Proxy Port 8080 attempt [**]\n[Classification: Attempted Information Leak] [Priority: 2] \n01/24-19:28:42.023770 192.168.1.20:51530 -> 192.168.1.22:8080\nTCP TTL:50 TOS:0x0 ID:53819 IpLen:20 DgmLen:40\n******S* Seq: 0x94D68C2 Ack: 0x0 Win: 0xC00 TcpLen: 20\n[**] [1:620:6] SCAN Proxy Port 8080 attempt [**]\n[Classification: Attempted Information Leak] [Priority: 2] \n01/24-19:28:42.083817 192.168.1.20:51530 -> 192.168.1.22:8080\nTCP TTL:50 TOS:0x0 ID:53819 IpLen:20 DgmLen:40\n******S* Seq: 0x94D68C2 Ack: 0x0 Win: 0xC00 TcpLen: 20\n[**] [1:615:5] SCAN SOCKS Proxy attempt [**]\n[Classification: Attempted Information Leak] [Priority: 2] \n01/24-19:28:43.414083 192.168.1.20:51530 -> 192.168.1.22:1080\nTCP TTL:59 TOS:0x0 ID:62752 IpLen:20 DgmLen:40\n******S* Seq: 0x94D68C2 Ack: 0x0 Win: 0x1000 TcpLen: 20\n[Xref => http://help.undernet.org/proxyscan/]\n LISTING 10.3 Alert.ids file. \n" }, { "page_number": 190, "text": "Chapter | 10 Local Area Network Security\n157\n An IDS cannot compensate for weaknesses in network \nprotocols. IP and MAC address spoofing is a very common \nform of attack in which the source IP or MAC address \ndoes not correspond to the real source IP or MAC \naddress of the hacker. Spoofed addresses can be mimicked \nto generate DDoS attacks. \n An IDS cannot compensate for problems in the integrity \nof information the system provides. Many hacker tools \ntarget system logs, selectively erasing records correspond-\ning to the time of the attack and thus covering the hacker’s \ntracks. This calls for redundant information sources. \n An IDS cannot analyze all the traffic on a busy net-\nwork. A network-based IDS in promiscuous mode can \ncapture all the data packets, and as the traffic level raises, \nNIDS can reach a saturation point and begin to lose data \npackets. \n An IDS cannot always deal with problems involv-\ning packet-level attacks. The vulnerabilities lie in the \ndifference between IDS interpretation of the outcome \nof a network transaction and the destination node for \nthat network session’s actual handling of the transac-\ntion. Therefore, a hacker can send a series of fragmented \n# (C) Copyright 2001,2002, Martin Roesch, Brian Caswell, et al.\n# All rights reserved.\n# $Id: scan.rules,v 1.21.2.1 2004/01/20 21:31:38 jh8 Exp $\n#-----------\n# SCAN RULES\n#-----------\n# These signatures are representitive of network scanners. These include\n# port scanning, ip mapping, and various application scanners.\n#\n# NOTE: This does NOT include web scanners such as whisker. 
Those are\n# in web*\n#\nalert tcp $EXTERNAL_NET 10101 -> $HOME_NET any (msg:\"SCAN myscan\";\nstateless; ttl: >220; ack: 0; flags: S;reference:arachnids,439;\nclasstype:attempted-recon; sid:613; rev:2;)\nalert tcp $EXTERNAL_NET any -> $HOME_NET 113 (msg:\"SCAN ident version\nrequest\"; flow:to_server,established; content: \"VERSION|0A|\"; depth:\n16;reference:arachnids,303; classtype:attempted-recon; sid:616; rev:3;)\nalert tcp $EXTERNAL_NET any -> $HOME_NET 80 (msg:\"SCAN cybercop os\nprobe\"; stateless; flags: SF12; dsize: 0; reference:arachnids,146;\nclasstype:attempted-recon; sid:619; rev:2;)\nalert tcp $EXTERNAL_NET any -> $HOME_NET 3128 (msg:\"SCAN Squid Proxy\nattempt\"; stateless; flags:S,12; classtype:attempted-recon; sid:618;\nrev:5;)\nalert tcp $EXTERNAL_NET any -> $HOME_NET 1080 (msg:\"SCAN SOCKS Proxy\nattempt\"; stateless; flags:S,12;\nreference:url,help.undernet.org/proxyscan/; classtype:attempted-recon;\nsid:615; rev:5;)\nalert tcp $EXTERNAL_NET any -> $HOME_NET 8080 (msg:\"SCAN Proxy Port 8080\nattempt\"; stateless; flags:S,12; classtype:attempted-recon; sid:620;\nrev:6;)\nalert tcp $EXTERNAL_NET any -> $HOME_NET any (msg:\"SCAN FIN\"; stateless;\nflags:F,12; reference:arachnids,27; classtype:attempted-recon; sid:621;\nrev:3;)\nalert tcp $EXTERNAL_NET any -> $HOME_NET any (msg:\"SCAN ipEye SYN scan\";\nflags:S; stateless; seq:1958810375; reference:arachnids,236;\nclasstype:attempted-recon; sid:622; rev:3;)\nalert tcp $EXTERNAL_NET any -> $HOME_NET any (msg:\"SCAN NULL\";\nstateless; flags:0; seq:0; ack:0; reference:arachnids,4;\nclasstype:attempted-recon; sid:623; rev:2;)\n LISTING 10.4 Scan rules. \n" }, { "page_number": 191, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n158\npackets that elude detection and can also launch attacks \non the destination node. Even worse, the hacker can lead \nto DoS on the IDS itself. \n An IDS has problems dealing with fragmented data \npackets. Hackers would normally use fragmentation to \nconfuse the IDS and thus launch an attack. \n 12. FIREWALLS \n A firewall is either a single node or a set of nodes that \nenforce an access policy between two networks. Firewall \ntechnology evolved to protect the intranet from unauthorized \nusers on the Internet. This was the case in the earlier years \nof corporate networks. Since then, the network administra-\ntors have realized that networks can also be attacked from \ntrusted users as well as, for example, the employee of a \ncompany. The corporate network consists of hundreds of \nnodes per department and thus aggregates to over a thou-\nsand or more, and now there is a need to protect data in \neach department from other departments. Hence, a need \nfor internal firewalls arose to protect data from unauthor-\nized access, even if they are employees of the corporation. \n# (C) Copyright 2001,2002, Martin Roesch, Brian Caswell, et al.\n# All rights reserved.\n# $Id: icmp.rules,v 1.19 2003/10/20 15:03:09 chrisgreen Exp $\n#-----------\n# ICMP RULES\n#-----------\n#\n# Description:\n# These rules are potentially bad ICMP traffic. 
They include most of the\n# ICMP scanning tools and other \"BAD\" ICMP traffic (Such as redirect\nhost)\n#\n# Other ICMP rules are included in icmp-info.rules\nalert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:\"ICMP ISS Pinger\";\ncontent:\"|495353504e475251|\";itype:8;depth:32; reference:arachnids,158;\nclasstype:attempted-recon; sid:465; rev:1;)\nalert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:\"ICMP L3retriever\nPing\"; content: \"ABCDEFGHIJKLMNOPQRSTUVWABCDEFGHI\"; itype: 8; icode: 0;\ndepth: 32; reference:arachnids,311; classtype:attempted-recon; sid:466;\nrev:1;)\nalert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:\"ICMP Nemesis v1.1\nEcho\"; dsize: 20; itype: 8; icmp_id: 0; icmp_seq: 0; content:\n\"|0000000000000000000000000000000000000000|\"; reference:arachnids,449;\nclasstype:attempted-recon; sid:467; rev:1;)\nalert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:\"ICMP PING NMAP\";\ndsize: 0; itype: 8; reference:arachnids,162; classtype:attempted-recon;\nsid:469; rev:1;)\nalert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:\"ICMP icmpenum\nv1.1.1\"; id: 666; dsize: 0; itype: 8; icmp_id: 666 ; icmp_seq: 0;\nreference:arachnids,450; classtype:attempted-recon; sid:471; rev:1;)\nalert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:\"ICMP redirect\nhost\";itype:5;icode:1; reference:arachnids,135; reference:cve,CVE-1999-\n0265; classtype:bad-unknown; sid:472; rev:1;)\nalert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:\"ICMP redirect\nnet\";itype:5;icode:0; reference:arachnids,199; reference:cve,CVE-1999-\n0265; classtype:bad-unknown; sid:473; rev:1;)\nalert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:\"ICMP superscan\necho\"; content:\"|0000000000000000|\"; itype: 8; dsize:8;\nclasstype:attempted-recon; sid:474; rev:1;)\nalert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:\"ICMP traceroute\nipopts\"; ipopts: rr; itype: 0; reference:arachnids,238; classtype\n LISTING 10.5 ICMP rules. \n" }, { "page_number": 192, "text": "Chapter | 10 Local Area Network Security\n159\nThis need has, over the years, led to design of segmented \nIP networks, such that internal firewalls would form barri-\ners within barriers, to restrict a potential break-in to an IP \nsegment rather than expose the entire corporate network to \na hacker. For this reason, network security has grown into \na multibillion-dollar business. \n Almost every intranet, whether of one node or many \nnodes, is always connected to the Internet, and thus \na potential number of hackers wait to attack it. Thus \nevery intranet is an IP network, with TCP- and UDP-\nbased applications running over it. The design of TCP \nand UDP protocols require that every client/server appli-\ncation interacts with other client/server applications \nthrough TCP and UDP port numbers. As we stated ear-\nlier, these TCP and UDP port numbers are well known \nand hence give rise to a necessary weakness in the net-\nwork. TCP and UDP port numbers open up “ holes ” in \nthe networks by their very design. Every Internet and \nintranet point of entry has to be guarded, and you must \nmonitor the traffic (data packets) that enter and leave the \nnetwork. \n A firewall is a combination of hardware and software \ntechnology, namely a sort of sentry, waiting at the points \nof entry and exit to look out for an unauthorized data \npacket trying to gain access to the network. The network \nadministrator, with the help of other IT staff, must first \nidentify the resources and the sensitive data that need to \nbe protected from the hackers. 
Once this task has been accomplished, the next task is to identify who would have access to these identified resources and the data. We should point out that most of the networks in any corporation are never designed and built from scratch but are added onto an existing network as the demand for networking grows with the growth of the business. So the design of the network security policy has multilayered facets as well.

Once the network security policy is defined and understood, we can identify the proper placement of the firewalls in relation to the resources on the network. Hence, the next step would be to actually place the firewalls in the network as nodes. The network security policy now defines access to the network, as implemented in the firewall. These access rights to the network resources are based on the characteristics of the TCP/IP protocols and the TCP/UDP port numbers.

Firewall Security Policy

The firewall enables the network administrator to centralize access control to the campuswide network. A firewall logs every packet that enters and leaves the network. The network security policy implemented in the firewall provides several types of protection, including the following:

● Block unwanted traffic
● Direct incoming traffic to more trustworthy internal nodes
● Hide vulnerable nodes that cannot easily be secured from external threats
● Log traffic to and from the network

A firewall is transparent to authorized users (both internal and external), whereas it is not transparent to unauthorized users. However, if an authorized user attempts to access a service that is not permitted to that user, a denial of that service will be echoed, and the attempt will be logged.

Firewalls can be configured in a number of architectures, providing various levels of security at different installation and operating costs. Figure 10.2 is an example of a design termed a screened subnet. In this design, the internal network is a private IP network, so the resources on that network are completely hidden from users who are external to that network, such as users from the Internet. In an earlier chapter we talked about public versus private IP addresses. The IP community has agreed that nodes with private IP addresses will not be accessible from outside that network. Any number of corporations may use the same private IP network address without creating packets with duplicated IP addresses. This feature of IP networks, namely private IP networks, adds to network security. In Figure 10.2, we used a Linksys router to support a private IP network (192.168.1.0) implementation. For the nodes on the 192.168.1.0 network to access the resources on the Internet, the Linksys router has to translate the private IP address of the data packet to a public IP address. In our scenario, the Linksys router would map the address of a node on the 192.168.1.0 network to an address on the public network, 200.100.70.0. This feature is known as Network Address Translation (NAT), which is enabled on the Linksys router. You can see in Figure 10.2 that the Linksys router demarcates the internal (IN) network from the external (OUT) network.

We illustrate an example of network address translation in Listing 10.6. The script configures a Cisco router that translates an internal private IP address to a public IP address.
Of course, configuring a \nLinksys router is much simpler using a Web client. An \nexplanation of the commands and their details follow the \nscript. \n" }, { "page_number": 193, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n160\n Configuration Script for sf Router \n The access-list command creates an entry in a standard \ntraffic filter list: \n ● Access-list “ access-list-number ” permit | deny source \n[source-mask] \n ● Access-list number: identifies the list to which the \nentry belongs; a number from 1 to 99 \n ● Permit | deny: this entry allows or blocks traffic from \nthe specified address \n ● Source: identifies source IP address \n ● Source-mask: identifies the bits in the address field \nthat are matched; it has a 1 in position indicating “ don’t \ncare ” bits, and a 0 in any position that is to be strictly \nfollowed \n The IP access-group command links an existing access \nlist to an outbound interface. Only one access list per port, \nper protocol, and per direction is allowed. \n ● Access-list-number: indicates the number of the \naccess list to be linked to this interface \n ● In | out: selects whether the access list is applied to the \nincoming or outgoing interface; out is the default \n NAT is a feature that operates on a border router \nbetween an inside private addressing scheme and an \noutside public addressing scheme. The inside private \naddress is 192.168.1.0 and the outside public address \nis chosen to be 200.100.70.0. Equivalently, we have an \nintranet on the inside and the Internet on the outside. \n 13. DYNAMIC NAT CONFIGURATION \n First a NAT pool is configured from which outside addresses \nare allocated to the requesting inside hosts: IP NAT pool \n “ pool name ” “ start outside IP address ” “ finish outside IP \naddress. ” Next the access-list is defined to determine which \ninside networks are translated by the NAT router: access-list \n “ unique access-list number ” permit | deny “ inside IP net-\nwork address. ” Finally the NAT pool and the access list are \ncorrelated: \n ● IP NAT inside source list “ unique access list \nnumber ” pool “ pool name ” \n ● Enable the NAT on each interface of the NAT router \n ● IP NAT inside \u0002 \u0002 \u0002 \u0002 \u0002 \u0002 \u0002 \u0002 \u0002 \u0002 \u0002 \u0002 \u0002 \u0002 \u0002 \u0002 IP \nNAT outside \n You will note that only one interface may be config-\nured as outside, yet multiple interfaces may be configured \nas inside, with regard to Static NAT configuration: \n ● IP NAT inside source static “ inside IP address ” “ out-\nside IP address ” \n ● IP NAT inside source static 192.168.1.100 \n200.100.70.99 \n 14. THE PERIMETER \n In Figure 10.3 , you will see yet another IP network \nlabeled demilitarized zone (DMZ). You may ask, why \nyet another network? The rationale behind this design is \nas follows. \n The users that belong to IN might want to access the \nresources on the Internet, such as read their email and \nsend email to the users on the Internet. The corporation \nneeds to advertise its products on the Internet. \n The DMZ is the perimeter network, where resources \nhave public IP addresses, so they are seen and heard on \nthe Internet. The resources such as the Web (HTTP), email \n(SMTP), and domain name server (DNS) are placed in \nthe DMZ, whereas the rest of the resources that belong to \nthis corporation are completely hidden behind the Linksys \nrouter. 
The resources in the DMZ can be attacked by the \nhacker because they are open to users on the Internet. The \nrelevant TCP and UDP port numbers on the servers in the \nDMZ have to be left open to the incoming and outgoing \nip nat pool net-sf 200.100.70.50 200.100.70.60 netmask 255.255.255.0\nip nat inside source list 1 pool net-sf\n!\ninterface Ethernet0\n ip address 192.168.1.1 255.255.255.0 \n ip nat inside\n!\ninterface Ethernet1\n ip address 200.100.70.20 255.255.255.0\n ip nat outside\naccess-list 1 deny 192.168.1.0 0.0.0.255\n LISTING 10.6 Network Address Translation (NAT). \n" }, { "page_number": 194, "text": "Chapter | 10 Local Area Network Security\n161\ntraffic. Does this create a potential “ hole ” in the corporate \nnetwork? The answer to this is both yes and no. Someone \ncan compromise the resources in the DMZ without the \nentire network being exposed to a potential attack. \n The first firewall is the Cisco router, and it is the first \nline of defense, were network security policy imple-\nmented. On the Cisco router it is known as the Access \nControl List (ACL). This firewall will allow external \ntraffic to inbound TCP port 80 on the Web server, TCP \nport 25 on the email server, and TCP and UDP port 53 \non the DNS server. The external traffic to the rest of the \nports will be denied and logged. \n The second line of defense is the Linksys router that \nwill have well-known ports closed to external traffic. It \ntoo will monitor and log the traffic. It is acceptable to \nplace email and the Web server behind the Linksys \nrouter on the private IP network address. Then you will \nhave to open up the TCP ports 80 and 25 on the Linksys \nrouter so that the external traffic can be mapped to ports \n80 and 25, respectively. This would slow down the traf-\nfic because the Linksys router (or any commercial-grade \nrouter) would have to constantly map the port numbers \nback and forth. Finally, the DNS server would always \nneed to be placed in the DMZ with a public IP address, \n FIGURE 10.3 An illustrative firewall design. \n" }, { "page_number": 195, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n162\nsince it will be used to resolve domain names by both \ninternal and external users. This decision has to be left \nto the corporate IT staff. \n 15. ACCESS LIST DETAILS \n The Cisco router in Figure 10.3 can be configured with \nthe following access list to define network security pol-\nicy. Building an access list in the configuration script of \nthe router does not activate the list unless it is applied to \nan interface. “ ip access-group 101 in ” applies the access-\nlist 101 to the serial interface of the router. Some of the \naccess-list commands are explained here. For more infor-\nmation on Cisco access-list commands, visit the Cisco \nWeb site (www.cisco.com): \n ● ip access-group group no. { in | out } : default is out \n ● What is the group number? \n ● The group number is the number that appears in the \naccess-list command line \n ● What is { in | out } ? \n ● In implies that the packet enters the router’s interface \nfrom the network \n ● Out implies that the packet leaves the router’s inter-\nface to the network \n All TCP packets are IP packets, but all IP packets \nare not TCP packets. Therefore, entries matching on IP \npackets are more generic than matching on TCP, UDP, or \nICMP packets. 
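A related use of such generic ip entries, not shown in the sample entries that follow, is anti-spoofing: packets arriving from the Internet should never carry the corporation's own DMZ or private address blocks as their source address, so entries such as the following sketch (using the address blocks from the figures in this chapter) can drop and log them outright:

    access-list 101 deny ip 200.100.70.0 0.0.0.255 any log
    access-list 101 deny ip 192.168.1.0 0.0.0.255 any log

Because, as explained next, entries are evaluated in order, filters like these normally sit ahead of the permit entries in the list.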
Each entry in the access list is interpreted \n(see Listing 10.7 ) from top to bottom for each packet \non the specified interface. Once a match is reached, \nthe remaining access-list entries are ignored. Hence, \nthe order of entries in an access list is very critical, and \ntherefore more specific entries should appear earlier on. \n This permits TCP from any host to any host if the \nACK or RST bit is set, which indicates that it is part of an \nestablished connection. You will note that in a TCP Full \nConnect, the first packet from the source node does not \nhave the ACK bit set. The keyword established is meant to \nprevent an untrusted user from initiating a connection while \nallowing packets that are part of already established TCP \nconnections to go through: \n ● Access-list 101 permit udp any gt 1023 host \n200.100.70.10 eq 53 \n ● Permit UDP protocol from any host with port greater \nthan 1023 to the DNS server at port 53 \n ● Access-list 101 permit ip any host 200.100.70.12 \n ● Permit IP from any host to 200.100.70.12 \n or \n ● Access-list 101 permit TCP any 200.100.70.12 eq 80 \n ● Permit any host to engage with our HTTP server on \nport 80 only \n ● Access-list 101 permit icmp any echo-reply \n ● Permit ICMP from any host to any host if the packet \nis in response to a ping request \n ● Access-list 101 deny ip any any \n The last access-list command is implicit (that is, not \nexplicitly stated). The action of this last access-list is to \ndeny all other packets. \n 16. TYPES OF FIREWALLS \n Conceptually, there are three types of firewalls: \n ● Packet filtering . Permit packets to enter or leave the \nnetwork through the interface on the router on the \nbasis of protocol, IP address, and port numbers. \n ● Application-layer firewall. A proxy server that acts \nas an intermediate host between the source and the \ndestination nodes. \n ● Stateful-inspection layer. Validates the packet on the \nbasis of its content. \n 17. PACKET FILTERING: IP FILTERING \nROUTERS \n An IP packet-filtering router permits or denies the packet \nto either enter or leave the network through the interface \n(incoming and outgoing) on the basis of the protocol, IP \naddress, and the port number. The protocol may be TCP, \nUDP, HTTP, SMTP, or FTP. The IP address under con-\nsideration would be both the source and the destination \naddresses of the nodes. The port numbers would corre-\nspond to the well-known port numbers. The packet-fil-\ntering firewall has to examine every packet and make a \ndecision on the basis of defined ACL; additionally it will \nlog the following guarded attacks on the network: \n ● A hacker will attempt to send IP spoofed packets \nusing raw sockets (we will discuss more about usage \nof raw sockets in the next chapters) \n ● Log attempted network scanning for open TCP and \nUDP ports — NIDS will carry out this detective work \nin more detail \ninterface serial0\nip address 210.100.70.2\nip access-group 101 in\n!\naccess-list 101 permit tcp any any established\n LISTING 10.7 Access-list configuration script. \n" }, { "page_number": 196, "text": "Chapter | 10 Local Area Network Security\n163\n ● SYN attacks using TCP connect(), and TCP half \nopen \n ● Fragment attacks \n 18. APPLICATION-LAYER FIREWALLS: \nPROXY SERVERS \n These are proxy servers that act as an intermediary host \nbetween the source and the destination nodes. 
Each \nof the sources would have to set up a session with the \nproxy server, then the proxy server would set up a ses-\nsion with the destination node. The packets would have \nto flow through the proxy server. There are examples of \nWeb and FTP proxy servers on the Internet. The proxy \nservers would also have to be used by the internal users, \nthat is, the traffic from the internal users will have to \nrun through the proxy server to the outside network. Of \ncourse, this slows the flow of packets, but you must pay \nthe price for added network security. \n 19. STATEFUL INSPECTION FIREWALLS \n In here the firewall will examine the contents of the pack-\nets before permitting them to either enter or leave the net-\nwork. The contents of the packets must conform with the \nprotocol declared with the packet. For example, if the pro-\ntocol declared is HTTP, the contents of the packet must be \nconsistent with the HTTP packet definition. \n 20. NIDS COMPLEMENTS FIREWALLS \n A firewall acts as a barrier, if so designed, among vari-\nous IP network segments. Firewalls may be defined \namong IP intranet segments to protect resources. In any \ncorporate network, there will always be more than one \nfirewall because an intruder could be one of the author-\nized network users. Hence the following points should \nbe noted: \n ● Not all threats originate outside the firewall. \n ● The most trusted users are also the potential \nintruders. \n ● Firewalls themselves may be subject to attack. \n Since the firewall sits at the boundary of the IP net-\nwork segments, it can only monitor the traffic entering \nand leaving the interface on the firewall that connects \nto the network. If the intruder is internal to the firewall, \nthe firewall will not be able to detect the security breach. \nOnce an intruder has managed to transit through the inter-\nface of the firewall, the intruder would go undetected, \nwhich could possibly lead to stealing sensitive infor-\nmation, destroying information, leaving behind viruses, \nstaging attacks on other networks, and most important, \nleaving spyware software to monitor the activities on the \nnetwork for future attacks. Hence, a NIDS would play a \ncritical role in monitoring activities on the network and \ncontinually looking for possible anomalous patterns of \nactivities. \n Firewall technology has been around for the past 20 \nyears, so much has been documented about its weak-\nnesses and strengths. Information about firewalls is freely \navailable on the Internet. Hence a new breed of hackers \nhave utilized tunneling as a means of bypassing firewall \nsecurity policy. NIDS enhances security infrastructure by \nmonitoring system activities for signs of attack and then, \nbased on the system settings, responds to the attack as \nwell as generates an alarm. Response to a potential attack \nis known as the incident response or incident handling , \nwhich combines investigation and diagnosis phases. \nIncident response has been an emerging technology in the \npast couple of years and is now an integral part of intru-\nsion detection and prevention technology. \n Finally, but not least, securing network systems is an \nongoing process in which new threats arise all the time. \nConsequently, firewalls, NIDS, and intrusion prevention \nsystems are continuously evolving technologies. In this \nchapter and subsequent chapters our focus has been and \nwill be wired networks. 
However, as wireless data net-\nworks proliferate and seamlessly connect to the cellular \nvoice networks, the risk of attacks on the wired networks \nis growing exponentially. \n 21. MONITOR AND ANALYZE SYSTEM \nACTIVITIES \n Figure 10.1 shows the placement of a NIDS, one in the \nDMZ and the other in the private network. This sug-\ngests at least two points on the network from which we \ncapture data packets. The next question is the timing of \nthe information collection, although this depends on the \ndegree of threat perceived to the network. \n If the level of perceived threat to the network is low, \nan immediate response to the attack is not very criti-\ncal. In such a case, interval-oriented data capturing and \nanalysis is most economical in terms of load placed on a \nNIDS and other resources on the network. Additionally, \nthere might not be full-time network security personnel \nto respond to an alarm triggered by the NIDS. \n If the level of perceived threat is imminent and the \ntime and the data are mission-critical to the organization, \n" }, { "page_number": 197, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n164\nreal-time data gathering and analysis are of extreme \nimportance. Of course, the real-time data gathering would \nimpact the CPU cycles on the NIDS and would lead to a \nmassive amount of data storage. With real-time data cap-\nturing and analysis, real-time response to an attack can be \nautomated with notification. In such a case, network activ-\nities can be interrupted, the incident could be isolated, and \nsystem and network recovery could be set in motion. \n Analysis Levels \n Capturing and storing data packets are among the man-\nageable functions of any IDS. How do we analyze the \ndata packets that represent potential or imminent threats \nto the network? \n We need to examine the data packets and look for \nevidence that could point to a threat. Let’s examine the \nmakeup of data packets. Of course, any packet is almost \nencapsulated by successive protocols from the Internet \nmodel, with the data as its kernel. Potential attacks could \nbe generated by IP or MAC spoofing, fragmented IP pack-\nets leading to some sort of DoS, saturating the resource \nwith flooding, and much more. We should remind read-\ners that since humans are not going to examine the data \npackets, this process of examination is relegated to an \nalgorithm. This algorithm must compare the packets with \na known format of the packet (signature) that suggests an \nattack is in progress, or it could be that there is some sort \nof unusual activity on the network. How does one distin-\nguish abnormal from normal sets of activities? There must \nbe some baseline (statistical) that indicates normal, and \ndeviation from it would be an indicator of abnormal. We \nexplore these concepts in the following paragraphs. \n We can identify two levels of analysis: signature and \nstatistical. \n 22. SIGNATURE ANALYSIS \n Signature analysis includes some sort of pattern match-\ning of the contents of the data packets. There are patterns \ncorresponding to known attacks. These known attacks are \nstored in a database, and a pattern is examined against the \nknown pattern, which defines signature analysis. Most \ncommercial NIDS products perform signature analy-\nsis against a database of known attacks, which is part of \nthe NIDS software. 
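As a minimal sketch of such signature matching (the rule entries below are invented for illustration and are not drawn from any vendor or Snort database), each signature pairs a destination port with a byte pattern, and an alert is raised when a packet bound for that port carries the pattern.

# Hypothetical signature database: (name, destination port, byte pattern).
SIGNATURES = [
    ("fictitious-exploit", 3333, b"psuw"),
    ("old-cgi-probe", 80, b"/cgi-bin/phf"),
]

def check_packet(dst_port, payload):
    """Return the names of all signatures that match a single packet."""
    return [
        name
        for name, port, pattern in SIGNATURES
        if dst_port == port and pattern in payload
    ]

# Example: a packet bound for port 3333 carrying the fictitious string.
print(check_packet(3333, b"GET /index.html psuw"))   # ['fictitious-exploit']

Widening the detector's coverage is then simply a matter of appending entries to the list.
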
Even though the databases of known \nattacks may be proprietary to the vendor, the client of this \nsoftware should be able to increase the scope of the NIDS \nsoftware by adding signatures to the database. Snort is \nopen-source NIDS software, and the database of known \nattacks is maintained and updated by the user community. \nThis database is an ASCII (human-readable) file. \n 23. STATISTICAL ANALYSIS \n First we have to define what constitutes a normal traffic \npattern on the network. Then we must identify deviations \naway from normal patterns as potential threats. These \ndeviations must be arrived at by statistical analysis of \nthe traffic patterns. A good example would be how many \ntimes records are written to a database over a given time \ninterval, and deviations from normally accepted numbers \nwould be an indication of an impending attack. Of course, \na clever hacker could mislead the detector into accepting \nattack activity as normal by gradually varying behavior \nover time. This would be an example of a false negative. \n 24. SIGNATURE ALGORITHMS \n Signature analysis is based on these algorithms: \n ● Pattern matching \n ● Stateful pattern matching \n ● Protocol decode-based analysis \n ● Heuristic-based analysis \n ● Anomaly-based analysis \n Pattern Matching \n Pattern matching is based on searching for a fixed \nsequence of bytes in a single packet. In most cases the \npattern is matched against only if the suspect packet is \nassociated with a particular service or, more precisely, \ndestined to and from a particular port. This helps to \nreduce the number of packets that must get examined \nand thus speed up the process of detection. However, it \ntends to make it more difficult for systems to deal with \nprotocols that do not live on well-defined ports. \n The structure of a signature based on the simple pat-\ntern-matching approach might be as follows: First, the \npacket is IPv4 and TCP, the destination port is 3333, and \nthe payload contains the fictitious string psuw , trigger an \nalarm. In this example, the pattern psuw is what we were \nsearching for, and one of the IDS rules implies to trigger \nan alarm. One could do a variation on this example to set \nup more convoluted data packets. The advantage of this \nsimple algorithm is: \n ● This method allows for direct correlation of an \nexploit with the pattern; it is highly specific. \n ● This method is applicable across all protocols. \n" }, { "page_number": 198, "text": "Chapter | 10 Local Area Network Security\n165\n ● This method reliably alerts on the pattern matched. \n The disadvantages of this pattern-matching approach \nare as follows: \n ● Any modification to the attack can lead to missed \nevents (false negatives). \n ● This method can lead to high false-positive rates if \nthe pattern is not as unique as the signature writer \nassumed. \n ● This method is usually limited to inspection of a \nsingle packet and, therefore, does not apply well \nto the stream-based nature of network traffic such \nas HTTP traffic. This scenario leads to easily \nimplemented evasion techniques. \n Stateful Pattern Matching \n This method of signature development adds to the pattern-\nmatching concept because a network stream comprises \nmore than a single atomic packet. Matches should be \nmade in context within the state of the stream. 
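A minimal sketch of the difference (with an invented three-byte signature and a crude per-connection buffer; production sensors also track TCP sequence numbers and bound their memory use) appends each packet's payload to the stream's buffer and searches the buffer rather than the lone packet, so a pattern split across two packets is still caught.

# Hypothetical stateful matcher: buffer each TCP stream and search across
# packet boundaries rather than within a single packet.
PATTERN = b"gpo"          # invented signature string
streams = {}              # connection id -> reassembled bytes

def on_packet(conn_id, payload):
    buf = streams.get(conn_id, b"") + payload
    streams[conn_id] = buf[-4096:]           # keep only a bounded tail
    return PATTERN in buf

conn = ("10.0.0.5", "10.0.0.9", 40000, 3333)
print(on_packet(conn, b"...g"))     # False: only part of the pattern seen so far
print(on_packet(conn, b"po..."))    # True: the match completes across packets
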
This means \nthat systems that perform this type of signature analysis \nmust consider arrival order of packets in a TCP stream and \nshould handle matching patterns across packet boundaries. \nThis is somewhat similar to a stateful firewall. \n Now, instead of looking for the pattern in every packet, \nthe system has to begin to maintain state information on the \nTCP stream being monitored. To understand the difference, \nconsider the following scenario. Suppose that the attack \nyou are looking for is launched from a client connecting to \na server and you have the pattern-match method deployed \non the IDS. If the attack is launched so that in any given \nsingle TCP packet bound for the target on port 3333 the \nstring is present, this event triggers the alarm. If, however, \nthe attacker causes the offending string to be sent such that \nthe fictitious gp is in the first packet sent to the server and \n o is in the second, the alarm does not get triggered. If the \nstateful pattern-matching algorithm is deployed instead, the \nsensor has stored the gp portion of the string and is able to \ncomplete the match when the client forwards the fictitious p .\nThe advantages of this technique are as follows: \n ● This method allows for direct correlation of an \nexploit with the pattern. \n ● This method is applicable across all protocols. \n ● This method makes evasion slightly more difficult. \n ● This method reliably alerts on the pattern specified. \n The disadvantages of the stateful pattern matching-\nbased analysis are as follows: \n ● Any modification to the attack can lead to missed \nevents (false negatives). \n ● This method can lead to high false-positive rates if \nthe pattern is not as unique as the signature writer \nassumed. \n Protocol Decode-based Analysis \n In many ways, intelligent extensions to stateful pattern \nmatches are protocol decode-based signatures. This class \nof signature is implemented by decoding the various ele-\nments in the same manner as the client or server in the \nconversation would. When the elements of the proto-\ncol are identified, the IDS applies rules defined by the \nrequest for comments (RFCs) to look for violations. In \nsome instances, these violations are found with pattern \nmatches within a specific protocol field, and some require \nmore advanced techniques that account for such variables \nas the length of a field or the number of arguments. \n Consider the fictitious example of the gwb attack \nfor illustration purposes. Suppose that the base pro-\ntocol that the attack is being run over is the fictitious \nOBL protocol, and more specifically, assume that the \nattack requires that the illegal fictitious argument gpp \nmust be passed in the OBL Type field. To further com-\nplicate the situation, assume that the Type field is pre-\nceded by a field of variable length called OBL Options. \nThe valid list of fictitious options are gppi, nppi, upsnfs , \nand cvjmep . Using the simple or the stateful pattern-\nmatching algorithm in this case leads to false positives \nbecause the option gppi contains the pattern that is being \nsearched for. In addition, because the field lengths are \nvariable, it would be impossible to limit such false posi-\ntives by specifying search start and stop locations. The \nonly way to be certain that gpp is being passed in as the \nOBL type argument is to fully decode the protocol. 
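To make the contrast concrete, the sketch below decodes an invented OBL-like layout: a one-byte options length, the variable-length Options field, then a four-byte Type field. The layout is made up purely for illustration. Because the Type field is extracted before any comparison is made, the benign option gppi can no longer trigger the rule.

# Hypothetical decode of an OBL-like header (layout invented for illustration).
def obl_type_is_illegal(payload):
    opt_len = payload[0]
    options = payload[1:1 + opt_len]
    type_field = payload[1 + opt_len:1 + opt_len + 4]
    # Only the decoded Type field is compared, not the raw byte stream.
    return type_field.startswith(b"gpp")

# "gppi" appearing in the Options field alone does not raise an alert...
print(obl_type_is_illegal(bytes([4]) + b"gppi" + b"okay"))     # False
# ...but "gpp" actually passed as the Type argument does.
print(obl_type_is_illegal(bytes([4]) + b"nppi" + b"gpp\x00"))  # True
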
\n If the protocol allows for behavior that the pat-\ntern-matching algorithms have difficulty dealing with, \nnot doing full protocol decodes can also lead to false \nnegatives. For example, if the OBL protocol allows \nevery other byte to be a NULL if a value is set in the \nOBL header, the pattern matchers would fail to see \nfx00ox00ox00. The protocol decode-enabled analysis \nengine would strip the NULLS and fire the alarm as \nexpected, assuming that gpp was in the Type field. Thus, \nwith the preceding in mind, the advantages of the proto-\ncol decode-based analysis are as follows: \n ● This method can allow for direct correlation of an \nexploit. \n ● This method can be more broad and general to allow \ncatching variations on a theme. \n ● This method minimizes the chance for false positives \nif the protocol is well defined and enforced. \n" }, { "page_number": 199, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n166\n ● This method reliably alerts on the violation of the \nprotocol rules as defined in the rules script. \n The disadvantages of this technique are as follows: \n ● This method can lead to high false-positive rates if the \nRFC is ambiguous and allows developers the discre-\ntion to interpret and implement as they see fit. These \ngray area protocol violations are very common. \n ● This method requires longer development times to \nproperly implement the protocol parser. \n Heuristic-Based Analysis \n A good example of this type of signature is a signature \nthat would be used to detect a port sweep. This signature \nlooks for the presence of a threshold number of unique \nports being touched on a particular machine. The signa-\nture may further restrict itself through the specification \nof the types of packets that it is interested in (that is, \nSYN packets). Additionally, there may be a requirement \nthat all the probes must originate from a single source. \nSignatures of this type require some threshold manipula-\ntions to make them conform to the utilization patterns on \nthe network they are monitoring. This type of signature \nmay be used to look for very complex relationships as \nwell as the simple statistical example given. \n The advantages for heuristic-based signature analysis \nare that some types of suspicious and/or malicious activity \ncannot be detected through any other means. The disad-\nvantages are that algorithms may require tuning or modifi-\ncation to better conform to network traffic and limit false \npositives. \n Anomaly-Based Analysis \n From what is seen normally, anomaly-based signatures \nare typically geared to look for network traffic that \ndeviates. The biggest problem with this methodology is \nto first define what normal is. Some systems have hard-\ncoded definitions of normal, and in this case they could \nbe considered heuristic-based systems. Some systems are \nbuilt to learn normal, but the challenge with these systems \nis in eliminating the possibility of improperly classifying \nabnormal behavior as normal. Also, if the traffic pattern \nbeing learned is assumed to be normal, the system must \ncontend with how to differentiate between allowable devi-\nations and those not allowed or representing attack-based \ntraffic. The work in this area has been mostly limited to \nacademia, although there are a few commercial products \nthat claim to use anomaly-based detection methods. A \nsubcategory of this type of detection is the profile-based \ndetection methods. 
These systems base their alerts on \nchanges in the way that users or systems interact on the \nnetwork. They incur many of the same limitations and \nproblems that the overarching category has in inferring \nthe intent of the change in behavior. \n Statistical anomalies may also be identified on the \nnetwork either through learning or teaching of the sta-\ntistical norms for certain types of traffic, for example, \nsystems that detect traffic floods, such as UDP, TCP, or \nICMP floods. These algorithms compare the current rate \nof arrival of traffic with a historical reference; based on \nthis, the algorithms will alert to statistically significant \ndeviations from the historical mean. Often, a user can \nprovide the statistical threshold for the alerts. The advan-\ntages for anomaly-based detection are as follows: \n ● If this method is implemented properly, it can detect \nunknown attacks. \n ● This method offers low overhead because new signa-\ntures do not have to be developed. \n The disadvantages are: \n ● In general, these systems are not able to give you \nintrusion data with any granularity. It looks like \nsomething terrible may have happened, but the sys-\ntems cannot say definitively. \n ● This method is highly dependent on the environment \nin which the systems learn what normal is. \n The following are Freeware tools to monitor and ana-\nlyze network activities: \n ● Network Scanner, Nmap, is available from www.\ninsecure.org . Nmap is a free open-source utility to \nmonitor open ports on a network. The MS-Windows \nversion is a zip file by the name nmap-3.75-win32.\nzip. You also need to download a packet capture \nlibrary, WinPcap, under Windows. It is available \nfrom http://winpcap.polito.it . In addition to these \nprograms, you need a utility to unzip the zipped file, \nwhich you can download from various Internet sites. \n ● PortPeeker is a freeware utility for capturing network \ntraffic for TCP, UDP, or ICMP protocols. With \nPortPeeker you can easily and quickly see what \ntraffic is being sent to a given port. This utility is \navailable from www.Linklogger.com . \n ● Port-scanning tools such as Fport 2.0 and SuperScan \n4.0 are easy to use and freely available from www.\nFoundstone.com . \n ● Network sniffer Ethereal is available from www.\nethereal.com . Ethereal is a packet sniffer and \nanalyzer for a variety of protocols. \n" }, { "page_number": 200, "text": "Chapter | 10 Local Area Network Security\n167\n ● EtherSnoop light is a free network sniffer designed \nfor capturing and analyzing the packets going through \nthe network. It captures the data passing through \nyour network Ethernet card, analyzes the data, and \nrepresents it in a readable form. EtherSnoop light is a \nfully configurable network analyzer program for Win32 \nenvironments. It is available from www.arechisoft.com . \n ● A fairly advanced tool, Snort, an open-source NIDS, \nis available from www.snort.org . \n ● UDPFlood is a stress testing tool that could be \nidentified as a DoS agent; it is available from www.\nFoundstone.com . \n ● An application that allows you to generate a SYN \nattack with a spoofed address so that the remote host’s \nCPU cycle’s get tied up is Attacker, and is available \nfrom www.komodia.com . \n" }, { "page_number": 201, "text": "This page intentionally left blank\n" }, { "page_number": 202, "text": "169\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. 
All rights of reproduction in any form reserved.\n2009\n Wireless Network Security \n Chunming Rong \n University of Stavanger \n Erdal Cayirci \n University of Stavanger \n Chapter 11 \n With the rapid development of technology in wireless \ncommunication and microchips, wireless technology \nhas been widely used in various application areas. The \nproliferation of wireless devices and wireless networks \nin the past decade shows the widespread use of wireless \ntechnology. \n Wireless networks is a general term to refer to various \ntypes of networks that communicate without the need of \nwire lines. Wireless networks can be broadly categorized \ninto two classes based on the structures of the networks: \nwireless ad hoc networks and cellular networks. The \nmain difference between these two is whether a fixed \ninfrastructure is present. \n Three of the well-known cellular networks are the \nGSM network, the CDMA network, and the 802.11 \nwireless LAN. The GSM network and the CDMA net-\nwork are the main network technologies that support \nmodern mobile communication, with most of the mobile \nphones and mobile networks that are built based on these \ntwo wireless networking technologies and their variants. \nAs cellular networks require fixed infrastructures to sup-\nport the communication between mobile nodes, deploy-\nment of the fixed infrastructures is essential. Further, \ncellular networks require serious and careful topology \ndesign of the fixed infrastructures before deployment, \nbecause the network topologies of the fixed infrastruc-\ntures are mostly static and will have a great impact on \nnetwork performance and network coverage. \n Wireless ad hoc networks do not require a fixed infra-\nstructure; thus it is relatively easy to set up and deploy a \nwireless ad hoc network (see Figure 11.1 ). Without the \nfixed infrastructure, the topology of a wireless ad hoc net-\nwork is dynamic and changes frequently. It is not realistic \nto assume a static or a specific topology for a wireless ad \nhoc network. On the other hand, wireless ad hoc networks \nneed to be self-organizing; thus mobile nodes in a wire-\nless ad hoc network can adapt to the change of topology \nand establish cooperation with other nodes at runtime. \n Besides the conventional wireless ad hoc networks, \nthere are two special types that should be mentioned: \nwireless sensor networks and wireless mesh networks. \nWireless sensor networks are wireless ad hoc networks, \nmost of the network nodes of which are sensors that \nmonitor a target scene. The wireless sensors are mostly \ndeprived devices in terms of computation power, power \nsupply, bandwidth, and other computation resources. \nWireless mesh networks are wireless networks with \neither a full mesh topology or a partial mesh topology \nin which some or all nodes are directly connected to all \nother nodes. The redundancy in connectivity of wireless \nnetworks provides great reliability and excellent flexibil-\nity in network packet delivery. \n 1. CELLULAR NETWORKS \n Cellular networks require fixed infrastructures to work \n(see Figure 11.2 ). A cellular network comprises a fixed \ninfrastructure and a number of mobile nodes. Mobile \nnodes connect to the fixed infrastructure through wireless \nlinks. They may move around from within the range of \nWireless\nNetworks\nCellular\nNetworks\nWireless Ad\nHoc Networks\n FIGURE 11.1 Classification of wireless networks. 
\n" }, { "page_number": 203, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n170\none base station to outside the range of the base station, \nand they can move into the ranges of other base stations. \nThe fixed infrastructure is stationary, or mostly stationary, \nincluding base stations, links between base stations, and \npossibly other conventional network devices such as rout-\ners. The links between base stations can be either wired or \nwireless. The links should be more substantial than those \nlinks between base stations and mobile nodes in terms of \nreliability, transmission range, bandwidth, and so on. \n The fixed infrastructure serves as the backbone of a \ncellular network, providing high speed and stable con-\nnection for the whole network, compared to the connec-\ntivity between a base station and a mobile node. In most \ncases, mobile nodes do not communicate with each other \ndirectly without going through a base station. A packet \nfrom a source mobile node to a destination mobile node \nis likely to be first transmitted to the base station to \nwhich the source mobile node is connected. The packet \nis then relayed within the fixed infrastructures until \nreaching the destination base station to which the des-\ntination mobile node is connected. The destination base \nstation can then deliver the packet to the destination \nmobile node to complete the packet delivery. \n Cellular Telephone Networks \n Cellular telephone networks offer mobile communica-\ntion for most of us. With a cellular telephone network, \nbase stations are distributed over a region, with each \nbase station covering a small area. Each part of the small \narea is called a cell . Cell phones within a cell connect to \nthe base station of the cell for communication. When a \ncell phone moves from one cell to another, its connec-\ntion will also be migrated from one base station to a new \nbase station. The new base station is the base station of \nthe cell into which the cell phone just moved. \n Two of the technologies are the mainstream for cel-\nlular telephone networks: the global system for mobile \ncommunication (GSM) and code division multiple \naccess (CDMA). \n GSM is a wireless cellular network technology for \nmobile communication that has been widely deployed in \nmost parts of the world. Each GSM mobile phone uses a \npair of frequency channels, with one channel for send-\ning data and another for receiving data. Time division \nmultiplexing (TDM) is used to share frequency pairs by \nmultiple mobiles. \n CDMA is a technology developed by a company \nnamed Qualcomm and has been accepted as an interna-\ntional standard. CDMA assumes that multiple signals \nadd linearly, instead of assuming that colliding frames \nare completely garbled and of no value. With coding \ntheory and the new assumption, CDMA allows each \nmobile to transmit over the entire frequency spectrum at \nall times. The core algorithm of CDMA is how to extract \ndata of interest from the mixed data. \n 802.11 Wireless LANs \n Wireless LANs are specified by the IEEE 802.11 series \nstandard [1] , which describes various technologies and \nprotocols for wireless LANs to achieve different targets, \nallowing the maximum bit rate from 2 Mbits per second \nto 248 Mbits per second. \n Wireless LANs can work in either access point (AP) \nmode or ad hoc mode, as shown in Figure 11.3 . 
When a \nwireless LAN is working in AP mode, all communica-\ntion passes through a base station, called an access point . \nThe access point then passes the communication data to \nthe destination node, if it is connected to the access point, \nor forwards the communication data to a router for fur-\nther routing and relaying. When working in ad hoc mode, \nwireless LANs work in the absence of base stations. \nNodes directly communicate with other nodes within their \ntransmission range, without depending on a base station. \n One of the complications that 802.11 wireless LANs \nincur is medium access control in the data link layer. \nMedium access control in 802.11 wireless LANs can be \neither distributed or centralized control by a base sta-\ntion. The distributed medium access control relies on the \nCarrier Sense Multiple Access (CSMA) with Collision \nAvoidance (CSMA/CA) protocol. CSMA/CA allows net-\nwork nodes to compete to transmit data when a channel \nis idle and uses the Ethernet binary exponential backoff \nalgorithm to decide a waiting time before retransmis-\nsion when a collision occurs. CSMA/CA can also oper-\nate based on MACAW (Multiple Access with Collision \n FIGURE 11.2 Cellular networking. \n" }, { "page_number": 204, "text": "Chapter | 11 Wireless Network Security\n171\nAvoidance for Wireless) using virtual channel sensing. \nRequest packets and clear-to-send (CTS) packets are \nbroadcast before data transmission by the sender and the \nreceiver, respectively. All stations within the range of the \nsender or the receiver will keep silent in the course of data \ntransmission to avoid interference on the transmission. \n The centralized medium access control is imple-\nmented by having the base station broadcast a beacon \nframe periodically and poll nodes to check whether they \nhave data to send. The base station serves as a central \ncontrol over the allocation of the bandwidth. It allocates \nbandwidth according to the polling results. All nodes \nconnected to the base station must behave in accordance \nwith the allocation decision made by the base station. \nWith the centralized medium access control, it is possi-\nble to provide quality-of-service guarantees because the \nbase station can control on the allocation of bandwidth \nto a specific node to meet the quality requirements. \n 2. WIRELESS AD HOC NETWORKS \n Wireless ad hoc networks are distributed networks that \nwork without fixed infrastructures and in which each \nnetwork node is willing to forward network packets for \nother network nodes. The main characteristics of wire-\nless ad hoc networks are as follows: \n ● Wireless ad hoc networks are distributed networks that \ndo not require fixed infrastructures to work. Network \nnodes in a wireless ad hoc network can be randomly \ndeployed to form the wireless ad hoc network. \n ● Network nodes will forward network packets for \nother network nodes. Network nodes in a wireless \nad hoc network directly communicate with other \nnodes within their ranges. When these networks \ncommunicate with network nodes outside their \nranges, network packets will be forwarded by the \nnearby network nodes and other nodes that are on the \npath from the source nodes to the destination nodes. \n ● Wireless ad hoc networks are self-organizing. Without \nfixed infrastructures and central administration, \nwireless ad hoc networks must be capable of \nestablishing cooperation between nodes on their own. 
\nNetwork nodes must also be able to adapt to changes \nin the network, such as the network topology. \n ● Wireless ad hoc networks have dynamic network \ntopologies. Network nodes of a wireless ad hoc \nnetwork connect to other network nodes through \nwireless links. The network nodes are mostly mobile. \nThe topology of a wireless ad hoc network can \nchange from time to time, since network nodes move \naround from within the range to the outside, and new \nnetwork nodes may join the network, just as existing \nnetwork nodes may leave the network. \n Wireless Sensor Networks \n A wireless sensor network is an ad hoc network mainly \ncomprising sensor nodes, which are normally used to \nmonitor and observe a phenomenon or a scene. The sen-\nsor nodes are physically deployed within or close to the \nphenomenon or the scene. The collected data will be sent \nback to a base station from time to time through routes \ndynamically discovered and formed by sensor nodes. \n Sensors in wireless sensor networks are normally small \nnetwork nodes with very limited computation power, lim-\nited communication capacity, and limited power supply. \nThus a sensor may perform only simple computation and \ncan communicate with sensors and other nodes within a \nshort range. The life spans of sensors are also limited by \nthe power supply. \n Wireless sensor networks can be self-organizing, \nsince sensors can be randomly deployed in some inac-\ncessible areas. The randomly deployed sensors can coop-\nerate with other sensors within their range to implement \nthe task of monitoring or observing the target scene or \nthe target phenomenon and to communicate with the \nbase station that collects data from all sensor nodes. \nThe cooperation might involve finding a route to trans-\nmit data to a specific destination, relaying data from one \nneighbor to another neighbor when the two neighbors \nare not within reach of each other, and so on. \n Mesh Networks \n One of the emerging technologies of wireless network \nis wireless mesh networks (WMNs). Nodes in a WMN \ninclude mesh routers and mesh clients. Each node in a \nWMN works as a router as well as a host. When it’s a \nrouter, each node needs to perform routing and to forward \n FIGURE 11.3 (a) A wireless network in AP mode; (b) a wireless net-\nwork in ad hoc mode. \n" }, { "page_number": 205, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n172\npackets for other nodes when necessary, such as when \ntwo nodes are not within direct reach of each other and \nwhen a route to a specific destination for packet delivery \nis required to be discovered. \n Mesh routers may be equipped with multiple wire-\nless interfaces, built on either the same or different wire-\nless technologies, and are capable of bridging different \nnetworks. Mesh routers can also be classified as access \nmesh routers, backbone mesh routers, or gateway mesh \nrouters. Access mesh routers provide mesh clients with \naccess to mesh networks; backbone mesh routers form \nthe backbone of a mesh network; and a gateway mesh \nrouter connects the backbone to an external network. \n Each mesh client normally has only one network \ninterface that provides network connectivity with other \nnodes. Mesh clients are not usually capable of bridging \ndifferent networks, which is different from mesh routers. \n Similar to other ad hoc networks, a wireless mesh net-\nwork can be self-organizing. 
Thus nodes can establish and \nmaintain connectivity with other nodes automatically, with-\nout human intervention. Wireless mesh networks can divided \ninto backbone mesh networks and access mesh networks. \n 3. SECURITY PROTOCOLS \n Wired Equivalent Privacy (WEP) was defined by the \nIEEE 802.11 standard [2] . WEP is designed to protect \nlinkage-level data for wireless transmission by provid-\ning confidentiality, access control, and data integrity, to \nprovide secure communication between a mobile device \nand an access point in a 802.11 wireless LAN. \n WEP \n Implemented based on shared key secrets and the RC4 \nstream cipher [3] , WEP’s encryption of a frame includes \ntwo operations (see Figure 11.4 ). It first produces a \nchecksum of the data, and then it encrypts the plaintext \nand the checksum using RC4: \n ● Checksumming . Let c be an integrity checksum func-\ntion. For a given message M , a checksum c ( M) is \ncalculated and then concatenated to the end of M , \nobtaining a plaintext P \u0003 \u0004 M , c ( M ) \u0005 . Note that the \nchecksum c ( M ) does not depend on the shared key. \n ● Encryption . The shared key k is concatenated to the \nend of the initialization vector (IV) v , forming \n \u0004 v , k \u0005 . \u0004 v , k \u0005 is then used as the input to the RC4 \nalgorithm to generate a keystream RC 4( v , k ). The \nplaintext P is exclusive-or’ed (XOR, denoted \nby \u0002 ) with the keystream to obtain the ciphertext: \n C \u0003 P \u0002 RC4(v,k) . \n Using the shared key k and the IV v , WEP can greatly \nsimplify the complexity of key distribution because it \n FIGURE 11.4 WEP encryption and decryption. \n" }, { "page_number": 206, "text": "Chapter | 11 Wireless Network Security\n173\nneeds only to distribute k and v but can achieve a rela-\ntively very long key sequence. IV changes from time to \ntime, which will force the RC 4 algorithm to produce a \nnew key sequence, avoiding the situation where the same \nkey sequence is used to encrypt a large amount of data, \nwhich potentially leads to several types of attacks [4, 5] . \n WEP combines the shared key k and the IV v as inputs \nto seed the RC 4 function. 802.11B [6] specifies that the \nseed shall be 64 bits long, with 24 bits from the IV v and \n40 bits from the shared key k . Bits 0 through 23 of the seed \ncontain bits 0 through 23 of the IV v , and bits 24 through \n63 of the seed contain bits 0 through 39 of the shared key k . \n When a receiver receives the ciphertext C , it will \nXOR the ciphertext C with the corresponding keystream \nto produce the plaintext M \u0006 as follows:\n \nM\nC\nRC\nk,v\nP\nRC\nk,v\nRC\nk,v\nM\n\u0006 \u0003\n\u0003\n\u0003\n⊕\n⊕\n⊕\n4\n4\n(\n)\n(\n(\n))\n4(\n)\n \n WPA and WPA2 \n Wi-Fi Protected Access (WPA) is specified by the IEEE \n802.11i standard, which is aimed at providing stronger \nsecurity compared to WEP and is expected to tackle \nmost of the weakness found in WEP [7, 8, 9] . \n WPA \n WPA has been designed to target both enterprise and \nconsumers. Enterprise deployment of WPA is required \nto be used with IEEE 802.1x authentication, which is \nresponsible for distributing different keys to each user. \nPersonal deployment of WPA adopts a simpler mecha-\nnism, which allows all stations to use the same key. This \nmechanism is called the Pre-Shared Key (PSK) mode. \n The WPA protocol works in a similar way to WEP. 
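Both schemes rest on the stream-cipher identity shown for WEP above: XORing the ciphertext with the same keystream that encrypted it returns the plaintext, since P XOR K XOR K = P. The toy sketch below illustrates only that symmetry; the keystream generator is a stand-in and is neither RC4 nor secure.

import hashlib

def toy_keystream(key, iv, length):
    """Stand-in for RC4(v, k): deterministic bytes derived from (iv, key).
    Not RC4 and not secure; it only illustrates the XOR symmetry."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(iv + key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key, iv = b"shared-key", b"\x00\x00\x01"     # placeholder values
plaintext = b"message plus checksum"
ks = toy_keystream(key, iv, len(plaintext))

ciphertext = xor(plaintext, ks)              # sender: C = P XOR keystream
recovered = xor(ciphertext, ks)              # receiver: C XOR keystream = P
print(recovered == plaintext)                # True
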
\nWPA mandates the use of the RC 4 stream cipher with a \n128 – bit key and a 48 – bit initialization vector (IV), com-\npared with the 40 – bit key and the 24 – bit IV in WEP. \n WPA also has a few other improvements over WEP, \nincluding the Temporal Key Integrity Protocol (TKIP) \nand the Message Integrity Code (MIC). With TKIP, WPA \nwill dynamically change keys used by the system peri-\nodically. With the much larger IV and the dynamically \nchanging key, the stream cipher RC 4 is able to produce a \nmuch longer keystream. The longer keystream improved \nWPA’s protection against the well-known key recovery \nattacks on WEP, since finding two packets encrypted \nusing the same key sequences is literally impossible due \nto the extremely long keystream. \n With MIC, WPA uses an algorithm named Michael \nto produce an authentication code for each message, \nwhich is termed the message integrity code . The mes-\nsage integrity code also contains a frame counter to pro-\nvide protection over replay attacks. \n WPA uses the Extensible Authentication Protocol \n(EAP) framework [10] to conduct authentication. \nWhen a user (supplicant) tries to connect to a network, \nan authenticator will send a request to the user asking \nthe user to authenticate herself using a specific type of \nauthentication mechanism. The user will respond with \ncorresponding authentication information. The authenti-\ncator relies on an authentication server to make the deci-\nsion regarding the user’s authentication. \n WPA2 \n WPA2 is not much different from WPA. Though TKIP \nis required in WPA, Advanced Encryption Standard \n(AES) is optional. This is aimed to provide backward \ncompatibility for WPA over hardware designed for WEP, \nas TKIP can be implemented on the same hardware as \nthose for WEP, but AES cannot be implemented on this \nhardware. TKIP and AES are both mandatory in WPA2 \nto provide a higher level of protection over wireless \nconnections. AES is a block cipher, which can only be \napplied to a fixed length of data block. AES accepts key \nsizes of 128 bits, 196 bits, and 256 bits. \n Besides the mandatory requirement of supporting \nAES, WPA2 also introduces supports for fast roaming \nof wireless clients migrating between wireless access \npoints. First, WPA2 allows the caching of a Pair-wise \nMaster Key (PMK), which is the key used for a session \nbetween an access point and a wireless client; thus a \nwireless client can reconnect a recently connected access \npoint without having to reauthenticate. Second, WPA2 \nenables a wireless client to authenticate itself to a wire-\nless access point that it is moving to while the wireless \nclient maintains its connection to the existing access \npoint. This reduces the time needed for roaming clients \nto move from one access point to another, and it is espe-\ncially useful for timing-sensitive applications. \n SPINS: Security Protocols for Sensor \nNetworks \n Sensor nodes in sensor networks are normally low-end \ndevices with very limited resources, such as memory, \ncomputation power, battery, and network bandwidth. \n Perrig et al. [11] proposed a family of security proto-\ncols named SPINS, which were specially designed for low-\nend devices with severely limited resources, such as sensor \nnodes in sensor networks. 
SPINS consists of two building \nblocks: Secure Network Encryption Protocol (SNEP) and \n" }, { "page_number": 207, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n174\nthe “ micro ” version of the Timed, Effi cient, Streaming, \nLoss-tolerant Authentication Protocol ( μ TESLA). SNEP \nuses symmetry encryption to provide data confidential-\nity, two-party data authentication, and data freshness. \n μ TESLA provides authentication over broadcast streams. \nSPINS assumes that each sensor node shares a master key \nwith the base station. The master key serves as the base of \ntrust and is used to derive all other keys. \n SNEP \n As illustrated in Figure 11.5 , SNEP uses a block cipher \nto provide data confidentiality and message authenti-\ncation code (MAC) to provide authentication. SNEP \nassumes a shared counter C between the sender and the \nreceiver and two keys, the encryption key K encr and the \nauthentication key K mac . \n For an outgoing message D , SNEP processes it as \nfollows: \n ● The message D is first encrypted using a block \ncipher in counter mode with the key K encr and \nthe counter C , forming the encrypted text \n E \u0003 { D } \u0004 Kencr , C \u0005 . \n ● A message authentication code is produced for the \nencrypted text E with the key K mac and the counter \n C , forming the MAC M \u0003 MAC ( K mac , C | E ) where \n MAC() is a one-way function and C | E stands for the \nconcatenation of C and E . \n ● SNEP increments the counter C . \n To send the message D to the recipient, SNEP actu-\nally sends out E and M . In other words, SNEP encrypts \n D to E using the shared key K encr between the sender and \nthe receiver to prevent unauthorized disclosure of the \ndata, and it uses the shared key K mac , known only to the \nsender and the receiver, to provide message authentica-\ntion. Thus data confidentiality and message authentica-\ntion can both be implemented. \n The message D is encrypted with the counter C , \nwhich will be different in each message. The same mes-\nsage D will be encrypted differently even it is sent mul-\ntiple times. Thus semantic security is implemented in \nSNEP. The MAC is also produced using the counter C ; \nthus it enables SNEP to prevent replying to old messages. \n μ TESLA \n TESLA [12, 13, 14] was proposed to provide message \nauthentication for multicast. TESLA does not use any \nasymmetry cryptography, which makes it lightweight in \nterms of computation and overhead of bandwidth. \n μ TESLA is a modified version of TESLA, aiming to \nprovide message authentication for multicasting in sensor \nnetworks. The general idea of μ TESLA is that the sender \nsplits the sending time into intervals. Packets sent out in \ndifferent intervals are authenticated with different keys. \nKeys to authenticate packets will be disclosed after a \nshort delay, when the keys are no longer used to send out \nmessages. Thus packets can be authenticated when the \nauthentication keys have been disclosed. Packets will not \nbe tampered with while they are in transit since the keys \nhave not been disclosed yet. The disclosed authentication \nkeys can be verified using previous known keys to pre-\nvent malicious nodes from forging authentication keys. \n μ TESLA has four phases: sender setup, sending \nauthenticated packets, bootstrapping new receivers, \nand authenticating packets. In the sender setup phase, \na sender generates a chain of keys, K i (0 \b i \b n ). 
The \nkeychain is a one-way chain such that K i can be derived \nfrom K j if i \b j , such as a keychain K i (i \u0003 0, … ,n), \n K i \u0003 F ( K i \u0002 1 ), where F is a one-way function. The sender \nalso decides on the starting time T 0 , the interval duration \n T int , and the disclosure delay d (unit is interval), as shown \nin Figure 11.6 . \n To send out authenticated packets, the sender attaches \na MAC with each packet, where the MAC is produced \nusing a key from the keychain and the data in the net-\nwork packet. μ TESLA has specific requirements on the \n FIGURE 11.5 Sensor Network Encryption Protocol (SNEP). \n FIGURE 11.6 Sequences of intervals, key usages, and key disclosure. \n" }, { "page_number": 208, "text": "Chapter | 11 Wireless Network Security\n175\nuse of keys for producing MACs. Keys are used in the \nsame order as the key sequence of the keychain. Each \nof the keys is used in one interval only. For the inter-\nval T i \u0003 T 0 \u0002 i \u0007 T int , the key K i is used to produce the \nMACs for the messages sent out in the interval T i . Keys \nare disclosed with a fixed delay d such that the key K i \nused in interval T i will be disclosed in the interval T i \u0002 d . \nThe sequence of key usage and the sequence of key dis-\nclosure are demonstrated in Figure 11.6 . \n To bootstrap a new receiver, the sender needs to syn-\nchronize the time with the receiver and needs to inform \nthe new receiver of a key K j that is used in a past inter-\nval T j , the interval duration T int , and the disclosure delay \n d . With a previous key K j , the receiver will be able to \nverify any key K p where j \b p using the one-way key-\nchain’s property. After this, the new receiver will be \nable to receive and verify data in the same way as other \nreceivers that join the communication prior to the new \nreceiver. \n To receive and authenticate messages, a receiver will \ncheck all incoming messages if they have been delayed \nfor more than d . Messages with a delay greater than d \nwill be discarded, since they are suspect as fake mes-\nsages constructed after the key has been disclosed. The \nreceiver will buffer the remaining messages for at least \n d intervals until the corresponding keys are disclosed. \nWhen a key K i is disclosed at the moment T i \u0002 d , \nthe receiver will verify K i using K i \t 1 by checking if \n K i \t 1 \u0003 F ( K i ). Once the key K i is verified, K i will be used \nto authenticate those messages sent in the interval T i . \n 4. SECURE ROUTING \n Secure Efficient Ad hoc Distance (SEAD) [15] vector \nrouting is designed based on Destination-Sequenced \nDistance Vector (DSDV) routing [16] . SEAD augments \nDSDV with authentication to provide security in the \nconstruction and exchange of routing information. \n SEAD \n Distance vector routing works as follows. Each router \nmaintains a routing table. Each entry of the table con-\ntains a specific destination, a metric (the shortest distance \nto the destination), and the next hop on the shortest path \nfrom the current router to the destination. For a packet \nthat needs to be sent to a certain destination, the router \nwill look up the destination from the routing table to get \nthe matching entry. Then the packet is sent to the next \nhop specified in the entry. \n To allow routers to automatically discover new routes \nand maintain their routing tables, routers exchange rout-\ning information periodically. 
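The next paragraph spells the table-update rule out in words; as a minimal sketch (assuming, for illustration, a link cost of one hop to each neighbor), merging a neighbor's advertisement keeps an entry only when it introduces a new destination or offers a shorter metric.

# Hypothetical distance-vector merge.  table maps destination -> {metric, next_hop};
# advertised maps destination -> metric as heard from the neighbor.
def merge_advertisement(table, neighbor, advertised):
    for dest, metric in advertised.items():
        candidate = metric + 1               # one extra hop via this neighbor
        entry = table.get(dest)
        if entry is None or candidate < entry["metric"]:
            table[dest] = {"metric": candidate, "next_hop": neighbor}
    return table

routes = {"B": {"metric": 1, "next_hop": "B"}}
merge_advertisement(routes, "B", {"C": 1, "D": 3})
print(routes["C"])   # {'metric': 2, 'next_hop': 'B'}: a new destination learned via B
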
Each router advises its \nneighbors of its own routing information by broadcast-\ning its routing table to all its neighbors. Each router will \nupdate its routing table according to the information it \nhears from its neighbors. If a new destination is found \nfrom the information advertised by a neighbor, a new \nentry is added to the routing table with the metric recal-\nculated based on the advertised metric and the linking \nbetween the router and the neighbor. If an existing desti-\nnation is found, the corresponding entry is updated only \nwhen a new path that is shorter than the original one has \nbeen found. In this case, the metric and the next hop \nfor the specified destination are modified based on the \nadvertised information. \n Though distance vector routing is simple and effec-\ntive, it suffers from possible routing loops, also known as \nthe counting to infinity problem. DSDV [17] is one of the \nextensions to distance vector routing to tackle this issue. \nDSDV augments each routing update with a sequence \nnumber, which can be used to identify the sequence of rout-\ning updates, preventing routing updates being applied in an \nout-of-order manner. Newer routing updates are advised \nwith sequence numbers greater than those of the previ-\nous routing updates. In each routing update, the sequence \nnumber will be incremented to the next even number. Only \nwhen a broken link has been detected will the router use \nthe next odd sequence number as the sequence number \nfor the new routing update that is to be advertised to all \nits neighbors. Each router maintains an even sequence \nnumber to identify the sequence of every routing update. \nNeighbors will only accept newer routing updates by dis-\ncarding routing updates with sequence numbers less than \nthe last sequence number heard from the router. \n SEAD provides authentication on metrics ’ lower \nbounds and senders ’ identities by using the one-way \nhash chain. Let H be a hash function and x be a given \nvalue. A list of values is computed as follows: \n \n h\nh\nh\nhn\n0\n1\n2\n, \n, \n, \n,…\n \n where h 0 \u0003 x and h i \u0002 1 \u0003 H ( h i ) for 0 \b i \b n . Given any \nvalue h k that has been confirmed to be in the list, to \nauthenticate if a given value d is on the list or not one \ncan compute if d can be derived from h k by applying H \na certain number of times, or if h k can be derived from d \nby applying H to d a certain number of times. If either \n d can be derived from h k or h k can be derived from d \nwithin a certain number of steps, it is said that d can be \nauthenticated by h k . \n" }, { "page_number": 209, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n176\n SEAD assumes an upper bound m \t 1 on the diameter \nof the ad hoc network, which means that the metric of a \nrouting entry will be less than m . Let h 0 , h 1 , h 2 , … , h n \nbe a hash chain where n \u0003 m \u0007 k and k \u0002 Z \u0002 . \n For an update with the sequence number i and the \nmetric value of j , the value h ( k \t i ) m \u0002 j is used to authenti-\ncate the routing update entry. \n By using h ( k \t i ) m \u0002 j to authenticate the routing update \nentry, a node is actually disclosing the value h ( k \t i ) m \u0002 j \nand subsequently all h p where p \n ( k \t i ) m \u0002 j , but not \nany value h q where q \b ( k \t i ) m \u0002 j . 
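A minimal sketch of this one-way chain (SHA-256 is used here as the hash function H; the choice is arbitrary and only for illustration) builds h0 through hn from a random seed x and authenticates a later-disclosed value by hashing it forward until a previously trusted value is reached.

import hashlib
import os

def H(value):
    return hashlib.sha256(value).digest()

def build_chain(n):
    """h[0] = x (random seed), h[i+1] = H(h[i])."""
    chain = [os.urandom(32)]
    for _ in range(n):
        chain.append(H(chain[-1]))
    return chain

def authenticate(d, h_k, max_steps):
    """Check whether h_k can be derived from d within max_steps applications of H."""
    current = d
    for _ in range(max_steps):
        current = H(current)
        if current == h_k:
            return True
    return False

chain = build_chain(100)
# A node that already trusts chain[90] can verify the disclosed chain[75]:
print(authenticate(chain[75], chain[90], max_steps=100))      # True
# A forged value will not hash forward to the trusted anchor:
print(authenticate(os.urandom(32), chain[90], max_steps=100)) # False
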
\n Using a hash value corresponding to the sequence \nnumber and metric in a routing update entry allows the \nauthentication of the update and prevents any node from \nadvertising a route to some destination, forging a greater \nsequence number or a smaller metric. \n To authenticate the update, a node can use any given \nearlier authentic hash value h p from the same hash chain \nto authenticate the current update with sequence number \n i and metric j . The current update uses the hash value \n h ( k \t i ) m \u0002 j and ( k \t i ) m \u0002 j \b p , thus h p can be computed \nfrom h ( k \t i ) m \u0002 j by applying H for ( k \t i ) m \u0002 j \t p times. \n The disclosure of h ( k \t i ) m \u0002 j does not disclose any \nvalue h q where q \b ( k \t i ) m \u0002 j . Let a fake update be \nadvised with a sequence number p and metric q , where \n p \n i and q \b j, or q \b j . The fake update will need to \nuse the hash value h ( k \t p ) m \u0002 q . If the sequence number p is \ngreater than i or the metric q is less than j , ( k \t p ) m \u0002 q \n \u0003 ( k \t i ) m \u0002 j . This means that a hash value h ( k \t p ) m \u0002 q \nthat has not been disclosed is needed to authenticate the \nupdate. Since the value h ( k \t p ) m \u0002 q has not been disclosed, \nthe malicious node will not be able to have it to fake a \nrouting update. \n Ariadne \n Ariadne [18] is a secure on-demand routing protocol for \nad hoc networks. Ariadne is built on the Dynamic Source \nRouting protocol (DSR) [19] . \n Routing in Ariadne is divided into two stages: the route \ndiscovery stage and the route maintenance stage. In the \nroute discovery stage, a source node in the ad hoc network \ntries to find a path to a specific destination node. The dis-\ncovered path will be used by the source node as the path \nfor all communication from the source node to the destina-\ntion node until the discovered path becomes invalid. In the \nroute maintenance stage, network nodes identify broken \npaths that have been found. A node sends a packet along \na specified route to some destination. Each node on the \nroute forwards the packet to the next node on the specified \nroute and tries to confirm the delivery of the packet to the \nnext node. If a node fails to receive an acknowledgment \nfrom the next node, it will signal the source node using a \nROUTE ERROR packet that a broken link has been found. \nThe source node and other nodes on the path can then be \nadvised of the broken link. \n The key security features Ariadne adds onto the route \ndiscovery and route maintenance are node authentica-\ntion and data verification for the routing relation pack-\nets. Node authentication is the process of verifying the \nidentifiers of nodes that are involved in Ariadne’s route \ndiscovery and route maintenance, to prevent forging \nrouting packets. In route discovery, a node sends out a \nROUTE REQUEST packet to perform a route discovery. \nWhen the ROUTE REQUEST packet reaches the desti-\nnation node, the destination node verifies the originator \nidentity before responding. Similarly, when the source \nnode receives a ROUTE REPLY packet, which is a \nresponse to the ROUTE REQUEST packet, the source \nnode will also authenticate the identity of the sender. \nThe authentication of node identities can be of one of the \nthree methods: TELSA, digital signatures, and Message \nAuthentication Code (MAC). 
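As a minimal sketch of the MAC-based variant (assuming, purely for illustration, a key already shared between the source and the destination), the originator computes a MAC over the immutable fields of the ROUTE REQUEST, and the destination recomputes it before it replies.

import hashlib
import hmac

K_SD = b"key shared by source S and destination D"   # illustrative assumption

def make_route_request(source, destination, request_id):
    body = f"{source}|{destination}|{request_id}".encode()
    return {
        "source": source,
        "destination": destination,
        "id": request_id,
        "mac": hmac.new(K_SD, body, hashlib.sha256).hexdigest(),
        "node_list": [],            # hops append themselves while forwarding
    }

def destination_accepts(req):
    body = f"{req['source']}|{req['destination']}|{req['id']}".encode()
    expected = hmac.new(K_SD, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, req["mac"])

req = make_route_request("S", "D", 42)
req["node_list"] += ["A", "B"]      # intermediate hops add themselves
print(destination_accepts(req))     # True: the originator is authenticated

The MAC above authenticates only the originator's own fields; protecting the accumulating node list is the separate data verification step described next.
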
\n Data verification is the process of verifying the integ-\nrity of the node list in route discovery for the preven-\ntion of adding and removing nodes from the node list in \na ROUTE RQUEST. To build a full list of nodes for a \nroute to a destination, each node will need to add itself \ninto the node list in the ROUTE REQUEST when it for-\nwards the ROUTE REQUEST to its neighbor. Data veri-\nfication protects the node list by preventing unauthorized \nadding of nodes and unauthorized removal of nodes. \n ARAN \n Authenticated Routing for Ad hoc Networks (ARAN) \n [20] is a routing protocol for ad hoc networks with \nauthentication enabled. It allows routing messages to be \nauthenticated at each node between the source nodes and \nthe destination nodes. The authentication that ARAN has \nimplemented is based on cryptographic certificates. \n ARAN requires a trusted certificate server, the pub-\nlic key of which is known to all valid nodes. Keys are \nassumed to have been established between the trusted \ncertificate server and nodes. For each node to enter into \na wireless ad hoc network, it needs to have a certificate \nissued by the trusted server. The certificate contains the \nIP address of the node, the public key of the node, a time \nstamp indicating the issue time of the certification, and \nthe expiration time of the certificate. Because all nodes \nhave the public key of the trusted server, a certificate can \nbe verified by all nodes to check whether it is authentic. \n" }, { "page_number": 210, "text": "Chapter | 11 Wireless Network Security\n177\nWith an authentic certificate and the corresponding pri-\nvate key, the node that owns the certificate can authenti-\ncate itself using its private key. \n To discover a route from a source node to the desti-\nnation node, the source node sends out a route discov-\nery packet (RDP) to all its neighbors. The RDP is signed \nby the source node’s private key and contains a nonce, a \ntime stamp, and the source node’s certificate. The time \nstamp and the nonce work to prevent replay attacks and \nflooding of the RDP. \n The RDP is then rebroadcast in the network until it \nreaches the destination. The RDP is rebroadcast with \nthe signature and the certificate of the rebroadcaster. On \nreceiving an RDP, each node will first verify the source’s \nsignature and the previous node’s signature on the RDP. \n On receiving an RDP, the destination sends back a \nreply packet (REP) along the reverse path to the source \nafter validating the RDP. The REP contains the nonce \nspecified in the RDP and the signature from the destina-\ntion node. \n The REP is unicast along the reverse path. Each node \non the path will put its own certificate and its own signa-\nture on the RDP before forwarding it to the next node. \nEach node will also verify the signatures on the RDP. An \nREP is discarded if one or more invalid signatures are \nfound on the REP. \n When the source receives the REP, it will first verify \nthe signatures and then the nonce in the REP. A valid \nREP indicates that a route has been discovered. The node \nlist on a valid REP suggests an operational path from the \nsource node to the destination node that is found. \n As an on-demand protocol, nodes keep track of route \nstatus. If there has been no traffic for a route’s lifetime or \na broken link has been detected, the route will be deac-\ntivated. Receiving data on an inactive route will force \na node to signal an error state by using an error (ERR) \nmessage. 
The ERR message is signed by the node that \nproduces it and will be forwarded to the source without \nmodification. The ERR message contains a nonce and a \ntime stamp to ensure that the ERR message is fresh. \n SLSP \n Secure Link State Routing Protocol (SLSP) [21] is a \nsecure routing protocol for ad hoc network building based \non link state protocols. SLSP assumes that each node has \na public/private key pair and has the capability of signing \nand verifying digital signatures. Keys are bound with the \nMedium Access Code and the IP address, allowing neigh-\nbors within transmission range to uniquely verify nodes \nif public keys have been known prior to communication. \n In SLSP, each node broadcasts its IP address and the \nMAC to its neighbor with its signature. Neighbors verify \nthe signature and keep a record of the pairing IP address \nand the MAC. The Neighbor Lookup Protocol (NLP) of \nSLSP extracts and retains the MAC and IP address of \neach network frame received by a node. The extracted \ninformation is used to maintain the mapping of MACs \nand IP addresses. \n Nodes using SLSP periodically send out link state \nupdates (LSUs) to advise the state of their network \nlinks. LSU packets are limited to propagating within \na zone of their origin node, which is specified by the \nmaximum number of hops. To restrict the propagation \nof LSU packets, each LSU packet contains the zone \nradius and the hops traversed fields. Let the maximum \nhop be R ; X , a random number; and H be a hash func-\ntion. Zone – radius will be initialized to H R ( X ) and hops_\ntraversed be initialized to H ( X ). Each LSU packet also \ncontains a TTL field initialized as R \t 1. If TTL \u0004 0 \nor H ( hops – traversed ) \u0003 zone – radius , a node will not \nrebroadcast the LSU packet. Otherwise, the node will \nreplace the hops – traversed field with H ( hops – traversed ) \nand decrease TTL by one. In this way, the hop count is \nauthenticated. SLSP also uses signatures to protect LSU \npackets. Receiving nodes can verify the authenticity and \nthe integrity of the received LSU packets, thus prevent-\ning forging or tampering with LSU packets. \n 5. KEY ESTABLISHMENT \n Because wireless communication is open and the signals \nare accessible by anyone within the vicinity, it is impor-\ntant for wireless networks to establish trust to guard the \naccess to the networks. Key establishment builds rela-\ntions between nodes using keys; thus security services, \nsuch as authentication, confidentiality, and integrity can \nbe achieved for the communication between these nodes \nwith the help of the established keys. \n The dynamically changing topology of wireless net-\nworks, the lack of fixed infrastructure of wireless ad \nhoc and sensor networks, and the limited computation \nand energy resources of sensor networks have all added \ncomplication to the key establishment process in wire-\nless networks. \n Bootstrapping \n Bootstrapping is the process by which nodes in a wire-\nless network are made aware of the presence of others in \nthe network. On bootstrapping, a node gets its identifying \n" }, { "page_number": 211, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n178\ncredentials that can be used in the network the node is \ntrying to join. Upon completion of the bootstrapping, the \nwireless network should be ready to accept the node as a \nvalid node to join the network. 
\n To enter a network, a node needs to present its iden-\ntifying credential to show its eligibility to access the net-\nwork. This process is called preauthentication . Once the \ncredentials are accepted, network security associations \nare established with other nodes. \n These network security associations will serve as fur-\nther proof of authorization in the network. Security asso-\nciations can be of various forms, including symmetric \nkeys, public key pairs, hash key chains, and so on. The \nsecurity associations can be used to authenticate nodes. \nSecurity associations may expire after a certain period \nof time and can be revoked if necessary. For example, if \na node is suspected of being compromised, its security \nassociation will be revoked to prevent the node access-\ning the network. The actual way of revocation depends \non the form of the security associations. \n Bootstrapping in Wireless Ad Hoc Networks \n Wireless ad hoc networks bring new challenges to the \nbootstrapping process by their lack of a centralized \nsecurity infrastructure. It is necessary to build a security \ninfrastructure in the bootstrapping phase. The trust infra-\nstructure should be able to accept nodes with valid cre-\ndentials to enter the network but stop those nodes without \nvalid credentials from joining the network and establish \nsecurity association between nodes within the network. \n To build such a trust infrastructure, we can use any \none of the following three supports: prior knowledge, \ntrusted third parties, or self-organizing capability. Prior \nknowledge is information that has been set on valid \nnodes in advance, such as predistributed secrets or preset \nshared keys. This information can be used to distinguish \nlegitimate nodes from malicious ones. Only nodes with \nprior knowledge will be accepted to enter the network. \nFor example, the predistributed secrets can be used to \nauthenticate legitimate nodes, so the network can simply \nreject those nodes without the predistributed secrets so \nthat they can’t enter the network. \n Trusted third parties can also be used to support the \nestablishment of the trust infrastructure. The trusted third \nparty can be a Certificate Authority (CA), a base station \nof the wireless network, or any nodes that are designated \nto be trusted. If trusted third parties are used, all nodes \nmust mutually agree to trust them and derive their trust \non others from the trusted third parties. One of the issues \nwith this method is that trusted third parties are required \nto be available for access by all nodes across the whole \nnetwork, which is a very strong assumption for wireless \nnetworks as well as an impractical requirement. \n It is desirable to have a self-organizing capability for \nbuilding the trust infrastructure for wireless networks, tak-\ning into account the dynamically changing topology of \nwireless ad hoc networks. Implementing a self-organizing \ncapability for building the trust infrastructure often requires \nan out-of-band authenticated communication channel or \nspecial hardware support, such as tamper-proof hardware \ntokens. \n Bootstrapping in Wireless Sensor Networks \n Bootstrapping nodes in wireless sensor networks is also \nchallenging for the following reasons: \n ● Node capture. Sensor nodes are normally deployed \nin an area that is geographically close or inside the \nmonitoring environment, which might not be a closed \nand confined area under guard. 
Thus sensor nodes are vulnerable to physical capture, because it might be difficult to prevent physical access to the area.
● Node replication. Once a sensor node is compromised, it is possible for adversaries to replicate sensor nodes by using the secrets acquired from the compromised node. In this case, adversaries can produce fake but seemingly legitimate nodes that cannot be distinguished by the network.
● Scalability. A single sensor network may comprise a large number of sensor nodes, and the more nodes in a wireless sensor network, the more complicated bootstrapping becomes.
● Resource limitation. Sensor nodes normally have extremely limited computation power and memory, as well as a limited power supply and weak communication capability. This makes the more elaborate algorithms and methods inapplicable to wireless sensor networks; only algorithms that require a moderate amount of resources can be implemented in them.

Bootstrapping a sensor node is achieved by using incrementally increasing communication output power levels to discover nearby neighbors. The output power level is increased step by step from the minimum level to the maximum level to send out a HELLO message. This enables the sensor node to discover neighbors in order of their distance from it, from the closest to the farthest away.

Key Management

Key management schemes can be classified according to the way keys are set up (see Figure 11.7). Either keys are managed based on contributions from all participating nodes in the network, or they are managed by a central node in the network. Thus key management schemes can be divided into contributory key management schemes, in which all nodes work together equally to manage the keys, and distributed key management schemes, in which one central node is responsible for key management [22].

FIGURE 11.7 Key management schemes.

Classification

The distributed key management schemes can be further divided into symmetric schemes and public key schemes. Symmetric key schemes are based on private key cryptography, whereby shared secrets are used to authenticate legitimate nodes and to provide secure communication between them. The underlying assumption is that the shared secrets are known only to the legitimate nodes involved in the interaction; thus proving knowledge of a shared secret is enough to authenticate a legitimate node. Shared secrets are distributed via secure channels or out-of-band measures. Trust in a node is established if the node has knowledge of a shared secret.

Public key schemes are built on public key cryptography. Keys are constructed in pairs, with a private key and a public key in each pair. Private keys are kept secret by their owners. Public keys are distributed and used to authenticate nodes and to verify credentials. Keys are normally conveyed in certificates for distribution. Certificates are signed by trusted nodes whose public keys are already known and validated, and trust in a certificate is derived from the public key that signs it. Note that, given g^i (mod p) and g^j (mod p), it is hard to compute g^(i·j) (mod p) without knowledge of i or j; this property underlies the contributory schemes described next.

Contributory Schemes

Diffie-Hellman (D-H) [23] is a well-known algorithm for establishing shared secrets.
The strength of the D-H algorithm rests on the discrete logarithm problem: it is hard to calculate s given the value g^s (mod p), where p is a large prime number.

Diffie-Hellman Key Exchange

D-H was designed for establishing a shared secret between two parties, namely node A and node B. The two parties agree on a large prime number p and a generator g. A and B choose random values i and j, respectively, and then exchange the public values g^i (mod p) and g^j (mod p). On receiving g^j (mod p) from B, A is able to calculate the value g^(j·i) (mod p). Similarly, B computes g^(i·j) (mod p). Thus a shared secret, g^(i·j) (mod p), has been set up between A and B.

ING

Ingemarsson, Tang, and Wong (ING) [24] extends the D-H key exchange to a group of n members, d_1, …, d_n. All group members are organized in a ring in which each member has a left neighbor and a right neighbor: node d_i has d_(i−1) as its right neighbor and d_(i+1) as its left neighbor, with indices taken modulo n so that the ring closes.

As in the D-H algorithm, all members of an ING group agree on a large prime number p and a generator g. Initially, node d_i chooses a random number r_i. In the first round of the key exchange, node d_i computes g^(r_i) (mod p) and sends it to its left neighbor d_(i+1); at the same time, node d_i receives the public value g^(r_(i−1)) (mod p) from its right neighbor d_(i−1). From the second round on, let q be the value that node d_i received in the previous round; node d_i then computes the new public value q^(r_i) (mod p) and passes it on. After n − 1 rounds, node d_i has received from its right neighbor a public value g^k (mod p), where k = r_1 r_2 ⋯ r_(i−1) r_(i+1) ⋯ r_n, that is, the product of every member's random value except its own. By raising the value received in the (n − 1)th round to the power r_i, node d_i computes g^l (mod p), where l = r_1 r_2 ⋯ r_n; this value is the shared group key held by every member.
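The arithmetic behind both schemes can be made concrete with a short Python sketch that runs a two-party D-H exchange and then simulates the ING ring for a small group. This is an illustration under assumed toy parameters (a small Mersenne prime and generator 3), with the message passing collapsed into list operations; the helper names are inventions of this example, and a real deployment would use standardized groups, authenticated exchanges, and a vetted cryptographic library.

    import secrets

    # Toy parameters for illustration only; real deployments use standardized
    # 2048-bit (or larger) groups.
    P = 2**127 - 1      # a Mersenne prime, small enough to keep the example readable
    G = 3

    def public_value(secret: int) -> int:
        return pow(G, secret, P)

    # --- Two-party Diffie-Hellman: A holds i, B holds j ---
    i = secrets.randbelow(P - 2) + 1
    j = secrets.randbelow(P - 2) + 1
    key_at_a = pow(public_value(j), i, P)    # A combines B's public value with i
    key_at_b = pow(public_value(i), j, P)    # B combines A's public value with j
    assert key_at_a == key_at_b              # both now hold g^(i*j) mod p

    # --- ING ring for n members; r[k] is member d_k's random value ---
    n = 5
    r = [secrets.randbelow(P - 2) + 1 for _ in range(n)]

    # Round 1: each member sends g^(r_k) to its left neighbor, so member k
    # receives the value originating at its right neighbor, k-1.
    received = [public_value(r[(k - 1) % n]) for k in range(n)]

    # Rounds 2 .. n-1: the sender raises the value it received last round to its
    # own exponent and passes the result along the ring.
    for _ in range(n - 2):
        received = [pow(received[(k - 1) % n], r[(k - 1) % n], P) for k in range(n)]

    # Final local step: each member raises what it received in round n-1 to its
    # own exponent, so every member ends up with g^(r_1*...*r_n) mod p.
    group_keys = [pow(received[k], r[k], P) for k in range(n)]
    assert len(set(group_keys)) == 1

The simulation mirrors the description above: the value each member holds after round n − 1 is missing only its own exponent, so one final local exponentiation gives every member the same group key.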
Hypercube and Octopus (H & O)

The Hypercube protocol [25] assumes that 2^d nodes join to establish a shared secret and that the nodes are organized as a d-dimensional vector space GF(2)^d. Let b_1, …, b_d be a basis of GF(2)^d. The Hypercube protocol takes d rounds to complete:

● In the first round, every participant v ∈ GF(2)^d chooses a random number r_v and conducts a D-H key exchange with the participant v ⊕ b_1, using the random values r_v and r_(v⊕b_1), respectively.
● In the ith round, every participant v ∈ GF(2)^d performs a D-H key exchange with the participant v ⊕ b_i, where both v and v ⊕ b_i use the value generated in the previous round as the random number for the D-H key exchange.

This algorithm can be explained more intuitively using a complete binary tree. All the nodes are placed in a complete binary tree as leaves, with the leaves at level 0 and the root at level d, and D-H key exchanges are performed from the leaves up to the root. The key exchange takes d rounds:

● In the first round, each leaf chooses a random number k and performs a D-H key exchange with its sibling leaf, which has a random number j; the resulting value g^(k·j) (mod p) is saved as the random value for the parent node of the two leaves.
● In the ith round, each node at level i − 1 performs a D-H key exchange with its sibling node using the random values m and n, respectively, that they obtained in the previous round; the resulting value g^(m·n) (mod p) is saved as the random value for the parent node of the two nodes.

After d rounds, the root of the complete binary tree contains the established shared secret s.

The Hypercube protocol assumes that there are exactly 2^d network nodes. The Octopus protocol removes this assumption and extends the Hypercube protocol to work with an arbitrary number of nodes; thus the Octopus protocol can be used to establish a shared key for a node set of any size.

Distributed Schemes

A partially distributed threshold CA scheme [26] works with a normal PKI system in which a CA exists. The private key of the CA is split and distributed over a set of n server nodes using a (k, n) secret-sharing scheme [27]. The (k, n) secret-sharing scheme allows any k or more of the n server nodes to work together to reveal the CA's private key; any set of fewer than k nodes is unable to reveal it. With a threshold signature scheme [28], any k of the n nodes can cooperate to sign a certificate: each of the k nodes produces a piece of the signature on the request to sign a given certificate, and from the k partial signatures a valid signature, identical to one produced with the CA's private key itself, can be assembled.

Partially Distributed Threshold CA Scheme

In this way, the partially distributed threshold CA scheme avoids the bottleneck of the centralized CA of conventional PKI infrastructures. As long as at least k of the n nodes are available, the network can always issue and sign new certificates, and an attack on any single node will not bring the whole CA down. Only when an attack manages to paralyze more than n − k nodes, leaving fewer than k operational, does the CA's signing service become unavailable.

To further improve the security of the private key that is distributed over the n nodes, proactive security [29] can be imposed. Proactive security forces the private key shares to be refreshed periodically, and each refresh invalidates the previous share held by a node. Attacks on multiple nodes must therefore be completed within a single refresh period to succeed; specifically, only an attack that compromises k or more nodes within one refresh period can recover the key.

While conventional PKI systems depend on directories to publish public key certificates, with the partially distributed threshold CA scheme it is suggested that certificates be disseminated to communication peers when a communication channel is being established. This is because the availability of centralized directories cannot be guaranteed in wireless networks; it is therefore not realistic to assume that a centralized directory is available.
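The (k, n) splitting of the CA's private key can be illustrated with a short sketch of Shamir-style secret sharing [27]. The field size, the helper names, and the stand-in integer used as the "CA private key" are assumptions of this example; in particular, threshold signing, in which each node contributes a partial signature without the key ever being reassembled in one place, requires additional machinery that is not shown here.

    # Minimal (k, n) secret-sharing sketch in the spirit of Shamir [27].
    import secrets

    PRIME = 2**521 - 1          # a prime larger than any secret shared here

    def split_secret(secret: int, k: int, n: int):
        """Create n shares; any k of them reconstruct the secret."""
        # Random polynomial of degree k-1 whose constant term is the secret.
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
        def f(x):
            return sum(c * pow(x, e, PRIME) for e, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 over the prime field."""
        secret = 0
        for idx, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for jdx, (xj, _) in enumerate(shares):
                if idx == jdx:
                    continue
                num = (num * (-xj)) % PRIME
                den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    ca_private_key = secrets.randbelow(PRIME)        # stand-in for the CA key
    shares = split_secret(ca_private_key, k=3, n=5)  # 5 server nodes, threshold 3
    assert reconstruct(shares[:3]) == ca_private_key    # any 3 shares suffice
    assert reconstruct(shares[1:4]) == ca_private_key
    assert reconstruct(shares[:2]) != ca_private_key     # 2 shares are not enough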
Self-Organized Key Management (PGP-A)

A self-organized key management scheme (PGP-A) [30] has its basis in the Pretty Good Privacy (PGP) [31] scheme. PGP is built on the "web of trust" model, in which all nodes have equal roles in playing the CA. Each node generates its own public/private key pair and signs other nodes' public keys if it trusts those nodes. The signed certificates are kept by nodes in their own certificate repositories instead of being published by centralized directories as in X.509 PKI systems [32].

PGP-A treats trust as transitive, so trust can be derived from a trusted node's trust in another node; that is, if node A trusts node B, and node B trusts node C, then A should also trust C, provided that A knows that node B trusts node C.

To verify a key of a node u, a node j merges its certificate repository with those of j's trusted nodes, those of the nodes trusted by j's trusted nodes, and so forth. In this way, node j builds up a web of trust with node j at the center and j's directly trusted nodes as j's neighbors; node l is linked with node k if node k trusts node l. Node j can then search this web of trust for a path from j to u. If such a path exists, let it be a sequence of nodes S: node_i, for i = 1, …, n, where n is the length of the path, node_1 = j, and node_n = u. This means that node_i trusts node_(i+1) for all i = 1, …, n − 1; therefore u can be trusted by j, and the path S represents a verifiable chain of certificates. PGP-A does not guarantee that a node u that ought to be trusted by node j will always be trusted by node j, since node j may fail to find a path from node j to node u in the web of trust. This can happen when node j has not acquired enough certificates from its trusted nodes to cover a path from node j to node u.
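The path search at the heart of PGP-A is essentially a reachability problem over "who has signed whose key". The following Python sketch shows that search over a merged repository represented as a dictionary of trust edges; the representation and names are assumptions of this example, and a real implementation would additionally verify every signature along the returned chain before accepting u's key.

    # Minimal web-of-trust path search in the spirit of PGP-A.
    from collections import deque

    def find_trust_chain(trust_edges, j, u):
        """trust_edges maps a node to the set of nodes it trusts (whose public
        keys it has signed). Returns a path j -> ... -> u, or None if node j
        has not gathered enough certificates to reach u."""
        parents = {j: None}
        queue = deque([j])
        while queue:
            node = queue.popleft()
            if node == u:
                chain = []                      # rebuild the chain back to j
                while node is not None:
                    chain.append(node)
                    node = parents[node]
                return list(reversed(chain))
            for neighbor in trust_edges.get(node, set()):
                if neighbor not in parents:
                    parents[neighbor] = node
                    queue.append(neighbor)
        return None

    # Example merged repository: j trusts a and b; a trusts u.
    edges = {"j": {"a", "b"}, "a": {"u"}, "b": set()}
    print(find_trust_chain(edges, "j", "u"))   # ['j', 'a', 'u']
    print(find_trust_chain(edges, "b", "u"))   # None: no verifiable chain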
Self-Healing Session Key Distribution

The preceding two key management schemes are public key management schemes. The one discussed here, self-healing session key distribution [33], is a symmetric key management scheme. In such a scheme, keys can be distributed either by an online key distribution server or by key predistribution. A key predistribution scheme normally comprises a key predistribution phase, a shared-key discovery phase, and a path-key establishment phase.

In the key predistribution phase, a key pool containing a large number of keys is created, and every key is identified by a unique key identifier. Each network node is given a set of keys from the key pool. The shared-key discovery phase begins when a node tries to communicate with others: the nodes exchange their key identifiers to find out whether they share any keys, and a shared key can then be used to establish a secure channel for communication. If no shared key exists, a key path must be discovered. A key path is a sequence of nodes in which every pair of adjacent nodes shares a key; along the key path, a message can travel securely from the first node to the last, by which a secure channel can be established between the first node and the last node.

The self-healing session key distribution (S-HEAL) [34] assumes the existence of a group manager and preshared secrets. Keys are distributed from the group manager to the group members. Let h be a polynomial such that node i knows h(i), and let K be the group key to be distributed. K is masked by h in the distribution: f(x) = h(x) + K. The polynomial f(x) is the information that the group manager sends out to all its group members. Node j then calculates K = f(j) − h(j) to reveal the group key; without knowledge of h(j), node j is unable to recover K.

To enable revocation in S-HEAL, the polynomial h(x) is replaced by a bivariate polynomial s(x, y). The group key is masked by s(x, y) when it is distributed to group members, in the form f(N, x) = s(N, x) + K, and node i must calculate s(N, i) to recover K. Revocation-enabled S-HEAL works by preventing revoked nodes from calculating s(N, i), and thus from recovering K.

Let s be of degree t; then t + 1 values are needed to compute s(x, i). Assuming that s(i, i) is predistributed to node i, node i needs another t values to recover s(N, i); these are obtained from the polynomials s(r_1, x), …, s(r_t, x), which are disseminated to the group members together with the key update. If the group manager wants to revoke node i, it sets one of the values s(r_1, x), …, s(r_t, x) to s(i, x). In this case node i obtains only t distinct values instead of t + 1, because the share derived from s(i, x) duplicates its predistributed value s(i, i); node i therefore cannot compute s(N, i) and cannot recover K. This scheme can revoke at most t nodes at the same time.

REFERENCES

[1] L. M. S. C. of the IEEE Computer Society, Wireless LAN medium access control (MAC) and physical layer (PHY) specifications, technical report, IEEE Standard 802.11, 1999 ed., 1999.
[2] L. M. S. C. of the IEEE Computer Society, Wireless LAN medium access control (MAC) and physical layer (PHY) specifications, technical report, IEEE Standard 802.11, 1999 ed., 1999.
[3] R.L. Rivest, The RC4 encryption algorithm, RSA Data Security, Inc., technical report, March 1992.
[4] E. Dawson, L. Nielsen, Automated cryptanalysis of XOR plaintext strings, Cryptologia, 20(2), April 1996.
[5] S. Singh, The Code Book: The Evolution of Secrecy from Mary, Queen of Scots, to Quantum Cryptography, Doubleday, 1999.
[6] L. M. S. C. of the IEEE Computer Society, Wireless LAN medium access control (MAC) and physical layer (PHY) specifications, technical report, IEEE Standard 802.11, 1999 ed., 1999.
[7] W.A. Arbaugh, An inductive chosen plaintext attack against WEP/WEP2, IEEE Document 802.11-01/230, May 2001.
[8] J.R. Walker, Unsafe at any key size: an analysis of the WEP encapsulation, IEEE Document 802.11-00/362, October 2000.
[9] N. Borisov, I. Goldberg, D. Wagner, Intercepting mobile communications: the insecurity of 802.11, MobiCom 2001.
[10] B. Aboba, L. Blunk, J. Vollbrecht, J. Carlson, E.H. Levkowetz, Extensible Authentication Protocol (EAP), request for comment, Network Working Group, 2004.
[11] A. Perrig, R. Szewczyk, V. Wen, D. Culler, J.D. Tygar, SPINS: security protocols for sensor networks, MobiCom '01: Proceedings of the 7th Annual International Conference on Mobile Computing and Networking, 2001.
[12] A. Perrig, R. Canetti, D. Xiaodong Song, J.D.
Tygar, Efficient \nand secure source authentication for multicast, NDSS 01: \nNetwork and Distributed System Security Symposium, 2001. \n [13] A. Perrig, J.D. Tygar, D. Song, R. Canetti, Efficient authentication \nand signing of multicast streams over lossy channels, SP ’ 00: \nProceedings of the 2000 IEEE Symposium on Security and \nPrivacy, 2000. \n [14] A. Perrig, R. Canetti, J.D. Tygar, D. Song, RSA CryptoBytes, \n5, 2002. \n [15] Yih-Chun Hu , D.B. Johnson , A. Perrig , SEAD: Secure efficient \ndistance vector routing for mobile wireless ad hoc networks , in: \n WMCSA ’ 02: Proceedings of the Fourth IEEE Workshop on \nMobile Computing Systems and Applications , IEEE Computer \nSociety , Washington, DC , 2002 , p. 3 . \n [16] C.E. Perkins , P. Bhagwat , Highly dynamic destination-sequenced \ndistance-vector routing (DSDV) for mobile computers , SIGCOMM \nComput. Commun. Rev. 24 ( 4 ) , 1994 , 234 – 244 . \n [17] C.E. Perkins , P. Bhagwat , Highly dynamic destination-sequenced \ndistance-vector routing (DSDV) for mobile computers , SIGCOMM \nComput. Commun. Rev. 24 ( 4 ) ( 1994 ) 234 – 244 . \n" }, { "page_number": 215, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n182\n [18] Yih-Chun Hu , A. Perrig , D. Johnson , Ariadne: a secure \non-demand routing protocol for ad hoc networks , Wireless \nNetworks Journal , 11 , ( 1 ) , 2005 . \n [19] D.B. Johnson , D.A. Maltz , Dynamic source routing in ad hoc \nwireless networks , Mobile Computing , Kluwer Academic \nPublishers , 1996 , pp. 153 – 181 . \n [20] K. Sanzgiri, B. Dahill, B.N. Levine, C. Shields, E.M. Belding-\nRoyer, A secure routing protocol for ad hoc networks, 10th IEEE \nInternational Conference on Network Protocols (ICNP’02), 2002. \n [21] P. Papadimitratos, Z.J. Haas, Secure link state routing for mobile \nad hoc networks, saint-w , 00, 2003. \n [22] E. Cayirci , C. Rong , Security in wireless ad hoc, sensor, and \nmesh networks , John Wiley & Sons , 2008 . \n [23] W. Diffie , M.E. Hellman , New directions in cryptography , IEEE \nTransactions on Information Theory , IT-22 ( 6 ) , 644 – 654 , 1976 . \n [24] I. Ingemarsson , D. Tang , C. Wong , A conference key distribu-\ntion system , IEEE Transactions on Information Theory , 28 , ( 5 ) , \n 714 – 720, September 1982 . \n [25] K. Becker, U. Wille, Communication complexity of group key \ndistribution, ACM conference on computer and communications \nsecurity , 1998. \n [26] L. Zhou , Z.J. Haas , Securing ad hoc networks , IEEE Network 13 , \n 24 – 30, 1999 . \n [27] A. Shamir , How to share a secret , Comm. ACM, 22 ( 11 ) , 1979 . \n [28] Y. Desmedt , Some recent research aspects of threshold cryptog-\nraphy , ISW , 158 – 173, 1997 . \n [29] R. Canetti , A. Gennaro , Herzberg , D. Naor , Proactive secu-\nrity: Long-term protection against break-ins , CryptoBytes 3 ( 1 ) , \n Spring 1997 . \n [30] S. Capkun , L. Butty á n , J.-P. Hubaux , Self-organized public-key \nmanagement for mobile ad hoc networks , IEEE Trans. Mob. \nComput. , 2 ( 1 ) , 52 – 64 , 2003 . \n [31] P. Zimmermann , The Official PGP User’s Guide , The MIT Press , \n 1995 . \n [32] ITU-T. Recommendation X.509, ISO/IEC 9594-8, Information \nTechnology: Open Systems Interconnection – The Directory: Public-\nkey and Attribute Certificate Frameworks, 4th ed ., 2000, ITU. \n [33] J. Staddon, S.K. Miner, M.K. Franklin, D. Balfanz, M. Malkin, \nD. Dean, Self-healing key distribution with revocation, IEEE \nSymposium on Security and Privacy , 2002. \n [34] J. Staddon, S.K. 
Miner, M.K. Franklin, D. Balfanz, M. Malkin, \nD. Dean, Self-healing key distribution with revocation, IEEE \nSymposium on Security and Privacy , 2002. \n" }, { "page_number": 216, "text": "183\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Cellular Network Security \n Peng Liu \n Pennsylvania State University \n Thomas F. LaPorta \n Pennsylvania State University \n Kameswari Kotapati \n Pennsylvania State University \n Chapter 12 \n In recent years, cellular networks have become open pub-\nlic networks to which end subscribers have direct access. \nThis has greatly increased the threats to the cellular net-\nwork. Though cellular networks have vastly advanced in \ntheir performance abilities, the security of these networks \nstill remains highly outdated. As a result, they are one of \nthe most insecure networks today — so much so that using \nsimple off-the-shelf equipment, any adversary can cause \nmajor network outages affecting millions of subscribers. \n In this chapter, we address the security of the cellu-\nlar network. We educate readers on the current state of \nsecurity of the network and its vulnerabilities. We also \noutline the cellular network specific attack taxonomy, \nalso called the three-dimensional attack taxonomy . We \ndiscuss the vulnerability assessment tools for cellular \nnetworks. Finally, we provide insights as to why the net-\nwork is so vulnerable and why securing it can prevent \ncommunication outages during emergencies. \n 1. INTRODUCTION \n Cellular networks are high-speed, high-capacity voice \nand data communication networks with enhanced multi-\nmedia and seamless roaming capabilities for supporting \ncellular devices. With the increase in popularity of cel-\nlular devices, these networks are used for more than just \nentertainment and phone calls. They have become the \nprimary means of communication for finance-sensitive \nbusiness transactions, lifesaving emergencies, and life-/\nmission-critical services such as E-911. Today these net-\nworks have become the lifeline of communications. \n A breakdown in the cellular network has many adverse \neffects, ranging from huge economic losses due to finan-\ncial transaction disruptions; loss of life due to loss of \nphone calls made to emergency workers; and communi-\ncation outages during emergencies such as the September \n11, 2001, attacks. Therefore, it is a high priority for the \ncellular network to function accurately. \n It must be noted that it is not difficult for unscru-\npulous elements to break into the cellular network and \ncause outages. The major reason for this is that cellular \nnetworks were not designed with security in mind. They \nevolved from the old-fashioned telephone networks that \nwere built for performance. To this day, the cellular net-\nwork has numerous well-known and unsecured vulner-\nabilities providing access to adversaries. Another feature \nof cellular networks is network relationships (also called \n dependencies ) that cause certain types of errors to propa-\ngate to other network locations as a result of regular net-\nwork activity. Such propagation can be very disruptive to \nthe network, and in turn it can affect subscribers. Finally, \nInternet connectivity to the cellular network is another \nmajor contributor to the cellular network’s vulnerability \nbecause it gives Internet users direct access to cellular \nnetwork vulnerabilities from their homes. 
\n To ensure that adversaries do not access the network and \ncause breakdowns, a high level of security must be main-\ntained in the cellular network. However, though great efforts \nhave been made to improve the cellular network in terms of \nsupport for new and innovative services, greater number of \nsubscribers, higher speed, and larger bandwidth, very little \nhas been done to update the security of the cellular network. \nAccordingly, these networks have become highly attractive \n" }, { "page_number": 217, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n184\ntargets to adversaries, not only because of their lack of secu-\nrity but also due to the ease with which these networks can \nbe exploited to affect millions of subscribers. \n In this chapter we analyze the security of cellular \nnetworks. Toward understanding the security issues in \ncellular networks, the rest of the chapter is organized as \nfollows. We present a comprehensive overview of cel-\nlular networks with a goal of providing a fundamental \nunderstanding of their functioning. Next we present the \ncurrent state of cellular network security through an \nin-depth discussion on cellular network vulnerabilities \nand possible attacks. In addition, we present the cellular \nnetwork specific attack taxonomy. Finally, we present a \nreview of current cellular network vulnerability assess-\nment techniques and conclude with a discussion. \n 2. OVERVIEW OF CELLULAR NETWORKS \n The current cellular network is an evolution of the early-\ngeneration cellular networks that were built for optimal \nperformance. These early-generation cellular networks \nwere proprietary and owned by reputable organizations . \nThey were considered secure due to their proprietary \nownership and their closed nature , that is, their control \ninfrastructure was unconnected to any public network \n(such as the Internet) to which end subscribers had direct \naccess. Security was a nonissue in the design of these \nnetworks. \n Recently, connecting the Internet to the cellular network \nhas not only imported the Internet vulnerabilities to the cel-\nlular network, it has also given end subscribers direct access \nto the control infrastructure of the cellular network, thereby \nopening the network. Also, with the increasing demand for \nthese networks, a large number of new network operators \nhave come into the picture. Thus, the current cellular envi-\nronment is no longer a safe, closed network but rather an \ninsecure, open network with many unknown network oper-\nators having nonproprietary access to it. Here we present a \nbrief overview of the cellular network architecture. \n Overall Cellular Network Architecture \n Subscribers gain access to the cellular network via radio \nsignals enabled by the radio access network, as shown \nin Figure 12.1 . The radio access network is connected to \nthe wireline portion of the network, also called the core \nnetwork . Core network functions include servicing sub-\nscriber requests and routing traffic. The core network is \nalso connected to the Public Switched Telephone Network \n(PSTN) and the Internet, as illustrated in Figure 12.1 [1] . \n The PSTN is the circuit-switched public voice tel-\nephone network that is used to deliver voice telephone \ncalls on the fixed landline telephone network . The PSTN \nuses Signaling System No. 
7 (SS7), a set of teleph-\nony signaling protocols defined by the International \nTelecommunication Union (ITU) for performing teleph-\nony functions such as call delivery, call routing, and \nbilling. The SS7 protocols provide a universal structure \nfor telephony network signaling, messaging, interfac-\ning, and network maintenance. PSTN connectivity to \nthe core network enables mobile subscribers to call fixed \nnetwork subscribers, and vice versa. In the past, PSTN \nnetworks were also closed networks because they were \nunconnected to other public networks. \n The core network is also connected to the Internet. \nInternet connectivity allows the cellular network to pro-\nvide innovative multimedia services such as weather \nreports, stock reports, sports information, chat, and elec-\ntronic mail. Interworking with the Internet is possible \nusing protocol gateways, federated databases, and mul-\ntiprotocol mobility managers [2] . Interworking with the \nInternet has created a new generation of services called \n cross-network services. These are multivendor, multido-\nmain services that use a combination of Internet-based \ndata and data from the cellular network to provide a vari-\nety of services to the cellular subscriber. A sample cross-\nnetwork service is the Email Based Call Forwarding \nService (CFS), which uses Internet-based email data (in \na mail server) to decide on the call-forward number (in a \ncall-forward server) and delivers the call via the cellular \nnetwork. \n From a functional viewpoint, the core network may \nalso be further divided into the circuit-switched (CS) \ndomain, the packet-switched (PS) domain, and the IP \nMultimedia Subsystem (IMS). In the following, we fur-\nther discuss the core network organization. \nRadio\nAccess\nNetwork\nIP\nMultimedia\nSystem\nCircuit\nSwitched\nDomain\nPacket\nSwitched\nDomain\nPSTN\nCore Network\nInternet\n FIGURE 12.1 Cellular network architecture. \n" }, { "page_number": 218, "text": "Chapter | 12 Cellular Network Security\n185\n Core Network Organization \n Cellular networks are organized as collections of inter-\nconnected network areas , where each network area \ncovers a fixed geographical region (as shown in Figure \n12.2 ). Every subscriber is affiliated with two networks: \nthe home network and the visiting network . \n Every subscriber is permanently assigned to the \nhome network, from which they can roam onto other \nvisiting networks. The home network maintains the \nsubscriber profile and current subscriber location. The \nvisiting network is the network where the subscriber is \ncurrently roaming. It provides radio resources, mobility \nmanagement, routing, and services for roaming subscrib-\ners. The visiting network provides service capabilities to \nthe subscribers on behalf of the home environment [3] . \n The core network is facilitated by network servers \n(also called service nodes ). Service nodes are composed \nof (1) a variety of data sources (such as cached read-\nonly, updateable, and shared data sources) to store data \nsuch as subscriber profile and (2) service logic to per-\nform functions such as computing data items, retrieving \ndata items from data sources, and so on. \n Service nodes can be of different types, with each \ntype assigned specific functions. The major service node \ntypes in the circuit-switched domain include the Home \nLocation Register (HLR), the Visitor Location Register \n(VLR), the Mobile Switching Center (MSC), and the \nGateway Mobile Switching Center (GMSC) [4] . 
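As a preview of how these four service node types divide responsibility, the following Python sketch models the lookup chain used to reach a roaming subscriber: the GMSC asks the subscriber's HLR, which consults the VLR of the visited area, which answers with routing information for its MSC. The data layout, method names, and phone number are assumptions of this example and deliberately ignore the actual MAP messages, which are described in the call delivery discussion below.

    # Toy model of the HLR/VLR/GMSC lookup chain (assumed abstraction, not MAP).
    class VLR:                                   # per network area: roaming subscribers
        def __init__(self, msc_address):
            self.msc_address = msc_address
            self.visitors = set()

        def provide_roaming_number(self, called_number):
            # In the real flow this answers the PRN request with a temporary number.
            return self.msc_address if called_number in self.visitors else None

    class HLR:                                   # home network: permanent profiles
        def __init__(self):
            self.location = {}                   # called_number -> serving VLR

        def update_location(self, subscriber, vlr):
            self.location[subscriber] = vlr      # updated as the subscriber roams

        def send_routing_info(self, called_number):
            vlr = self.location.get(called_number)
            return vlr.provide_roaming_number(called_number) if vlr else None

    class GMSC:                                  # entry point from the PSTN
        def __init__(self, hlr):
            self.hlr = hlr

        def route_call(self, called_number):
            return self.hlr.send_routing_info(called_number)   # where to send the IAM

    vlr_a = VLR("MSC_A"); vlr_a.visitors.add("+15551234567")
    hlr = HLR(); hlr.update_location("+15551234567", vlr_a)
    print(GMSC(hlr).route_call("+15551234567"))   # MSC_A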
\n All subscribers are permanently assigned to a fixed \nHLR located in the home network. The HLR stores per-\nmanent subscriber profile data and relevant temporary data \nsuch as current subscriber location (pointer to VLR) of all \nsubscribers assigned to it. Each network area is assigned a \nVLR. The VLR stores temporary data of subscribers cur-\nrently roaming in its assigned area; this subscriber data \nis received from the HLR of the subscriber. Every VLR \nis always associated with an MSC. The MSC acts as an \ninterface between the radio access network and the core \nnetwork. It also handles circuit-switched services for \nsubscribers currently roaming in its area. The GMSC is \nin charge of routing the call to the actual location of the \nmobile station. Specifically, the GMSC acts as interface \nbetween the fixed PSTN network and the cellular network. \nThe radio access network comprises a transmitter, receiver, \nand speech transcoder called the base station (BS) [5] . \n Service nodes are geographically distributed and \nservice the subscriber through collaborative function-\ning of various network components. Such collaborative \nfunctioning is possible due to the network relation-\nships (called dependencies). A dependency means that \na network component must rely on other network com-\nponents to perform a function. For example, there is a \n dependency between service nodes to service subscrib-\ners. Such a dependency is made possible through sign-\naling messages containing data items. Service nodes \ntypically request other service nodes to perform specific \noperations by sending them signaling messages contain-\ning data items with predetermined values. On receiving \nsignaling messages, service nodes realize the opera-\ntions to perform based on values of data items received \nin signaling messages. Further, dependencies may exist \nbetween data items so that received data items may be \nused to derive other data items. Several application layer \nprotocols are used for signaling messages. Examples of \nsignaling message protocols include Mobile Application \nPart (MAP), ISDN User Part (ISUP), and Transaction \nCapabilities Application Part (TCAP) protocols. \n Typically in the cellular network, to provide a spe-\ncific service a preset group of signaling messages is \nexchanged between a preset group of service node types. \nThe preset group of signaling messages indicates the \noperations to be performed at the various service nodes \nand is called a signal flow . In the following, we use the \n call delivery service [6] to illustrate a signal flow and \nshow how the various geographically distributed service \nnodes function together. \n Call Delivery Service \n The call delivery service is a basic service in the circuit-\nswitched domain. It is used to deliver incoming calls to \nany subscriber with a mobile device regardless of their \nlocation. The signal flow of the call delivery service is \nillustrated in Figure 12.3 . The call delivery service sig-\nnal flow comprises MAP messages SRI , SRI_ACK , PRN , \nand PRN_ACK ; ISUP message IAM ; and TCAP messages \n SIFIC , Page MS , and Page . \n Figure 12.3 illustrates the exchange of signal mes-\nsages between different network areas. It shows that when \nBS\nHLR\nVisiting Network\nHome Network\nNetwork Area A\nNetwork Area B\nGMSC\nMSC\nVLR\nGMSC\nGMSC\n FIGURE 12.2 Core network organization. 
\n" }, { "page_number": 219, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n186\na subscriber makes a call using his mobile device, the call \nis sent in the form of a signaling message IAM to the near-\nest GMSC, which is in charge of routing calls and passing \nvoice traffic between different networks. This signaling \nmessage IAM contains data items such as called number \nthat denotes the mobile phone number of the subscriber \nreceiving this call. The called number is used by the \nGMSC to locate the address of the HLR (home network) \nof the called party. The GMSC uses this address to send \nthe signaling message SRI . \n The SRI message is an intimation to the HLR of the \narrival of an incoming call to a subscriber with called \nnumber as mobile phone number. It contains data items \nsuch as the called number and alerting pattern . The alert-\ning pattern denotes the pattern ( packet-switched data, \nshort message service, or circuit-switched call ) used \nto alert the subscriber receiving the call. The HLR uses \nthe called number to retrieve from its database the cur-\nrent location (pointer to VLR) of the subscriber receiving \nthe call. The HLR uses this subscriber location to send \nthe VLR the message PRN . The PRN message is a request \nfor call routing information (also called roaming number ) \nfrom the VLR where the subscriber is currently roaming. \nThe PRN message contains the called number , alerting \npattern , and other subscriber call profile data items. \n The VLR uses the called number to store the alerting \npattern and subscriber call profile data items and assign \nthe roaming number for routing the call. This roaming \nnumber data item is passed on to the HLR (in message \n PRN_ACK ), which forwards it to the GMSC (in message \n SRI_ACK ). The GMSC uses this roaming number to route \nthe call (message IAM ) to the MSC where the subscriber \nis currently roaming. On receipt of the message IAM , the \nMSC assigns the called number resources for the call and \nalso requests the subscriber call profile data items, and \n alerting pattern for the called number (using message \n SIFIC ) from the VLR, and receives the same in the Page MS \nmessage. The MSC uses the alerting pattern in the incom-\ning call profile to derive the page type data item. The page \ntype data item denotes the manner in which to alert the \nmobile station. It is used to page the mobile subscriber \n(using message Page ). Thus subscribers receive incoming \ncalls irrespective of their locations in the network. \n If data item values are inaccurate, the network can \nmisoperate and subscribers will be affected. Hence, accu-\nrate functioning of the network is greatly dependent on \nthe integrity of data item values. Thus signal flows allow \nthe various service nodes to function together, ensuring \nthat the network services its subscribers effectively. \n 3. THE STATE OF THE ART OF CELLULAR \nNETWORK SECURITY \n This part of the chapter presents the current state of the \nart of cellular network security. Because the security of \nthe cellular network is the security of each aspect of the \nnetwork, that is, radio access network, core network, \nInternet connection, and PSTN connection, we detail the \nsecurity of each in detail. \n Security in the Radio Access Network \n The radio access network uses radio signals to connect \nthe subscriber’s cellular device with the core wireline \n2. Send Rout\nInfo (SRI)\n3. Provide Roam\nNum (PRN)\n4. 
Provide Roam\nNum Ack\n5. Send Rout\nInfo Ack\n(SRI_ACK)\n7. SIFIC\n8. Page MS\n9. Page\nAir\nInterface\nHome Network\nVisiting Network\n(PRN_ACK)\nGMSC\nHLR\nVLR\nMSC\nMessage\n(IAM)\n1. Initial\nAddress\n6. Initial Address Message (IAM)\n FIGURE 12.3 Signal flow in the call delivery service. \n" }, { "page_number": 220, "text": "Chapter | 12 Cellular Network Security\n187\nnetwork. Hence it would seem that attacks on the radio \naccess network could easily happen because anyone \nwith a transmitter/receiver could capture these signals. \nThis was very true in the case of early-generation cellu-\nlar networks (first and second generations), where there \nwere no guards against eavesdropping on conversations \nbetween the cellular device and BS; cloning of cellular \ndevices to utilize the network resources without paying; \nand cloning BSs to entice users to camp at the cloned BS \nin an attack called a false base station attack , so that the \ntarget user provides secret information to the adversary. \n In the current generation (third-generation) cellular net-\nwork, all these attacks can be prevented because the net-\nwork provides adequate security measures. Eavesdropping \non signals between the cellular device and BS is not pos-\nsible, because cipher keys are used to encrypt these sig-\nnals. Likewise, replay attacks on radio signals are voided \nby the use of nonrepeated random values. Use of integrity \nkeys on radio conversations voids the possibility of dele-\ntion and modification of conversations between cellular \ndevices and BSs. By allowing the subscriber to authenti-\ncate the network, and vice versa, this generation voids the \nattacks due to cloned cellular devices and BSs. Finally, as \nthe subscriber’s identity is kept confidential by only using \na temporary subscriber identifier on the radio network, it is \nalso possible to maintain subscriber location privacy [7] . \n However, the current generation still cannot prevent a \ndenial-of-service attack from occurring if a large number \nof registration requests are sent via the radio access \nnetwork (BS) to the visiting network (MSC). Such a \nDoS attack is possible because the MSC cannot realize \nthat the registration requests are fake until it attempts \nto authenticate each request and the request fails. To \nauthenticate each registration request, the MSC must \nfetch the authentication challenge material from the cor-\nresponding HLR. Because the MSC is busy fetching the \nauthentication challenge material, it is kept busy and the \ngenuine registration requests are lost [8] . Overall there is \na great improvement in the radio network security in the \ncurrent third-generation cellular network. \n Security in Core Network \n Though the current generation network has seen many \nsecurity improvements in the radio access network, the \nsecurity of the core network is not as improved. Core \nnetwork security is the security at the service nodes and \nsecurity on links (or wireline signaling message) between \nservice nodes. \n With respect to wireline signaling message secu-\nrity, of the many wireline signaling message protocols, \nprotection is only provided for the Mobile Application \nPart (MAP) protocol. The MAP protocol is the cleart-\next application layer protocol that typically runs on the \nsecurity-free SS7 protocol or the IP protocol. MAP is an \nessential protocol and it is primarily used for message \nexchange involving subscriber location management, \nauthentication, and call handling. 
The reason that protec-\ntion is provided for only the MAP protocol is that it car-\nries authentication material and other subscriber-specific \nconfidential data; therefore, its security was considered \ntop priority and was standardized [9 – 11] . Though pro-\ntection for other signaling message protocols was also \nconsidered important, its was left as an improvement for \nthe next-generation network [12] . \n Security for the MAP protocol is provided in the \nform of the newly proposed protocol called Mobile \nApplication Part Security (MAPSec) when MAP runs \non the SS7 protocol stack, or Internet Protocol Security \n(IPSec) when MAP runs on the IP protocol. \n Both MAPSec and IPSec, protect MAP messages \non the link between service nodes by negotiating secu-\nrity associations. Security associations comprise keys, \nalgorithms, protection profiles, and key lifetimes used \nto protect the MAP message. Both MAPSec and IPSec \nprotect MAP messages by providing source service node \nauthentication and message encryption to prevent eaves-\ndropping, MAP corruption, and fabrication attacks. \n It must be noted that though MAPSec and IPSec are \ndeployed to protect individual MAP messages on the \nlink between service nodes, signaling messages typically \noccur as a group in a signal flow, and hence signaling \nmessages must be protected not only on the link but also \nin the intermediate service nodes. Also, the deployment \nof MAPSec and IPSec is optional; hence if any service \nprovider chooses to omit MAPSec/IPSec’s deployment, \nthe efforts of all other providers are wasted. Therefore, \nto completely protect MAP messages, MAPSec/IPSec \nmust be used by every service provider. \n With respect to wireline service nodes, while MAPSec/\nIPSec protects links between service nodes, there is no \nstandardized method for protecting service nodes [13] . \nRemote and physical access to service nodes may be \nsubject to operator’s security policy and hence could be \nexploited (insider or outsider) if the network operator is \nlax with security. Accordingly, the network suffers from \nthe possibility of node impersonation, corruption of data \nsources, and service logic attacks. For example, unauthor-\nized access to the HLR could deactivate customers or acti-\nvate customers not seen by the building system. Similarly, \nunauthorized access to the MSC could cause outages for a \nlarge number of users in a given network area. \n" }, { "page_number": 221, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n188\n Corrupt data sources or service logic in service nodes \nhave the added disadvantage of propagating this corrup-\ntion to other service nodes in the network [14 – 16] via \nsignaling messages. This fact was recently confirmed \nby a security evaluation of cellular networks [17] that \nshowed the damage potential of a compromised serv-\nice node to be much greater than the damage potential \nof compromised signaling messages. Therefore, it is of \nutmost importance to standardize a scheme for protect-\ning service nodes in the interest of not only preventing \nnode impersonation attacks but also preventing the cor-\nruption from propagating to other service nodes. \n In brief, the current generation core network is lack-\ning in security for all types of signaling messages, for \nMAP signaling messages in service nodes, and a stand-\nardized method for protecting service nodes. 
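The security-association idea behind MAPSec and IPSec can be sketched as follows: two service nodes that share a negotiated association (key, algorithm, key lifetime) use it to authenticate and encrypt each MAP-like message on the link between them. The SecurityAssociation structure, the field names, and the use of AES-GCM through the third-party cryptography package are assumptions of this example, not the standardized MAPSec protection modes or the negotiation procedure itself.

    # Illustrative link protection between two service nodes (assumed sketch;
    # requires the third-party "cryptography" package).
    import os, time
    from dataclasses import dataclass
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    @dataclass
    class SecurityAssociation:
        key: bytes
        expires_at: float        # key lifetime, after which the SA must be renegotiated

    def protect(sa: SecurityAssociation, peer_id: bytes, plaintext: bytes) -> bytes:
        assert time.time() < sa.expires_at, "SA expired; renegotiate"
        nonce = os.urandom(12)
        # Binding peer_id as associated data ties the message to its source node.
        return nonce + AESGCM(sa.key).encrypt(nonce, plaintext, peer_id)

    def unprotect(sa: SecurityAssociation, peer_id: bytes, wire: bytes) -> bytes:
        nonce, ciphertext = wire[:12], wire[12:]
        # Raises InvalidTag if the message was forged or tampered with en route.
        return AESGCM(sa.key).decrypt(nonce, ciphertext, peer_id)

    sa = SecurityAssociation(key=AESGCM.generate_key(bit_length=256),
                             expires_at=time.time() + 3600)
    wire = protect(sa, b"HLR-1", b"MAP: provide-roaming-number +15551234567")
    print(unprotect(sa, b"HLR-1", wire))

Note that, exactly as discussed above, this protects the message only on the link: a compromised intermediate service node still sees and can alter the plaintext it relays, which is why end-to-end protection is argued for next.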
To protect \nall types of signaling message protocols and ensure that \nmessages are secured not only on the link between serv-\nice nodes but also on the intermediate service nodes (that \nis, secured end to end), and prevent service logic corrup-\ntion from propagating to other service nodes, the End-to-\nEnd Security (EndSec) protocol was proposed [18] . \n Because signaling message security essentially depends \non security of data item values contained in these messages, \nEndSec focuses on securing data items. EndSec requires \nevery data item to be signed by its source service nodes \nusing public key encryption. By requiring signatures, if \ndata items are corrupt by compromised intermediate serv-\nice nodes en route, the compromised status of the service \nnode is revealed to the service nodes receiving the corrupt \ndata items. Revealing the compromised status of service \nnodes prevents corruption from propagating to other serv-\nice nodes, because service nodes are unlikely to accept cor-\nrupt data items from compromised service nodes. \n EndSec also prevents misrouting and node imperson-\nation attacks by requiring every service node in a signal \nflow to embed the PATH taken by the signal flow in every \nEndSec message. Finally, EndSec introduces several \ncontrol messages to handle and correct the detected cor-\nruption. Note that EndSec is not a standardized protocol. \n Security Implications of Internet \nConnectivity \n Internet connectivity introduces the biggest threat to the \nsecurity of the cellular network. This is because cheap PC-\nbased equipment with Internet connectivity can now access \ngateways connecting to the core network. Therefore, any \nattack possible in the Internet can now filter into the core \ncellular network via these gateways. For example, Internet \nconnectivity was the reason for the slammer worm to filter \ninto the E-911 service in Bellevue, Washington, making it \ncompletely unresponsive [19] . Other attacks that can filter \ninto the core network from the Internet include spamming \nand phishing of short messages [20] . \n We expect low-bandwidth DoS attacks to be the most \ndamaging attacks brought on by Internet connectivity [21 –\n 23] . These attacks demonstrate that by sending just 240 \nshort messages per second, it is possible to saturate the cel-\nlular network and cause the MSC in charge of the region to \nbe flooded and lose legitimate short messages per second. \nLikewise, it shows that it is possible to cause a specific user \nto lose short messages by flooding that user with a large \nnumber of messages, causing a buffer overflow. Such DoS \nattacks are possible because the short message delivery \ntime in the cellular network is much greater than the short \nmessage submission time using Internet sites [24] . \n Also, short messages and voices services use the same \nradio channel, so contention for these limited resources \nmay still occur and cause a loss of voice service. To \navoid loss of voice services due to contention, separation \nof voice and data services on the radio network has been \nsuggested [16] . However, such separation requires major \nstandardization and overhaul of the network and is there-\nfore unlikely be implemented very soon. Other minor \ntechniques such as queue management and resource pro-\nvisioning have been suggested [25] . 
\n Though such solutions could reduce the impact of \nshort message flooding, they cannot eliminate other types \nof low-bandwidth, DoS attacks such as attacks on connec-\ntion setup and teardown of data services. The root cause \nfor such DoS attacks from the Internet to the core network \nwas identified as the difference in the design principles of \nthese networks. Though the Internet makes no assump-\ntions on the content of traffic and simply passes it on to \nthe next node, the cellular network identifies the traffic \ncontent and provides a highly tailored service involving \nmultiple service nodes for each type of traffic [26] . \n Until this gap is bridged, such attacks will continue, \nbut bridging the gap itself is a major process because \neither the design of the cellular network must be changed \nto match the Internet design, or vice versa, which is \nunlikely to happen soon. Hence a temporary fix would be \nto secure the gateways connecting the Internet and core \nnetwork. As a last note, Internet connectivity filters attacks \nnot only into the core network, but also into the PSTN net-\nwork. Hence PSTN gateways must also be guarded. \n Security Implications of PSTN Connectivity \n PSTN connectivity to the cellular network allows calls \nbetween the fixed and cellular networks. Though the PSTN \n" }, { "page_number": 222, "text": "Chapter | 12 Cellular Network Security\n189\nwas a closed network, the security-free SS7 protocol stack \non which it is based was of no consequence. However, by \nconnecting the PSTN to the core network that is in turn \nconnected to the Internet, the largest open public network, \nthe SS7-based PSTN network has “ no security left ” [27] . \n Because SS7 protocols are plaintext and have no \nauthentication features, it is possible to introduce fake \nmessages, eavesdrop, cause DoS by traffic overload, and \nincorrectly route signaling messages. Such introduction \nof SS7 messages into the PSTN network is very easily \ndone using cheap PC-based equipment. Attacks in which \ncalls for 800 and 900 numbers were rerouted to 911 serv-\ners so that legitimate calls were lost are documented [28] . \nSuch attacks are more so possible due to the IP interface \nof the PSTN service nodes and Web-based control of \nthese networks. \n Because PSTN networks are to be outdated soon, there \nis no interest in updating these networks. So, they will \nremain “ security free ” until their usage is stopped [29] . \n So far, we have addressed the security and attacks on \neach aspect of the cellular network. But an attack that is \ncommon to all the aspects of the cellular network is the \n cascading attack. Next we detail the cascading attack \nand present vulnerability assessment techniques to iden-\ntify the same. \n 4. CELLULAR NETWORK ATTACK \nTAXONOMY \n In this part of the chapter, we present the cellular net-\nwork specific attack taxonomy. This attack taxonomy is \ncalled the three-dimensional taxonomy because attacks \nare classified based on the following three dimensions: \n(1) adversary’s physical access to the network when the \nattack is launched; (2) type of attack launched; and (3) \nvulnerability exploited to launch the attack. \n The three-dimensional attack taxonomy was moti-\nvated by the cellular network specific abstract model, \nwhich is an atomic model of cellular network service \nnodes. 
It enabled better study of interactions within \nthe cellular network and aided in derivation of sev-\neral insightful characteristics of attacks on the cellular \nnetwork. \n The abstract model not only led to the development \nof the three-dimensional attack taxonomy that has been \ninstrumental in uncovering (1) cascading attacks , a \ntype of attack in which the adversary targets a specific \nnetwork location but attacks another location, which in \nturn propagates the attack to the target location, and (2) \n cross-infrastructure cyber attack , a new breed of attack \nin which the cellular network may be attacked from the \nInternet [30] . In this part of the chapter we further detail \nthe three-dimensional attack taxonomy and cellular net-\nwork abstract model. \n Abstract Model \n The abstract model dissects functionality of the cellular \nnetwork to the basic atomic level, allowing it to system-\natically isolate and identify vulnerabilities. Such iden-\ntification of vulnerabilities allows attack classification \nbased on vulnerabilities, and isolation of network func-\ntionality aids in extraction of interactions between net-\nwork components, thereby revealing new vulnerabilities \nand attack characteristics. \n Because service nodes in the cellular network com-\nprise sophisticated service logic that performs numerous \nnetwork functions, the abstract model logically divides \nthe service logic into basic atomic units, called agents \n(represented by the elliptical shape in Figure 12.4 ). Each \nagent performs a single function. Service nodes also \nmanage data, so the abstract model also logically divides \ndata sources into data units specific to the agents they \nsupport. The abstract model also divides the data sources \ninto permanent (represented by the rectangular shape in \n Figure 12.4 ) or cached (represented by the triangular \nshape in Figure 12.4 ) from other service nodes. \n The abstract model developed for the CS domain is \nillustrated in Figure 12.4. It shows agents, permanent, \nand cached data sources for the CS service nodes. For \nexample, the subscriber locator agent in the HLR is the \nagent that tracks the subscriber location information. \nIt receives and responds to location requests during an \nincoming call and stores a subscriber’s location every \ntime they move. This location information is stored in \nthe location data source . Readers interested in further \ndetails may refer to [31,32] . \n Abstract Model Findings \n The abstract model had lead to many interesting find-\nings. We outline them as follows: \n Interactions \n To study the network interactions, service nodes in signal \nflows (e.g. call delivery service) were replaced by their \ncorresponding abstract model agents and data sources. \nSuch an abstract-model – based signal flow based on the \ncall delivery service is shown in Figure 12.5 . 
FIGURE 12.4 Abstract model of circuit-switched service nodes (agents, permanent data sources, and cached data sources for the MSC, HLR, and VLR).

FIGURE 12.5 Abstract model-based signal flow for the call delivery service.

In studying the abstract model signal flow, it was observed that interactions happen (1) between agents, typically using procedure calls containing data items; (2) between agents and data sources, using queries containing data items; and (3) between agents belonging to different service nodes, using signaling messages containing data items.

The common behavior in all these interactions is that they typically involve data items whose values are set or modified in agents or data sources, or data items that are passed between agents, between data sources, or between agents and data sources. Hence, the value of a data item not only can be corrupted in an agent or data source, it can also easily be passed on to other agents, resulting in propagation of the corruption. This propagation of corruption is called the cascading effect, and attacks that exhibit this effect are called cascading attacks. In the following, we present a sample cascading attack.

Sample Cascading Attack

In this sample cascading attack, cascading due to corrupt data items and the ultimate service disruption are illustrated in Figure 12.6. Consider the call delivery service explained previously. Here the adversary may corrupt the roaming number data item (used to route the call) in the VLR. This corrupt roaming number is passed on in message PRN_ACK to the HLR, which in turn passes this information to the GMSC. The GMSC uses the incorrect roaming number to route the call to the incorrect MSC B instead of the correct MSC A. This results in the caller losing the call or receiving a wrong-number call. Thus corruption cascades and results in service disruption.

FIGURE 12.6 Sample cascading attacks in the call delivery service.
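The cascading effect just described can be made concrete with a small sketch. The following Python fragment is illustrative only and is not part of any toolkit described in this chapter; the service node names follow the call delivery example, while the class and function names are hypothetical. It shows how a single corrupt but system-acceptable data item (the roaming number) set at the VLR travels along the normal signal flow and ends up misrouting the call at the GMSC.

    # Illustrative sketch of the cascading effect (hypothetical names).
    # A corrupt but system-acceptable roaming number set at the VLR
    # propagates through the normal call delivery flow and misroutes the call.

    class ServiceNode:
        def __init__(self, name):
            self.name = name
            self.data = {}                      # data sources (abstract model)

        def set_item(self, key, value):
            self.data[key] = value

    def call_delivery(gmsc, hlr, vlr):
        # The VLR returns the roaming number to the HLR (PRN_ACK) ...
        hlr.set_item("roaming_number", vlr.data["roaming_number"])
        # ... the HLR relays it to the GMSC (SRI_ACK) ...
        gmsc.set_item("roaming_number", hlr.data["roaming_number"])
        # ... and the GMSC routes the call (IAM) toward that MSC.
        return gmsc.data["roaming_number"]

    vlr, hlr, gmsc = ServiceNode("VLR"), ServiceNode("HLR"), ServiceNode("GMSC")
    vlr.set_item("roaming_number", "MSC A")     # correct value
    vlr.set_item("roaming_number", "MSC B")     # adversary's system-acceptable corruption

    assert call_delivery(gmsc, hlr, vlr) == "MSC B"   # call delivered to the wrong MSC

The point of the sketch is that no step after the initial corruption is abnormal; the corrupt value rides entirely on legitimate network operations, which is what makes cascading attacks hard to detect.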
The type of corruption that can cascade is system-acceptable incorrect value corruption, a type of corruption in which corrupt values take on system-acceptable, albeit incorrect, values. Such corruption can, for example, make the roaming number incorrect yet still acceptable to the system.

Note that it is easy to cause such system-acceptable incorrect value corruption because of the availability of Web sites that refer to proprietary working manuals of service nodes such as the VLR [33,34]; with such manuals, an adversary can insert commands directly into these nodes. Such command insertion attacks have become commonplace; the most infamous is the telephone tapping of the Greek government and top-ranking civil servants [35].

Cross-Infrastructure Cyber Cascading Attacks

When cascading attacks cross into the cellular network from the Internet through cross-network services, they are called cross-infrastructure cyber cascading attacks. This attack is illustrated on the CFS in Figure 12.7.

As the CFS forwards calls based on the emails received, corruption is shown to propagate from the mail server to a call-forward (CF) server and finally to the MSC. In the attack, using any standard mail server vulnerability, the adversary may compromise the mail server and corrupt the email data source by deleting emails from people the victim is expecting to call. The CF server then receives and caches the incorrect email from the mail server.
However, by corrupting a signaling message, \nonly the subscribers (such as the caller and called party in \ncase of call delivery service) associated with the message \nare affected. Likewise, corrupting the agent in the service \nnode can affect all subscribers using the agent in the serv-\nice node. Hence, in the three-dimensional taxonomy, a \nvulnerability exploited is considered as an attack dimen-\nsion, since the effect on each vulnerability is different. \n Likewise, the adversary’s physical access to the net-\nwork also affects how the vulnerability is exploited and \nhow the attack cascades. For example, consider the case \nwhen a subscriber has access to the air interface. The \nadversary can only affect messages on the air interface. \nSimilarly, if the adversary has access to a service node, the \ndata sources and service logic may be corrupted. Hence, \nin the three-dimensional taxonomy, the physical access is \nconsidered a category as it affects how the vulnerability is \nexploited and its ultimate effect on the subscriber. \n Finally, the way the adversary chooses to launch an \nattack ultimately affects the service in a different way. \nConsider a passive attack such as interception . Here the \nservice is not affected, but it can have a later effect on \nthe subscriber, such as identity theft or loss of privacy. \nAn active attack such as interruption can cause complete \nservice disruption. Hence, in the three-dimensional tax-\nonomy, the attack means are considered a category due \nthe ultimate effect on service. \n In the next part of the chapter, we detail the cellular \nnetwork specific three-dimensional taxonomy and the way \nthe previously mentioned dimensions are incorporated. \n Three-Dimensional Attack Taxonomy \n The three dimensions in the taxonomy include Dimension \nI: Physical Access to the Network, Dimension II: Attack \nCategories, and Dimension III: Vulnerability Exploited. \nIn the following, we outline each dimension. \n Dimension I: Physical Access to the Network \n In this dimension, attacks are classified based on the adver-\nsary’s level of physical access to the cellular network. \nDimension I may be further classified into single infra-\nstructure attacks (Level I – III) and cross-infrastructure \ncyber attacks (Level IV – V): \n Level I: Access to air interface with physical device. Here \nthe adversary launches attacks via access to the radio \naccess network using standard inexpensive “ off-the-\nshelf ” equipment [36] . Attacks include false base sta-\ntion attacks, eavesdropping, and man-in-the-middle \nattacks and correspond to attacks previously mentioned. \nHLR\nInternet\nMail Server\nAttack\nPropagate\nPropagate\nGMSC\nVLR\nMSC\nCF Server\nCore Network\n FIGURE 12.7 Cross-infrastructure cyber cascading attacks on call-forward service. \n" }, { "page_number": 226, "text": "Chapter | 12 Cellular Network Security\n193\n Level II: Access to links connecting core service nodes. \nHere the adversary has access to links connecting to \ncore service nodes. Attacks include disrupting normal \ntransmission of signaling messages and correspond to \nmessage corruption attacks previously mentioned. \n Level III: Access core service nodes. In this case, the \nadversary could be an insider who managed to \ngain physical access to core service nodes. 
Attacks \ninclude editing the service logic or modifying data \nsources, such as subscriber data (profile, security and \nservices) stored in the service node and correspond-\ning to corrupt service logic, data source, and node \nimpersonation attacks previously mentioned. \n Level IV: Access to links connecting the Internet and \nthe core network service nodes. This is a cross-\ninfrastructure cyber attack. Here the adversary has \naccess to links connecting the core network and \nInternet service nodes. Attacks include editing and \ndeleting signaling messages between the two networks. \nThis level of attack is easier to achieve than Level II. \n Level V: Access to Internet servers or cross-network \nservers: This is a cross-infrastructure cyber attack. \nHere the adversary can cause damage by editing the \nservice logic or modifying subscriber data (profile, \nsecurity and services) stored in the cross-network \nservers. Such an attack was previously outlined ear-\nlier in the chapter. This level of attack is easier to \nachieve than Level III. \n Dimension II: Attack Type \n In this dimension, attacks are classified based on the type \nof attack. The attack categories are based on Stallings [37] : \n Interception. The adversary intercepts signaling mes-\nsages on a cable (Level II) but does not modify or \ndelete them. This is a passive attack. This affects the \nprivacy of the subscriber and the network operator. \nThe adversary may use the data obtained from inter-\nception to analyze traffic and eliminate the competi-\ntion provided by the network operator. \n Fabrication or replay. In this case, the adversary inserts \nspurious messages, data, or service logic into the \nsystem, depending on the level of physical access. \nFor example, in a Level II, the adversary inserts fake \nsignaling messages, and in a Level III, the adversary \ninserts fake service logic or fake subscriber data into \nthis system. \n Modification of resources. Here the adversary modifies \ndata, messages, or service logic. For example, in a \nLevel II, the adversary modifies signaling messages \non the link, and in a Level III, the adversary modifies \nservice logic or data. \n Denial of service. In this case, the adversary takes \nactions to overload the network results in legitimate \nsubscribers not receiving service. \n Interruption. Here the adversary causes an interruption \nby destroying data, messages, or service logic. \n Dimension III: Vulnerability Exploited \n In this dimension, attacks are classified based on the vul-\nnerability exploited to cause the attack. Vulnerabilities \nexploited are explained as follows: \n Data . The adversary attacks the data stored in the sys-\ntem. Damage is inflicted by modifying, inserting, \nand deleting the data stored in the system. \n Messages. The adversary adds, modifies, deletes, or \nreplays signaling messages. \n Service logic. Here the adversary inflicts damage by \nattacking the service logic running in the various cel-\nlular core network service nodes. \n Attack classification. In classifying attacks, we can \ngroup them according to Case 1: Dimension I ver-\nsus Dimension II , and Case 2: Dimension II versus \nDimension III . Note that the Dimension I versus \nDimension III case can be transitively inferred from \nCase 1 and Case 2. \n Table 12.1 shows a sample tabulation of Level 1 \nattacks grouped in Case 1. For example, with Level 1 \naccess an adversary causes interception attacks by observ-\ning traffic and eavesdropping. 
Likewise, fabrication \nattacks due to Level I include sending spurious registra-\ntion messages. Modification of resources due to Level 1 \nincludes modifying conversations in the radio access net-\nwork. DoS due to Level 1 occurs when a large number of \nfake registration messages are sent to keep the network \nbusy so as to not provide service to legitimate subscribers. \nFinally, interruption attacks due to Level 1 occur when \nadversaries jam the radio access channel so that legitimate \nsubscribers cannot access the network. For further details \non attack categories, refer to [38] . \n 5. CELLULAR NETWORK VULNERABILITY \nANALYSIS \n Regardless of how attacks are launched, if attack actions \ncause a system-acceptable incorrect value corruption, \nthe corruption propagates, leading to many unexpected \ncascading effects. To detect remote cascading effects and \nidentify the origin of cascading attacks, cellular network \nvulnerability assessment tools were developed. \n These tools, the Cellular Network Vulnerability \nAssessment Toolkit (CAT) and the advanced Cellular \n" }, { "page_number": 227, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n194\nNetwork Vulnerability Assessment Toolkit (aCAT) \n [39,40] allow users to input the data item that might be \ncorrupted and output an attack graph. The CAT attack \ngraph not only shows the network location and the net-\nwork service where the corruption might originate, it \nalso shows the various messages and service nodes \nthrough which the corruption propagates. \n An attack graph is a diagrammatic representation \nof an attack on a real system. It shows various ways \nan adversary can break into a system or cause corrup-\ntion and the various ways in which the corruption may \npropagate within the system. Attack graphs are typically \nproduced manually by red teams and used by systems \nadministrators for protection. CAT and aCAT attack \ngraphs allow users to trace the effect of an attack through \nthe network and determine its side effects, thereby mak-\ning them the ultimate service disruption. \n The cellular network is at the nascent stage of devel-\nopment with respect to security, so it is necessary to eval-\nuate security protocols before deploying them. Hence, \naCAT can be extended with security protocol evalua-\ntion capabilities into a tool [41] called Cellular Network \nVulnerability Assessment Toolkit for evaluation (eCAT). \neCAT allows users to quantify the benefits of security \nsolutions by removing attack effects from attack graphs \nbased on the defenses provided. One major advantage of \nthis approach is that solutions may be evaluated before \nexpensive development and deployment. \n It must be noted that developing such tools — CAT, \naCAT, and eCAT — presented many challenges: (1) cel-\nlular networks are extremely complex systems; they \ncomprise several types of service nodes and control pro-\ntocols, contain hundreds of data elements, and support \nhundreds of services; hence developing such toolkits \nrequires in-depth working knowledge of these systems; \nand (2) every deployment of the cellular network com-\nprises a different physical configuration; toolkits must be \nimmune to the diversity in physical configuration; and \nfinally (3) attacks cascade in the network due to regu-\nlar network activity as a result of dependencies; toolkits \nmust be able to track the way that corruption cascades \ndue to network dependencies. 
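The three-dimensional classification that these toolkits build on can also be written down directly as data. The short sketch below is illustrative only; the enumeration and class names are hypothetical, and the recorded example is one of the Level I attacks from Table 12.1.

    # Hypothetical encoding of the three-dimensional attack taxonomy.
    from dataclasses import dataclass
    from enum import Enum

    class PhysicalAccess(Enum):          # Dimension I
        AIR_INTERFACE = 1                # Level I
        CORE_LINKS = 2                   # Level II
        CORE_NODES = 3                   # Level III
        INTERNET_CORE_LINKS = 4          # Level IV (cross-infrastructure)
        CROSS_NETWORK_SERVERS = 5        # Level V (cross-infrastructure)

    class AttackType(Enum):              # Dimension II
        INTERCEPTION = "interception"
        FABRICATION = "fabrication/replay"
        MODIFICATION = "modification of resources"
        DENIAL_OF_SERVICE = "denial of service"
        INTERRUPTION = "interruption"

    class Vulnerability(Enum):           # Dimension III
        DATA = "data"
        MESSAGES = "messages"
        SERVICE_LOGIC = "service logic"

    @dataclass
    class ClassifiedAttack:
        description: str
        access: PhysicalAccess
        attack_type: AttackType
        vulnerability: Vulnerability

    # Level I DoS example from Table 12.1: flooding the network with fake
    # registration messages so legitimate subscribers are denied service.
    registration_flood = ClassifiedAttack(
        "Send a large number of fake registration messages",
        PhysicalAccess.AIR_INTERFACE,
        AttackType.DENIAL_OF_SERVICE,
        Vulnerability.MESSAGES,
    )

Grouping such records by pairs of dimensions reproduces the Case 1 and Case 2 tabulations described above.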
The challenge of in-depth cellular network knowledge was overcome by incorporating into the toolkits the cellular network specifications defined by the Third Generation Partnership Project (3GPP), which are available at no charge [42]. The 3GPP is a telecommunications standards body formed to produce, maintain, and develop globally applicable "technical specifications and technical reports" for a third-generation mobile system based on evolved GSM core networks and the radio access technologies that they support [43].

Using the specifications also handles the diversity of physical configurations, because the specifications detail the functional behavior and not the implementation structure of the cellular network. Specifications are written using simple flow-like diagrams in the Specification and Description Language (SDL) [44] and are referred to as SDL specifications. Equipment and service providers use these SDL specifications as the basis for their service implementations.

Corruption propagation is tracked by equipping the toolkits with novel dependency and propagation models that trace the propagation of corruption. Finally, Boolean properties are superimposed on the propagation model to capture the impact of security solutions.

TABLE 12.1 Sample Case 1 Classification: Level I attacks by attack category

Interception:
● Observe the time, rate, length, source, and destination of the victim's locations.
● With modified cellular devices, eavesdrop on the victim.

Fabrication/Insertion:
● Using modified cellular devices, the adversary can send spurious registration messages to the target network.
● Likewise, using modified base stations, the adversary can signal victims to camp at their locations.

Modification of Resources:
● With a modified base station and cellular devices, the adversary modifies conversations between subscribers and their base stations.

Denial of Service:
● The adversary can cause DoS by sending a large number of fake registration messages.

Interruption:
● Jam victims' traffic channels so that victims cannot access the channels.
● Broadcast at a higher intensity than allowed, thereby hogging the bandwidth.

CAT is the first version of the toolkit developed for cellular network vulnerability assessment. CAT works by taking user input of seeds (data items directly corrupted by the adversary, whose cascading effect leads to a goal) and goals (data parameters that are derived incorrectly due to the direct corruption of seeds by the adversary) and uses the SDL specifications to identify cascading attacks. However, SDL is limited in its expression of relationships and inexplicit in its assumptions and hence cannot capture all the dependencies; therefore CAT misses several cascading attacks.

To detect a complete set of cascading effects, CAT was enhanced with new features to form aCAT. The new features added to aCAT include (1) a network dependency model that explicitly specifies the exact dependencies in the network; (2) infection propagation rules that identify the reasons that cause corruption to cascade; and (3) a small amount of expert knowledge. The network dependency model and infection propagation rules may be applied to SDL specifications and help alleviate their limited expression capability.
The expert knowledge \nhelps capture the inexplicit assumptions made by SDL. \n In applying these features, aCAT captures all those \ndependencies that were previously unknown to CAT, and \nthereby aCAT was able to detect a complete set of cas-\ncading effects. Through extensive testing of aCAT, sev-\neral interesting attacks were found and the areas where \nSDL is lacking was identified. \n To enable evaluation of new security protocols, aCAT \nwas extended to eCAT. eCAT uses Boolean probabilities \nin attack graphs to detect whether a given security pro-\ntocol can eliminate a certain cascading effect. Given a \nsecurity protocol, eCAT can measure effective cover-\nage, identify the types of required security mechanisms \nto protect the network, and identify the most vulnerable \nnetwork areas. eCAT was also used to evaluate MAPSec, \nthe new standardized cellular network security protocol. \nResults from MAPSec’s evaluation gave insights into \nMAPSec’s performance and the network’s vulnerabili-\nties. In the following, we detail each toolkit. \n Cellular Network Vulnerability Assessment \nToolkit (CAT) \n In this part of the chapter, we present an overview of \nCAT and its many features. CAT is implemented using \nthe Java programming language. It is made up of a \nnumber of subsystems (as shown in Figure 12.8 ). The \n knowledge base contains the cellular network knowledge \nobtained from SDL specifications. SDL specifications \ncontain simple flowchart-like diagrams. The flowcharts \nare converted into data in the knowledge base . The inte-\ngrated data structure is similar to that of the knowledge \nbase; it holds intermediate attack graph results. \n The GUI subsystem takes user input in the form \nof seeds and goals. The analysis engine contains algo-\nrithms (forward and midpoint) incorporated with cascad-\ning effect detection rules. It explores the possibility of \nthe user input seed leading to the cascading effect of the \nuser input goal , using the knowledge base, and outputs \nthe cascading attack in the form of attack graphs. \n Using these attack graphs, realistic attack scenarios \nmay be derived. Attack scenarios explain the effect of \nthe attack on the subscriber in a realistic setting. Each \nattack graph may have multiple interpretations and give \nrise to multiple scenarios. Each scenario gives a different \nperspective on how the attack may affect the subscriber: \n ● Cascading effect detection rules. These rules were \ndefined to extract cascading effects from the SDL \nspecifications contained in the knowledge base. They \nare incorporated into the algorithms in the analysis \nengine. These rules define what constitutes propa-\ngation of corruption from a signaling message to a \nblock, and vice versa, and propagation of corruption \nwithin a service node. For example, when a service \nnode receives a signaling message with a corrupt \ndata item and stores the data item, it constitutes \npropagation of corruption from a signaling message \nto a block. Note that these rules are high level. \n ● Attack graph. The CAT attack graph may be defined as \na state transition showing the paths through a system, \nstarting with the conditions of the attack, followed by \nattack action, and ending with its cascading effects. \n In Figure 12.9 , we present the CAT attack graph out-\nput, which was built using user input of ISDN Bearer \nCapability as a seed and Bearer Service as goal. 
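Before walking through Figure 12.9, the sketch below shows one minimal way such an attack graph could be represented. It is a hedged illustration, not CAT's implementation (which, as noted above, is written in Java); the class names are hypothetical, and the AND/OR convention follows the tree-number scheme described next.

    # Hypothetical representation of a CAT-style attack graph (not CAT itself).
    # Nodes carry a layer, the tree numbers they belong to, and a kind
    # (condition, action, or goal). Nodes of one layer that share a tree
    # number act as AND inputs to the next layer; a node carrying several
    # tree numbers is an OR node reachable through any one of those trees.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        label: str
        layer: int
        trees: frozenset
        kind: str                      # "condition" | "action" | "goal"

    @dataclass
    class AttackGraph:
        nodes: list = field(default_factory=list)
        edges: list = field(default_factory=list)   # (src_label, dst_label, by_adversary)

        def and_group(self, layer, tree):
            """Nodes of one layer in one tree; all must hold for that tree to advance."""
            return [n for n in self.nodes if n.layer == layer and tree in n.trees]

    g = AttackGraph()
    g.nodes += [
        Node("PA: Level II", 0, frozenset({2}), "condition"),
        Node("Tgt: GMSC", 0, frozenset({2}), "condition"),
        Node("Vul: Message", 0, frozenset({2}), "condition"),
        Node("Corrupt ISDN BC in IAM", 1, frozenset({2}), "action"),
        Node("Incorrect Bearer Service provided", 6, frozenset({1, 2}), "goal"),
    ]
    g.edges.append(("PA: Level II", "Corrupt ISDN BC in IAM", True))  # adversary transition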
FIGURE 12.8 Architecture of CAT (user input via the GUI, knowledge base, integrated data structure, analysis engine, and attack graph output with derived attack scenarios).

FIGURE 12.9 CAT attack graph output (Nodes A through O arranged in Layers 0 through 6 and grouped into Trees 1 and 2).

The attack graph constitutes nodes and edges. Nodes represent states in the network with respect to the attack, and edges represent network state transitions. For description purposes, each node has been given a node label followed by a letter, and the attack graph has been divided into layers.

Nodes may be broadly classified as conditions, actions, and goals, with the conditions of the attack occurring at the lowest layer and the final cascading effect at the highest layer. In the following, we detail each node type:

● Condition nodes. Nodes at the lowest layer typically correspond to the conditions that must exist for the attack to occur. These condition nodes directly follow from the taxonomy: they are the adversary's physical access, the target service node, and the vulnerability exploited. For example, the adversary has access to links connecting to the GMSC service node, that is, Level II physical access; this is represented as Node A in the attack graph. Likewise, the adversary corrupts the data item ISDN Bearer Capability in the IAM message arriving at the GMSC; hence the target of the attack is the GMSC, represented by Node B. Similarly, the adversary exploits vulnerabilities in a message (IAM), and this is represented by Node D in the attack graph.

The CAT attack graphs show all the possible conditions for an attack to happen; that is, we see not only the corruption due to the seed ISDN Bearer Capability in signaling message IAM arriving at the GMSC but also other possibilities, such as the corruption of the goal Bearer Service in signaling message SIFIC, represented by Node M.

● Action nodes. Nodes at higher layers are actions that typically correspond to effects of the attack propagating through the network. Effects typically include propagation of corruption between service nodes, such as from the MSC to the VLR (Node N), propagation of corruption within service nodes, such as ISDN Bearer Capability corrupting Bearer Service (Node L), and so on. Actions may further be classified as adversary actions, normal network operations, or normal subscriber activities. Adversary actions include insertion, corruption, or
\n" }, { "page_number": 230, "text": "Chapter | 12 Cellular Network Security\n197\ndeletion of data, signaling messages, or service logic \nrepresented by Node E. Normal network operations \ninclude sending (Node N) and receiving signaling \nmessages (Node E). Subscriber activity may include \nupdating personal data or initiating service. \n ● Goal nodes . Goal nodes typically occur at the highest \nlayer of the attack graph. They indicate corruption of \nthe goal items due to the direct corruption of seeds \nby the adversary (Node A). \n ● Edges. In our graph, edges represent network \ntransitions due to both normal network actions \nand adversary actions. Edges help show the global \nnetwork view of adversary action. This is the \nuniqueness of our attack graph. Transitions due to \nadversary action are indicated by an edge marked by \nthe letter A (edges connecting Layer 0 and Layer 1). \nBy inclusion of normal network transitions in addition \nto the transitions caused by the adversary, our attack \ngraph shows not only the adversary’s activity but also \nthe global network view of the adversary’s action . \nThis is a unique feature of the attack graph. \n ● Trees. In the graph, trees are distinguished by the tree \nnumbers assigned to its nodes. For example, all the \nnodes marked with number 2 belong to Tree 2 of the \ngraph. Some nodes in the graph belong to multiple \ntrees. Tree numbers are used to distinguish between \nAND and OR nodes in the graph. Nodes at a par-\nticular layer with the same tree number(s) are AND \nnodes. For example, at Layer 4, Nodes H, I, J, and \nK are AND nodes; they all must occur for Node M \nat Layer 5 to occur. Multiple tree numbers on a node \nare called OR nodes. The OR node may be arrived at \nusing alternate ways. For example, Node O at Layer \n6 is an OR node, the network state indicated by Node \nO may be arrived at from Node M or Node N. \n Each attack tree shows the attack effects due to cor-\nruption of a seed at a specific network location (such as \nsignaling message or process in a block). For example, \nTree 1 shows the attack due to the corruption of the seed \nBearer Service at the VLR. Tree 2 shows the propaga-\ntion of the seed ISDN Bearer Capability in the signaling \nmessage IAM . These trees show that the vulnerability of \nthe cellular network is not limited to one place but can \nbe realized due to the corruption of data in many net-\nwork locations. \n In constructing the attack graph, CAT assumes that an \nadversary has all the necessary conditions for launching \nthe attack. The CAT attack graph format is well suited \nto cellular networks because data propagates through the \nnetwork in various forms during the normal operation of \na network; thus an attack that corrupts a data item mani-\nfests itself as the corruption of a different data item in a \ndifferent part of the network after some network opera-\ntions take place. \n ● Attack scenario derivation. The CAT attack graph \nis in cellular network semantics, and realistic attack \nscenarios may be derived to understand the implica-\ntions of the attack graph. Here we detail the principles \ninvolved in the derivation of realistic attack scenarios: \n 1. End-user effect. Goal node(s) are used to infer \nthe end effect of the attack on the subscriber. \nAccording to the goal node in Figure 12.9 , the \nSIFIC message to the VLR has incorrect goal \nitem Bearer Service. 
The SIFIC message is used to inform the VLR of the calling party's preferences, such as voice channel requirements, and to request that the VLR set up the call based on the calling party's and receiving party's preferences.

If the calling party's preferences (such as Bearer Service) are incorrect, the call set up by the VLR is incompatible with the calling party, and the communication is ineffective (garbled speech). From the goal node, it can be inferred that Alice, the receiver of the call, is unable to communicate effectively with Bob, the caller, because Alice can only hear garbled speech from Bob's side.

2. Origin of attack. Nodes at Layer 0 indicate the origin of the attack, and hence the location of the attack may be inferred. The speech attack may originate at the signaling message IAM or at the VLR service node.

3. Attack propagation and side effects. Nodes at all other layers show the propagation of corruption across the various service nodes in the network. From the other layers in Figure 12.9, it can be inferred that the seed is the ISDN Bearer Capability and that the attack spreads from the MSC to the VLR.

● Attack Scenario. Using these guidelines, an attack scenario may be derived as follows. Trudy, the adversary, corrupts the ISDN Bearer Capability of Bob, the victim, in the IAM message arriving at the GMSC. The GMSC propagates this corruption to the MSC, which computes, and hence corrupts, the Bearer Service. The corrupt Bearer Service is passed on to the VLR, which sets up the call between Bob, the caller, and Alice, the receiver. Bob and Alice cannot communicate effectively because Alice is unable to understand Bob.

Though CAT has detected several cascading attacks, its output to a great extent depends on SDL's ability to capture data dependencies. SDL is limited in its expression capability in the sense that it does not always accurately capture the relationship between data items and, in many cases, does not even specify the relationship. Without these details, CAT may miss some cascading effects because the data relationships are lost. CAT's output also depends, to a minor extent, on user input: to accurately capture all the cascading effects of a seed, the user's input must comprise all the seeds that can occur in the cascading effect; otherwise the exact cascading effect is not captured. To alleviate CAT's inadequacies, aCAT was developed.

Advanced Cellular Network Vulnerability Assessment Toolkit (aCAT)

In this part of the chapter, we present aCAT, an extension of CAT with enhanced features. These enhanced features include (1) incorporating expert knowledge to compensate for the gaps caused by SDL's inexplicit assumptions; this expert knowledge is added to the knowledge base along with the SDL specifications; (2) defining a network dependency model that accurately captures the dependencies in the cellular network; the network dependency model is used to format the data in the knowledge base, thereby clarifying the nature of the network dependencies; and (3) defining fine-grained infection propagation rules to detect cascading attacks; these infection propagation rules are incorporated into the analysis engine, which comprises the forward, reverse, and combinatory algorithms.
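Feature (3), the infection propagation rules, is described in detail in the next subsection; as a preview, the fragment below sketches, with hypothetical function names, the AND/OR behavior those rules formalize: the output of an AND derivation becomes corrupt only if all of its inputs are corrupt, whereas the output of an OR derivation becomes corrupt if any single input is corrupt.

    # Hypothetical sketch of infection propagation over derivative dependencies.
    # `corrupt` is the set of data items currently known to be corrupt.

    def propagate_and(corrupt, inputs, output):
        # AND dependency: output corrupt only if every input is corrupt.
        if inputs and all(i in corrupt for i in inputs):
            corrupt.add(output)

    def propagate_or(corrupt, inputs, output):
        # OR dependency: output corrupt if any single input is corrupt.
        if any(i in corrupt for i in inputs):
            corrupt.add(output)

    corrupt = {"ISDN Bearer Capability"}     # seed corrupted by the adversary
    # Illustration only: treat Bearer Service as derived from the seed,
    # so the corruption cascades to it.
    propagate_or(corrupt, ["ISDN Bearer Capability"], "Bearer Service")
    assert "Bearer Service" in corrupt

Iterating rules of this kind over the network dependency model until nothing new becomes corrupt is one way to enumerate the cascading effects of a seed.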
aCAT is also improved in \nterms of its user input requirements. It requires as input \neither seeds or goals, whereas CAT required both seeds \nand goals. \n In principle, cascading attacks are the result of propa-\ngation of corruption between network components (such \nas signaling messages, caches, local variables, and serv-\nice logic) due to dependencies that exist between these \nnetwork components. Hence, to uncover these attacks, \nthe network dependency model and infection propaga-\ntion (IP) rules were defined. In the following, we detail \nthe network dependency model and infection propaga-\ntion model using Figure 12.10 . \n ● Network dependency model. The network depend-\nency model accurately defines fine-grained depend-\nencies between the various network components. \nGiven that service nodes comprise agents and data \nsources (from the abstract model), the dependencies \nare defined as follows. In interagent dependency, \nagents communicate with each other using agent \nIndicates that data parameter dx\nis corrupt due to intruder action\nIndicates that data parameter dx\nis corrupt due to cascading effect\nNode Nm\nNode Nk\nNode\n Ni\nNode Nj\n6. InvokeA2[dH]\nAgent A2\n8. f(dAor dC)\u0003dF\n9. f(dAand dF)\u0003dG\nAgent A1\n2. f(dAand dB)\u0003dC\n3. f(dA\nk key dB)\u0003dH\nData Source\nDj\n11. Message M3 (dA, dF, dG)\n5. Message\nLocal\nVariables\nCache\n4. Write[ dA, dB, dC, dH]\n10. Write[ dF, dG]\n7. Read\nM2 (dA, dB, dC, dH)\nM1 (dA, dB)\n[dA, dC]\n1. Message\ndX :\ndX :\n FIGURE 12.10 Network dependency model. \n" }, { "page_number": 232, "text": "Chapter | 12 Cellular Network Security\n199\ninvocations (as shown by 6 in Figure 12.10 ) contain-\ning data items. Thus, agents are related to each other \nthrough data items. Likewise, in agent to data source \ndependency, agents communicate with data sources \nusing Read and Write operations containing data \nitems. Therefore, agents and data items are related to \neach other through data items. Within agents, deriva-\ntive dependencies define relationships between data \nitems. Here data items are used as input to derive data \nitems using derivation operations such as AND, OR \noperations. Therefore, data items are related to each \nother through derivation operation. For further detail \non the network dependency model, refer to [45] . \n ● Infection propagation (IP) rules. These are fine-\ngrained rules to detect cascading effects. They are \nincorporated into the algorithms in the analysis \nengine. An example of the IP rule is that an output \ndata item in the AND dependency is corrupt only if \nboth the input data items are corrupt (as shown by 9 \nin Figure 12.10 ). Likewise, an output data item in the \nOR dependency is corrupt if a single input data item \nis corrupt (as shown by 8 in Figure 12.10 ). Similarly, \ncorruption propagates between agents when the data \nitem used to invoke the agent is corrupt, and the same \ndata item is used as an input in the derivative depend-\nency whose output may be corrupt (as shown by 6, 8 \nin Figure 12.10 ). Accordingly, corruption propagates \nfrom an agent to a data source if the data item written \nto the data source is corrupt (as shown by 4 in Figure \n12.10 ). 
Finally, corruption propagates between serv-\nice nodes if a data item used in the signaling message \nbetween the service nodes is corrupt, and the corrupt \ndata item is used to derive corrupt output items or \nthe corrupt data item is stored in the data source (as \nshown by 1, 3 or 1, 4 in Figure 12.10 ) [46] . \n With such a fine-grained dependency model and \ninfection propagation rules, aCAT was very successful in \nidentifying cascading attacks in several of the key serv-\nices offered by the cellular network, and it was found \nthat aCAT can indeed identify a better set of cascading \neffects in comparison to CAT. aCAT has also detected \nseveral interesting and unforeseen cascading attacks that \nare subtle and difficult to identify by other means. These \nnewly identified cascading attacks include the alerting \nattack, power-off/power-on attack, mixed identity attack, \ncall redirection attack, and missed calls attack. \n ● Alerting attack. In the following we detail aCAT’s \noutput, a cascading attack called the alerting attack, \nshown in Figure 12.11 . From goal nodes (Node A \nat Level 5, and Node C at Level 4) in the alerting \nattack, it can be inferred that the Page message has \nincorrect data item page type . The Page message is \nused to inform subscribers of the arrival of incom-\ning calls, and “ page type ” indicates the type of call. \n “ Page type ” must be compatible with the subscriber’s \nmobile station or else the subscriber is not alerted. \nFrom the goal node it may be inferred that Alice, a \nsubscriber of the system, is not alerted on the arrival \nof an incoming call and hence does not receive \nincoming calls. This attack is subtle to detect because \nnetwork administrators find that the network pro-\ncesses the incoming call correctly and that the sub-\nscriber is alerted correctly. They might not find that \nthis alerting pattern is incompatible with the mobile \nstation itself. \n Also, nodes at Level 0 indicate the origin of the \nattack as signaling messages SRI , PRN , the service \nnodes VLR, or the HLR. From the other levels it may \nbe inferred that the seed is the alerting pattern that the \nadversary corrupts in the SRI message and the attack \nspreads from the HLR to the VLR and from the VLR to \nthe MSC. For more details on these attacks, refer to [47] . \n Cellular Network Vulnerability Assessment \nToolkit for evaluation (eCAT) \n In this part of the chapter, we present eCAT an extension \nto aCAT. eCAT was developed to evaluate new security \nprotocols before their deployment. Though the design \ngoals and threat model of these security protocols are \ncommon knowledge, eCAT was designed to find (1) the \neffective protection coverage of these security protocols \nin terms of percentage of attacks prevented; (2) the other \nkinds of security schemes required to tackle the attacks \nthat can evade the security protocol under observation; \nand (3) the most vulnerable network areas (also called \n hotspots ) [48] . \n eCAT computes security protocol coverage using \nattack graphs generated by aCAT and Boolean probabili-\nties in a process called attack graph marking and quanti-\nfies the coverage using coverage measurement formulas \n(CMF). Attack graph marking also identifies network \nhotspots and exposes if the security protocol being eval-\nuated protects these hotspots. eCAT was also used to \nevaluate MAPSec, as it is a relatively new protocol, and \nevaluation results would aid network operators. \n ● Boolean probabilities. 
Boolean probabilities are used \nin attack graphs to distinguish between nodes elimi-\nnated (denoted by 0, or shaded node in attack graph) \nand nodes existing (denoted by 1, or unshaded node \nin attack graph) due to the security protocol under \nevaluation. By computing Boolean probabilities for \n" }, { "page_number": 233, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n200\neach node in the attack graph, eCAT can extract the \nattack effects that may be eliminated by the security \nprotocol under evaluation. \n ● Attack graph marking. To mark attack graphs, user input \nof Boolean probabilities must be provided for Layer 0 \nnodes. For example, if the security protocol under \nevaluation is MAPSec, then because MAPSec provides \nsecurity on links between nodes, it eliminates Level 2 \nphysical access. For example, consider the attack graph \ngenerated by eCAT shown in Figure 12.12 . Here, Node \n5 is set to 0, while all other nodes are set to 1. \n eCAT uses the input from Layer 0 nodes to com-\npute the Boolean probabilities for the rest of the nodes \n starting from Layer 1 and moving upward. For exam-\nple, the Boolean probability of the AND node (Node 18) \nis the product of all the nodes in the previous layer with \nthe same tree number. Because Node 5 has the same tree \nnumber as Node 18, and Node 5’s Boolean probability is \n0, Node 18’s Boolean probability is also 0. This process \nof marking attack graphs is continued until Boolean prob-\nability of all the nodes is computed till the topmost layer. \n ● Hotspots. Graph marking also marks the network \nhotspots in the attack graph. With respect to the \nattack graph, hotspots are the Layer 0 nodes with the \nhighest tree number count. For example in Figure \n12.12 , Node 3 and Node 4 are the hotspots. A high \n1,2. Action: Output\nmessage Page to BSS has\nincorrect data parameter\n‘Page Type’ [Node A]\n1,2. Action: Output message\nPRN to VLR has incorrect\ndata parameter ‘Alerting\nPattern’ [Node B]\n3. Action: Output message\nPRN to VLR has incorrect\ndata parameter ‘Alerting\nPattern’ [Node E]\n1,2. Action: Corrupt data\nparameter ‘Alerting Pattern’\ncorrupts data ‘Page Type’ in\nProcess ICH_VLR in VLR [Node D]\n1,2. Action: Output\nmessage PRN to VLR has\nincorrect data parameter\n‘Alerting Pattern’ [Node F]\n1. Action: Corrupt\ndata parameter\n‘Alerting Pattern’ \nOutput in message\nSRI [Node I]\n1. PA: Level 3\n[Node M]\n1. Vul: Data\nSource or service\nLogic [Node O]\n1. Tgt: GMSC\n[Node N]\n1. Tgt: HLR\n[Node P]\n2,3. PA: Level 2\n[Node R]\n3. Action: Incoming\nmessage PRN arriving\nat VLR [Node T]\n2. Action: Incoming\nmessage SRI arriving\nat HLR [Node Q]\n2,3. Vul:\nMessage\n[Node U]\n3. Tgt: VLR\n[Node S]\n2. Action: Corrupt\ndata parameter\n‘Alerting Pattern’\nin message SRI\n[Node J]\n3. Action: Corrupt data\nparameter ‘Alerting\nPattern’ in message\nPRN [Node K]\n3. Action:\nIncoming\nmessage SIFIC\narriving at VLR\n[Node L]\n1,2. Action:\nIncoming message\nSIFIC arriving at\nVLR [Node G]\n3. Action: Corrupt data parameter\n‘Alerting Pattern’ corrupts data\n‘Page Type’ in Process ICH_VLR\nin VLR [Node H]\n3. Action: Output message\nPage to BSS has incorrect\ndata parameter ‘Page Type’\n[Node C]\nA\nA\nA\nA\nA\nA\nA\nA\nA\nA\nA\nLayer 0\nLayer 1\nLayer 2\nLayer 3\nLayer 4\nLayer 5\n FIGURE 12.11 Attack graph for alerting attack. 
\n" }, { "page_number": 234, "text": "Chapter | 12 Cellular Network Security\n201\ntree number count indicates an increased attractive-\nness of the network location to adversaries. This is \nbecause by breaking into the network location indi-\ncated by the hotspot node, the adversary has a higher \nlikelihood of success and can cause the greatest \namount of damage. \n Extensive testing of eCAT on several of the network \nservices using MAPSec has revealed hotspots to be “ Data \nSources and Service Logic. ” This is because a corrupt \ndata source or service logic may be used by many dif-\nferent services and hence cause many varied cascading \neffects, spawning a large number of attacks (indicated by \nmultiple trees in attack graphs). Thus attacks that occur \ndue to exploiting service logic and data source vulnera-\nbilities constitute a major portion of the networkwide vul-\nnerabilities and so a major problem. In other words, by \nexploiting service logic and data sources, the likelihood \nof attack success is very high. Therefore data source and \nservice logic protection mechanisms must be deployed. It \nmust be noted that MAPSec protects neither service logic \nnor data sources; rather, it protects MAP messages. \n ● Coverage measurement formulas. The CMF com-\nprises the following set of three formulas to capture \nthe coverage of security protocols: (1) effective \ncoverage , to capture the average effective number \nof attacks eliminated by the security protocol; the \nhigher the value of Effective Coverage the greater \nthe protection the security protocol; (2) deployment \ncoverage , to capture the coverage of protocol deploy-\nments; and (3) attack coverage , to capture the attack \ncoverage provided by the security protocol; the \nhigher this value, the greater is the security solution’s \nefficacy in eliminating a large number of attacks on \nthe network. \n Extensive use of CMF on several of the network \nservices has revealed that MAPSec has an average net-\nworkwide attack coverage of 33%. This may be attrib-\nuted to the fact that message corruption has a low \nspawning effect. Typically a single message corruption \ncauses a single attack, since messages are typically used \nby a single service. Hence MAPSec is a solution to a \nsmall portion of the total network vulnerabilities. \n Finally, in evaluating MAPSec using eCAT, it was \nobserved that though MAPSec is 100% effective in pre-\nventing MAP message attacks, it cannot prevent a suc-\ncessfully launched attack from cascading. For MAPSec \nto be truly successful, every leg of the MAP message \ntransport must be secured using MAPSec. However, the \noverhead for deploying MAPSec can be high, in terms \nof both processing load and monetary investment. Also, \nas MAP messages travel through third-party networks en \nroute to their destinations, the risk level of attacks with-\nout MAPSec is very high. Hence, MAPSec is vital to \nprotect MAP messages. \n In conclusion, because MAPSec can protect against \nonly 33% of attacks, it alone is insufficient to protect the \nnetwork. A complete protection scheme for the network \nmust include data source and service logic protection. \n 6. DISCUSSION \n Next to the Internet, the cellular network is the most \nhighly used communication network. It is also the most \nvulnerable, with inadequate security measures making it \nT# 5. Action: Corrupt\ndata ‘SC_Addr’ in message\n‘MAP_MT_Fwd_SM_ACK’\narriving at MSC\n[Node 22] P\u00030\nLayer 1\nLayer 0\nT# 6. 
Action: Corrupt\ndata ‘SC_Addr’ in message\n‘MAP_MO_Fwd_SM’\narriving at I-MSC\n[Node 18] P\u00030\nT# 9. Action: Corrupt\ndata ‘PDU’ at MSC\n[Node 23] P\u00031\nT# 7,8. Action: Corrupt\ndata ‘PDU’ at I-MSC\n[Node 19] P\u00031\nT# 10. Action: Corrupt\ndata ‘PDU’ in message\n‘MAP_MT_Fwd_SM’\narriving at MSC\n[Node 24] P\u00030\nT# 4,7. Action: Message\n‘MAP_MO_Fwd_SM’\narriving at I-MSC\n[Node 13] P\u00031\nT# 3. Action: Message\n‘MAP_MT_Fwd_\nSM_ACK’ arriving\nat I-MSC\n[Node 12] P\u00031\nT# 1. Action:\nMessage ‘Short_\nMessage’ arriving\nat MSC [Node 10] P\u00031\nT# 1,2,3,4,7,8,9,\nPA: Level 3\n[Node 1] P\u00031\nT# 1,2,5,9,10.\nTgt: MSC\n[Node 2] P\u00031\nT# 1,2,3,4,7,8,9.\nVul: Data Source\nor Service Logic\n[Node 3] P\u00031\nT# 12.\nTgt: MS\n[Node 8] P\u00031\nT# 5,6,10,11,13.\nPA: Level 2\n[Node 5] P\u00030\nT# 12.\nPA: Level 1\n[Node 7] P\u00031\nT# 5,6,10,11,12,13.\nVul: Message\n[Node 6] P\u00031\nT# 2. Action: Message\n‘Short_ Message_ACK’\narriving at MSC\n[Node 11] P\u00031\nT# 3,4. Action: Corrupt\ndata ‘SC_Addr’ at\nI-MSC [Node 17] P\u00031\nT# 1,2. Action: Corrupt\ndata ‘SC_Addr’ at\nMSC [Node 16] P\u00031\nT# 8. Action:\nMessage ‘Short_\nMessage’ arriving\nat I-MSC\n[Node 14] P\u00031\nT# 11. Action: Corrupt\ndata ‘PDU’ in message\n‘MAP_MO_Fwd_SM’\narriving at I-MSC\n[Node 20] P\u00030\nT# 12. Action: Corrupt\ndata ‘PDU’ in message\n‘Short Message’\narriving at MS\n[Node 25] P\u00031\nT# 13. Action: Corrupt\ndata ‘PDU’ in message\n‘Short Message’\narriving at SC\n[Node 21] P\u00031\nA\nT# 9. Action: Message\n‘MAP_MT_Fwd_SM’\narriving at MSC\n[Node 15] P\u00031\nT# 13.\nTgt: SC\n[Node 9] P\u00031\nT# 3,4,6,7,8,11,13.\nTgt: I-MSC\n[Node 4] P\u00031\n FIGURE 12.12 Fragment of a marked attack graph generated by eCAT. \n" }, { "page_number": 235, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n202\na most attractive target to adversaries that want to cause \ncommunication outages during emergencies. As the cel-\nlular network is moving in the direction of the Internet, \nbecoming an amalgamation of several types of diverse \nnetworks, more attention must be paid to securing these \nnetworks. A push from government agencies requiring \nmandatory security standards for operating cellular net-\nworks would be just the momentum needed to securing \nthese networks. \n Of all the attacks discussed in this chapter, cascading \nattacks have the most potential to stealthily cause major \nnetwork misoperation. At present there is no standard-\nized scheme to protect from such attacks. EndSec is a \ngood solution for protecting from cascading attacks, \nsince it requires every data item to be signed by the \nsource service node. Because service nodes are unlikely \nto corrupt data items and they are to be accounted for \nby their signatures, the possibility of cascading attacks \nis greatly reduced. EndSec has the added advantage of \nproviding end-to-end security for all types of signaling \nmessages. Hence, standardizing EndSec and mandating \nits deployment would be a good step toward securing the \nnetwork. \n Both Internet and PSTN connectivity are the open \ngateways that adversaries can use to gain access and \nattack the network. Because the PSTN’s security is not \ngoing to be improved, at least its gateway to the core \nnetwork must be adequately secured. Likewise, since \nneither the Internet’s design nor security will be changed \nto suit the cellular network, at least its gateways to the \ncore network must be adequately secured. 
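The EndSec recommendation above rests on every data item being signed by its source service node. EndSec's actual message formats are not given in this chapter, so the fragment below is only a rough sketch of that underlying idea, using a keyed hash (HMAC) from the Python standard library; the key handling and field names are hypothetical.

    # Rough sketch of per-data-item origin authentication (the idea behind
    # EndSec as described in this chapter); not the EndSec protocol itself.
    import hashlib
    import hmac

    def sign_items(items, node_key):
        """Source service node attaches a keyed hash to every data item it emits."""
        return {
            name: (value, hmac.new(node_key, f"{name}={value}".encode(),
                                   hashlib.sha256).hexdigest())
            for name, value in items.items()
        }

    def verify_item(name, value, tag, node_key):
        expected = hmac.new(node_key, f"{name}={value}".encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    hlr_key = b"shared-or-certified-key"          # hypothetical key material
    signed = sign_items({"roaming_number": "MSC A"}, hlr_key)

    # A receiving node rejects a data item whose value was altered in transit.
    value, tag = signed["roaming_number"]
    assert verify_item("roaming_number", value, tag, hlr_key)
    assert not verify_item("roaming_number", "MSC B", tag, hlr_key)

Public-key signatures could play the same role where the receiver should not share a secret with the source; the essential property is that a corrupt value injected or modified along the way no longer verifies, which is what limits cascading.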
\n Finally, because the cellular network is an amalgama-\ntion of many diverse networks, it has too many vulnera-\nble points. Hence, the future design of the network must \nbe planned to reduce the number of vulnerable network \npoints and reduce the number of service nodes that par-\nticipate in servicing the subscriber, thereby reducing the \nnumber of points from which an adversary may attack. \n REFERENCES \n [1] 3GPP, architectural requirements, Technical Standard 3G TS \n23.221 V6.3.0, 3G Partnership Project, May 2004. \n [2] K. Murakami, O. Haase, J. Shin, T. F. LaPorta, Mobility manage-\nment alternatives for migration to mobile internet session-based \nservices, IEEE Journal on Selected Areas in Communications \n(J-SAC), special issue on Mobile Internet, 22 (June 2004) 834 – 848. \n [3] 3GPP, 3G security, Security threats and requirements, Technical \nStandard 3G TS 21.133 V3.1.0, 3G Partnership Project, \nDecember 1999. \n [4] 3GPP, network architecture, Technical Standard 3G TS 23.002 \nV3.3.0, 3G Partnership Project, May 2000. \n [5] V. Eberspacher , GSM Switching, Services and Protocols , John \nWiley & Sons , 1999 . \n [6] 3GPP, Basic call handling - technical realization, Technical Standard \n3GPP TS 23.018 V3.4.0, 3G Partnership Project, April 1999. \n [7] 3GPP, A guide to 3rd generation security, Technical Standard \n3GPP TR 33.900 V1.2.0, 3G Partnership Project, January 2001. \n [8] 3GPP, A guide to 3rd generation security, Technical Standard \n3GPP TR 33.900 V1.2.0, 3G Partnership Project, January 2001. \n [9] B. Chatras, C. Vernhes, Mobile application part design principles, \nin: Proceedings of XIII International Switching Symposium , vol. \n1, June 1990, pp. 35 – 42. \n [10] J.A. Audestad, The mobile application part (map) of GSM, tech-\nnical report, Telektronikk 3.2004, Telektronikk, March 2004. \n [11] 3GPP, Mobile Application part (MAP) specification, Technical \nStandard 3GPP TS 29.002 V3.4.0, 3G Partnership Project, April \n1999. \n [12] K. Boman , G. Horn , P. Howard , V. Niemi , Umts security , \n Electronics Communications Engineering Journal 14 ( 5 ) ( October \n2002 ) 191 – 204 Special issue security for mobility . \n [13] 3GPP, A guide to 3rd generation security, Technical Standard \n3GPP TR 33.900 V1.2.0, 3G Partnership Project, January \n2001. \n [14] K. Kotapati, P. Liu, T. F. LaPorta, Dependency relation-based \nvulnerability analysis of 3G networks: can it identify unforeseen \ncascading attacks? Special Issue of Springer Telecommunications \nSystems on Security, Privacy and Trust for Beyond 3G Networks, \nMarch 2007. \n [15] K. Kotapati , P. Liu , T. F. LaPorta , Evaluating MAPSec by mark-\ning attack graphs , ACM/Kluwer Journal of Wireless Networks \nJournal (WINET) ( March 2008 ) . \n [16] K. Kotapati, P. Liu, T. F. LaPorta, EndSec: An end-to-end message \nsecurity protocol for cellular networks, IEEE Workshop on Security, \nPrivacy and Authentication in Wireless Networks (SPAWN 2008) \nin IEEE International Symposium on a World of Wireless Mobile \nand Multimedia Networks (WOWMOM), June 2008. \n [17] K. Kotapati , P. Liu , T. F. LaPorta , Evaluating MAPSec by mark-\ning attack graphs , ACM/Kluwer Journal of Wireless Networks \nJournal (WINET) ( March 2008 ) . \n [18] K. Kotapati , P. Liu , T. F. LaPorta , Evaluating MAPSec by mark-\ning attack graphs , ACM/Kluwer Journal of Wireless Networks \nJournal (WINET) ( March 2008 ) . \n [19] D. Moore , V. Paxson , S. Savage , C. Shannon , S. Staniford , \n N. 
Weaver , Inside the slammer worm , IEEE Security and Privacy \n 1 ( 4 ) ( 2003 ) 33 – 39 . \n [20] W. Enck, P. Traynor, P. McDaniel, T. F. LaPorta, Exploiting \nopen functionality in sms-capable cellular networks, in: CCS \n‘05: Proceedings of the 12th ACM Conference on Computer and \nCommunications Security, ACM Press, 2005. \n [21] W. Enck, P. Traynor, P. McDaniel, T. F. LaPorta, Exploiting \nopen functionality in sms-capable cellular networks, in: CCS \n’05: Proceedings of the 12th ACM Conference on Computer and \nCommunications Security, ACM Press, 2005. \n [22] P. Traynor, W. Enck, P. McDaniel, T. F. LaPorta, Mitigating \nattacks on open functionality in sms-capable cellular networks, \nin: MobiCom ’06: Proceedings of the 12th Annual International \nConference on Mobile Computing and Networking, ACM Press, \n2006, pp. 182 – 193. \n [23] P. Traynor, P. McDaniel, T. F. LaPorta, On attack causality in \n internet-connected cellular networks, USENIX Security Symposium \n(SECURITY), August 2007. \n" }, { "page_number": 236, "text": "Chapter | 12 Cellular Network Security\n203\n [24] P. Traynor, W. Enck, P. McDaniel, T. F. LaPorta, Mitigating \nattacks on open functionality in SMS-capable cellular networks, \nin: MobiCom ’06: Proceedings of the 12th Annual International \nConference on Mobile Computing and Networking, ACM Press, \n2006, pp. 182 – 193. \n [25] P. Traynor, W. Enck, P. McDaniel, T. F. LaPorta, Mitigating \nattacks on open functionality in sms-capable cellular networks, \nin: MobiCom ’06: Proceedings of the 12th Annual International \nConference on Mobile Computing and Networking, ACM Press, \n2006, pp. 182 – 193. \n [26] P. Traynor, P. McDaniel, T. F LaPorta, On attack causality in \ninternet-connected \ncellular \nnetworks, \nUSENIX \nSecurity \nSymposium (SECURITY), August 2007. \n [27] T. Moore, T. Kosloff, J. Keller, G. Manes, S. Shenoi. Signaling \nsystem 7 (SS7) network security, in: Proceedings of the IEEE \n45th Midwest Symposium on Circuits and Systems, August \n2002. \n [28] G. Lorenz, T. Moore, G. Manes, J. Hale, S. Shenoi, Securing SS7 \ntelecommunications networks, in: Proceedings of the 2001 IEEE \nWorkshop on Information Assurance and Security, June 2001. \n [29] T. Moore, T. Kosloff, J. Keller, G. Manes, S. Shenoi, Signaling \nsystem 7 (SS7) network security, in: Proceedings of the IEEE \n45th Midwest Symposium on Circuits and Systems, August 2002. \n [30] K. Kotapati, P. Liu, Y. Sun, T. F. LaPorta, A taxonomy of cyber \nattacks on 3G networks, in: Proceedings IEEE International \nConference on Intelligence and Security Informatics , ISI, \nLecture Notes in Computer Science, Springer-Verlag, May 2005, \npp. 631 – 633. \n [31] K. Kotapati, P. Liu, Y. Sun, T. F. LaPorta, a taxonomy of cyber \nattacks on 3G networks, in: Proceedings IEEE International \nConference on Intelligence and Security Informatics , ISI, \nLecture Notes in Computer Science, Springer-Verlag, May 2005, \npp. 631 – 633. \n [32] K. Kotapati, Assessing Security of Mobile Telecommunication \nNetworks, Ph. D dissertation, Penn State University, August 2008. \n [33] Switch, 5ESS Switch, www.alleged.com/telephone/5ESS/ . \n [34] Telcoman, Central Offices, www.thecentraloffice.com/ . \n [35] V. Prevelakis , D. Spinellis , The Athens affair , IEEE Spectrum \n( July 2007 ) . \n [36] H. Hannu, Signaling Compression (SigComp) Requirements & \nAssumptions, RFC 3322 (Informational), January 2003. \n [37] W. Stallings , Cryptography and Network Security: Principles and \nPractice , Prentice Hall , 2000 . 
Chapter 13

RFID Security

Chunming Rong, University of Stavanger
Erdal Cayirci, University of Stavanger

Radiofrequency identification (RFID) systems use RFID tags to annotate and identify objects. When objects are processed, an RFID reader is used to read information from the tags attached to the objects. The information will then be used with the data stored in the back-end databases to support the handling of business transactions.

1. RFID INTRODUCTION

Generally, an RFID system consists of three basic components: RFID tags, RFID readers, and a back-end database.

● RFID tags or RFID transponders. These are the data carriers attached to objects. A typical RFID tag contains information about the attached object, such as an identifier (ID) of the object and other related properties of the object that may help to identify and describe it.

● The RFID reader or RFID transceiver. These devices can read information from tags and may write information into tags if the tags are rewritable.

● Back-end database.
This is the data repository responsible for the management of data related to the tags and business transactions, such as IDs, object properties, reading locations, reading times, and so on.

RFID System Architecture

The architecture of RFID systems is illustrated in Figure 13.1. Tags are attached to or embedded in objects to identify or annotate them. An RFID reader sends out signals to a tag to request the information stored on the tag. The tag responds to the request by sending back the appropriate information. With the data from the back-end database, applications can then use the information from the tag to proceed with the business transaction related to the object.

Next we describe the RFID tags, RFID reader, and back-end database in detail.

Tags

In RFID systems, objects are identified or described by information on RFID tags attached to the objects. An RFID tag basically consists of a microchip that is used for data storage and computation, and a coupling element, such as an antenna, for communicating with the RFID reader via radio frequency communication. Some tags may also have an on-board battery to supply a limited amount of power.

RFID tags can respond to radio frequencies sent out by RFID readers. On receiving the radio signals from an RFID reader, an RFID tag will either send back the requested data stored on the tag or write data into the tag, if the tag is rewritable. Because radio signals are used, RFID tags do not require line of sight or precise positioning to connect with the reader, as barcodes do. Tags may also generate a certain amount of electric power from the radio signals they receive, to power the computation and transmission of data.

RFID tags can be classified based on four main criteria: power source, type of memory, computational power, and functionality.

A basic and important way to classify RFID tags is by power source. Tags can be categorized into three classes: active, semiactive, and passive RFID tags.

Active RFID tags have on-board power sources, such as batteries. Active RFID tags can proactively send radio signals to an RFID reader and possibly to other tags [1] as well. Compared with tags without on-board power, active tags have a longer transmission range and are more reliable. Active tags can work in the absence of an RFID reader. On the other hand, the on-board power supply also increases the cost of active tags.

Semiactive RFID tags also have on-board power sources for powering their microchips, but they use the RFID reader's energy field for actually transmitting their data [2] when responding to incoming transmissions. Semiactive tags have medium transmission range and cost.

Passive RFID tags do not have internal power sources and cannot initiate any communications. Passive RFID tags generate power from the radio signals sent out by an RFID reader in the course of communication. Thus passive RFID tags can only work in the presence of an RFID reader. Passive tags have the shortest transmission range and the lowest cost. The differences among active, semiactive, and passive tags are shown in Table 13.1.

RFID tags may have read-only memory, write-once/read-many memory, or fully rewritable memory.
RFID tags can be classified into three categories according to the type of memory that a tag uses: read-only tags, write-once/read-many tags, and fully rewritable tags. The information on read-only tags cannot be changed during the life cycle of the tags. Write-once/read-many tags can be initialized with application-specific information. The information on fully rewritable tags can be rewritten many times by an RFID reader.

According to their computational power, RFID tags can be classified into three categories: basic tags, symmetric-key tags, and public-key tags. Basic tags do not have the ability to perform cryptographic computation. Symmetric-key tags and public-key tags have the ability to perform symmetric-key and public-key cryptographic computation, respectively.

RFID tags can also be classified according to their functionality. The MIT Auto-ID Center defined five classes of tags according to their functionality in 2003 [3]: Class 0, Class 1, Class 2, Class 3, and Class 4 tags. Every class has different functions and different requirements for tag memory and power resources. Class 0 tags are passive and do not contain any memory. They only announce their presence and offer electronic article surveillance (EAS) functionality. Class 1 tags are typically passive. They have read-only or write-once/read-many memory and can only offer identification functionality. Class 2 tags are mostly semiactive and active. They have fully rewritable memory and can offer data-logging functionality. Class 3 tags are semiactive and active tags. They contain on-board environmental sensors that can record temperature, acceleration, motion, or radiation and require fully rewritable memory. Class 4 tags are active tags and have fully rewritable memory. They can establish ad hoc wireless networks with other tags because they are equipped with wireless networking components.

FIGURE 13.1 RFID system architecture.

TABLE 13.1 Tags classified by power source

Power source          | Active tags | Semiactive tags | Passive tags
On-board power supply | Yes         | Yes             | No
Transmission range    | Long        | Medium          | Short
Communication pattern | Proactive   | Passive         | Passive
Cost                  | Expensive   | Medium          | Cheap

RFID Readers

An RFID reader (transceiver) is a device used to read information from, and possibly also write information into, RFID tags. An RFID reader is normally connected to a back-end database for sending information to that database for further processing.

An RFID reader consists of two key functional modules: a high-frequency (HF) interface and a control unit. The HF interface performs three functions: generating the transmission power to activate the tags, modulating the signals for sending requests to RFID tags, and receiving and demodulating the signals received from tags. The control unit of an RFID reader also has three basic functions: controlling the communication between the RFID reader and RFID tags, encoding and decoding signals, and communicating with the back-end server for sending information to the back-end database or executing commands from the back-end server.
The control unit can perform more functions in the case of complex RFID systems, such as executing anticollision algorithms in the course of communicating with multiple tags, encrypting requests sent by the RFID reader and decrypting responses received from tags, and performing the authentication between RFID readers and RFID tags [4].

RFID readers can provide high-speed tag scanning. Hundreds of objects can be handled by a single reader within a second; thus readers are scalable enough for applications such as supply chain management, where a large number of objects need to be processed frequently. RFID readers need only be placed at every entrance and exit. When products enter or leave the designated area by passing through an entrance or exit, the RFID readers can instantly identify the products and send the necessary information to the back-end database for further processing.

Back-End Database

The back-end database resides on the back-end server and manages the information related to the tags in an RFID system. Every object's information can be stored as a record in the database, and the information on the tag attached to the object can serve as a pointer to the record.

The connection between an RFID reader and a back-end database can be assumed to be secure, whether it runs over a wireless link or TCP/IP, because the constraints on readers are not very tight and security solutions such as SSL/TLS can be implemented for them [5].

RFID Standards

Currently, as different frequencies are used for RFID systems in various countries and many standards are adopted for different kinds of applications, there is no agreement on a universal standard accepted by all parties. Several kinds of RFID standards [6] are being used today. These standards cover contactless smart cards, item management tags, RFID systems for animal identification, and EPC tags. These standards specify the physical layer and link layer characteristics of RFID systems but do not cover the upper layers.

Contactless smart cards can be classified into three types according to their communication ranges. The ISO standards for them are ISO 10536, ISO 14443, and ISO 15693. ISO 10536 sets the standard for close-coupling smart cards, for which the communication range is about 0–1 cm. ISO 14443 sets the standard for proximity-coupling smart cards, which have a communication range of about 0–10 cm. ISO 15693 specifies vicinity-coupling smart cards, which have a communication range of about 0–1 m. Proximity-coupling and vicinity-coupling smart cards have already been implemented with cryptographic algorithms such as 128-bit AES, triple DES, and SHA-1, and with challenge-response authentication mechanisms, to improve system security [7].

Item management tag standards include ISO 15961, ISO 15962, ISO 15963, and the ISO 18000 series [8]. ISO 15961 defines the host interrogator, tag functional commands, and other syntax features of item management. ISO 15962 defines the data syntax of item management, and ISO 15963 covers "Unique Identification of RF tag and Registration Authority to manage the uniqueness." For the ISO 18000 series, part 1 describes the reference architecture and parameter definitions; parts 2, 3, 4, 5, 6, and 7 define the parameters for air interface communications below 135 kHz, at 13.56 MHz, at 2.45 GHz, at 860 MHz, at 960 MHz, and at 433 MHz, respectively.
Standards for RFID systems for animal identification include ISO 11784, ISO 11785, and ISO 14223 [9]. ISO 11784 and ISO 11785 define the code structure and technical concepts for radiofrequency identification of animals. ISO 14223 includes three parts: air interface, code and command structure, and applications. These kinds of tags use low frequencies for communication and offer limited protection for animal tracking [10].

The EPC standard was created by the MIT Auto-ID Center, an association of more than 100 companies and university labs. The EPC system is currently operated by EPCglobal [11]. A typical EPC network has four parts [12]: the electronic product code, the identification system that includes RFID tags and RFID readers, the Savant middleware, and the object naming service (ONS). The first- and second-generation EPC tags cannot support strong cryptography to protect the security of RFID systems, due to the limitation of computational resources, but both of them can provide a kill command to protect the privacy of the consumer [13].

EPC tag encoding includes a Header field followed by one or more Value fields. The Header field defines the overall length and format of the Value fields. There are two kinds of EPC format: the EPC 64-bit format and the EPC 96-bit format. In the most recent version [14], the 64-bit format was removed from the standard. As shown in Table 13.2, the format includes four fields: a header (8 bits), an EPC manager number (28 bits), an object class (24 bits), and a serial number (36 bits). The header and the EPC manager number are assigned by EPCglobal [15], and the object class and the serial number are assigned by the EPC manager owner. The EPC header identifies the length, type, structure version, and generation of the EPC. The EPC manager number identifies the entity responsible for maintaining the subsequent partitions of the EPC. The object class identifies a class of objects, and the serial number identifies the individual instance.

RFID Applications

Recently more and more companies and organizations have begun to use RFID tags rather than traditional barcodes, because RFID systems have many advantages over traditional barcode systems. First, the information stored in RFID tags can be read by RFID readers without line of sight, whereas barcodes can only be scanned within the line of sight. Second, the distance between a tag and a reader is longer compared with the barcode system. For example, an RFID reader can read information from a tag at a distance of as much as 300 feet, whereas the read range for a barcode is typically no more than 15 feet. Third, RFID readers can scan hundreds of tags in seconds. Fourth, since most RFID tags today are produced using silicon technology, more functions can be added to them, such as larger memory for information storage and computational ability to support various kinds of encryption and decryption algorithms; thus privacy can be better protected and the tags cannot be easily cloned by attackers. In addition, the information stored in a barcode cannot be changed after it is imprinted, whereas for RFID tags with rewritable memory the information can be updated when needed.
With these characteristics and advantages, RFID has been widely adopted and deployed in various areas. Currently, RFID can be used in passports, transportation payments, product tracking, lap scoring, animal identification, inventory systems, RFID mandates, promotion tracking, human implants, libraries, schools and universities, museums, and social retailing. These myriad applications of RFID can be classified into seven classes according to the purpose of identifying items [16]. These classes are asset management, tracking, authenticity verification, matching, process control, access control, and automated payment. Table 13.3 lists the identification purposes of the various application types.

Asset management involves determining the presence of tagged items and helping manage item inventory. One possible application of asset management is electronic article surveillance (EAS). For example, every good in a supermarket is attached to an EAS tag, which is deactivated if it is properly checked out. RFID readers at the supermarket exits can then automatically detect unpaid goods when they pass through.

Tracking is used to identify the location of tagged items. If the readers are fixed, a single reader can cover only one area. To effectively track the items, a group of readers is needed, together with a central system to deal with the information from the different readers.

Authenticity verification methods are used to verify the source of tagged items. For example, by adding a cryptography-based digital signature to the tag, the system can prevent tag replication and make sure that the good is labeled with the source information.

Matching is used to ensure that affiliated items are not separated. Examples of matching applications include matching mothers and their newborn babies in the hospital, and letting airline passengers match their checked luggage to prevent theft.

Access control is used for person authentication. Buildings may use contactless RFID card systems to identify authorized people. Only those authorized people with the correct RFID card can authenticate themselves to the reader to open a door and enter a building. Using a car key with RFID tags, a car owner can open his own car automatically, another example of RFID's application to access control.

Process control involves decision making by correlating tagged item information. For example, RFID readers in different parts of an assembly line can read the information on the products, which can be used to help production managers make suitable decisions.

Automated payment is used to conduct financial transactions. The applications include payment for toll expressways and at gas stations. These applications can improve the speed of payment and hasten the processing of these transactions.

TABLE 13.2 EPC basic format

Header | EPC Manager Number | Object Class | Serial Number

TABLE 13.3 RFID application purpose

Application Type          | Identification Purpose
Asset management          | Determine item presence
Tracking                  | Determine item location
Authenticity verification | Determine item source
Matching                  | Ensure affiliated items are not separated
Process control           | Correlate item information for decision making
Access control            | Person authentication
Automated payment         | Conduct financial transaction
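To make the 96-bit EPC layout summarized in Table 13.2 concrete, the following sketch packs and unpacks the four fields (8-bit header, 28-bit EPC manager number, 24-bit object class, 36-bit serial number). It is a minimal illustration only: the field widths come from the format description in the preceding section, while the packing order, function names, and example values are our own assumptions, not part of the EPCglobal specification.

```python
# Hypothetical sketch of the 96-bit EPC layout from Table 13.2:
# header (8 bits) | EPC manager number (28 bits) | object class (24 bits) | serial number (36 bits).
# Field widths follow the text; packing order and names are illustrative assumptions.

FIELDS = [("header", 8), ("epc_manager", 28), ("object_class", 24), ("serial_number", 36)]

def pack_epc(header, epc_manager, object_class, serial_number):
    """Pack the four fields into a single 96-bit integer."""
    values = [header, epc_manager, object_class, serial_number]
    epc = 0
    for (name, width), value in zip(FIELDS, values):
        if value >= 1 << width:
            raise ValueError(f"{name} does not fit in {width} bits")
        epc = (epc << width) | value
    return epc

def unpack_epc(epc):
    """Split a 96-bit integer back into its four fields."""
    fields = {}
    for name, width in reversed(FIELDS):
        fields[name] = epc & ((1 << width) - 1)
        epc >>= width
    return fields

if __name__ == "__main__":
    code = pack_epc(header=0x30, epc_manager=0x0ABCDEF,
                    object_class=0x01234, serial_number=0x56789ABCD)
    print(hex(code))         # the 96-bit EPC as a hex string
    print(unpack_epc(code))  # recovers the four fields
```

A back-end database can use the unpacked EPC manager number and object class to locate the record for a tagged object, with the serial number distinguishing individual instances.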
2. RFID CHALLENGES

RFID systems have been widely deployed in some areas, perhaps even before the expectations of RFID researchers and RFID service providers were fully satisfied. There are many limitations of RFID technology that restrain the deployment of RFID applications, such as the lack of universal standardization in the industry and concerns about security and privacy problems that may affect individuals and organizations. The security and privacy issues pose a huge challenge for RFID applications. Here we briefly summarize some of the challenges facing RFID systems.

Counterfeiting

As described earlier in the chapter, RFID tags can be classified into three categories based on their computational power: basic tags, symmetric-key tags, and public-key tags. Symmetric-key and public-key tags can implement cryptographic protocols for authentication with private keys and public keys, respectively. Basic tags are not capable of performing cryptographic computation. Although they lack the capability to perform cryptographic computation, basic tags are the most widely used, in applications such as supply chain management and travel systems. With the widespread application of fully writable or even reprogrammable basic tags, counterfeiters can easily forge basic tags in real-world applications, and these counterfeit tags can be used in multiple places at the same time, which can cause confusion.

The counterfeiting of tags can be categorized into two areas based on the technique used for tampering with tag data: modifying tag data and adding data to a blank tag. In real-world applications, we face counterfeiting threats such as the following [7]:

● The attacker can modify valid tags to make them invalid or modify invalid tags to make them valid.

● The attacker can modify a high-priced object's tag to read as a low-priced object's tag, or modify a low-priced object's tag to read as a high-priced one.

● The attacker can modify an object's tag to be the same as the tags attached to other objects.

● The attacker can create an additional tag for personal reasons by reading the data from an authorized tag and adding this data to a blank tag in real-world applications, such as in a passport or a shipment of goods.

Sniffing

Another main issue of concern in deploying RFID systems is the sniffing problem. It occurs when third parties use a malicious and unauthorized RFID reader to read the information on RFID tags within their transmission range. Unfortunately, most RFID tags are indiscriminate in their responses to reading requests transmitted by RFID readers and do not have access control functions to provide any protection against an unauthorized reader. Once an RFID tag enters a sufficiently powered reader's field, it receives the reader's requests via radio frequency. As long as the request is well formed, the tag will reply to the request with the corresponding information stored on the tag. The holder of the unauthenticated reader may then use this information for other purposes.

Tracking

With multiple RFID readers integrated into one system, the movements of objects can be tracked by fixed RFID readers [18].
For example, once a specific tag can be associated with a particular person or object, when the tag enters a reader's field the reader can obtain the specific identifier of the tag, and the presence of the tag within the range of a specific reader implies specific location information related to the attached person or object. With location information coming from multiple RFID readers, an attacker can follow the movements of people or objects. Tracking can also be performed without decrypting the encrypted messages coming from RFID readers [19]. Generally, the more messages the attacker captures, the more location or privacy information can be obtained from them.

One way to track is to generate maps of RFID tags with mobile robots [20]. A sensor model is introduced to compute the likelihood of tag detections, given the relative pose of the tag with respect to the robot. In this model, a highly accurate FastSLAM algorithm is used to learn the geometrical structure of the environment around the robots, which are equipped with a laser range scanner. A recursive Bayesian filtering scheme then estimates the posterior locations of the RFID tags, which can be used to localize robots and people in the environment using the geometrical structure learned by the FastSLAM algorithm.

There is another method to detect the motion of passive RFID tags that are within a detecting antenna's field. The response rate at the reader is used to study the impact of four cases of tag movement, providing prompt and accurate detection, and to study the influence of the environment. The idea of multiple tags/readers is introduced to improve performance. The movement-detection algorithms can be improved and integrated into an RFID monitoring system to localize the positions of the tags. The method requires neither modification of communication protocols nor additional hardware.

In real-world applications, there exists the following tracking threat:

● The attacker can track a potential victim by monitoring the person's movements and then perform illegal actions against the victim [21].

Denial of Service

Denial of service (DoS) takes place when RFID readers or back-end servers cannot provide expected services. DoS attacks are easy to accomplish and difficult to guard against [22]. The following are nine DoS threats:

● Killing tags so that they are disabled, disrupting readers' normal operations. EPCglobal has proposed that a tag have a "kill" command to destroy it and protect consumer privacy. If an attacker knows the password of a tag, he can easily "kill" the tag in real-world applications. Class-0, Class-1 Generation-1, and Class-1 Generation-2 tags are all equipped with the kill command.

● Carrying a blocker tag that can disrupt the communication between an RFID reader and RFID tags. A blocker tag is a cheap, passive RFID device that can simulate many basic RFID tags at one time and render specific zones private or public. An RFID reader can only communicate with a single RFID tag at any specific time. If more than one tag responds to a request coming from the reader at the same time, "collision" happens.
In this case, the reader cannot receive the information sent by the tags, which makes the system unavailable to authorized users.

● Carrying a special absorbent tag that can be tuned to the same radio frequencies used by legitimate tags. The absorbent tag can absorb the energy or power generated by the radiofrequency signals sent by the reader, and the resulting reduction in the reader's energy may leave the reader unable to communicate with other tags.

● Removing, physically destroying, or erasing the information on tags attached to or embedded in objects. The reader can no longer communicate with the damaged tags in a normal way.

● Shielding the RFID tags from scrutiny using a Faraday cage. A Faraday cage is a container made of a metal enclosure that blocks radio signals, so tags inside it cannot be read by the readers [23].

● Carrying a device that actively broadcasts radio signals or noise more powerful than the signals returned by the tags, so as to block or disrupt the communication of any nearby RFID readers and make the system unavailable to authorized users. The power of the broadcast can be so high that it causes severe blockage or disruption of all nearby RFID systems, even those in legitimate applications where privacy is not a concern [24].

● Performing a traditional Internet DoS attack to prevent the back-end servers from gathering EPC numbers from the readers. The servers then do not receive enough information from the readers and cannot provide the additional services expected of them.

● Performing a traditional Internet DoS attack against the object naming service (ONS), which can deny that service.

● Sending URL queries to a database to keep the database busy with these queries. The database may then deny access to authorized users.

Other Issues

Besides the four basic types of attack (counterfeiting, sniffing, tracking, and denial of service), there are some other threats to RFID systems in real-world applications.

Spoofing

Spoofing attacks take place when an attacker successfully poses as an authorized user of a system [25]. Spoofing attacks are different from counterfeiting and sniffing attacks, though they are all falsification types of attack. Counterfeiting takes place when an attacker forges RFID tags that can be scanned by authorized readers. Sniffing takes place when an attacker forges authorized readers that can scan the authorized tags to obtain useful information. In spoofing, however, what is forged is an authorized user of the system. The following spoofing threats exist in real-world applications [26]:

● The attacker can pose as an authorized EPCglobal Information Service Object Naming Service (ONS) user. If the attacker successfully poses as an authorized ONS user, he can send queries to the ONS to gather EPC numbers. From the EPC numbers, the attacker may then easily obtain location, identification, or other private information.

● The attacker can pose as an authorized database user in an RFID system. The database stores the complete information about the objects, such as manufacturer, product name, read time, read location, and other private information.
If the attacker successfully poses as an authorized database user and an authorized ONS user, he can send queries to the ONS to obtain the EPC number of an object and then get the complete information on the object by mapping the EPC number to the information stored in the database.

● The attacker can also pose as an ONS server. If the impersonation is successful, he can easily use the ONS server to gather EPC numbers, respond to invalid requests, deny normal service, and even change the data or write malicious data to the system.

Repudiation

Repudiation takes place when a user denies performing an action, or when no proof exists that the action was performed [27]. There are two kinds of repudiation threats:

● The sender or the receiver denies performing the send and receive actions. A nonrepudiation protocol can be used to resolve this problem.

● The owner of the EPC number or the back-end server denies that it has the information from the objects to which the tags are attached.

Insert Attacks

Insert attacks take place when an attacker inserts system commands into the RFID system where data is normally expected [28]. In real-world applications, the following attack exists:

● A system command rather than valid data is carried by a tag in its data storage memory.

Replay Attacks

Replay attacks take place when an attacker intercepts the communication signals between an RFID reader and an RFID tag and records the tag's response. The tag's response can then be reused when the attacker detects that the reader is sending requests to other tags [29]. The following two threats exist:

● The attacker can record the communications between proximity cards and a building access reader and play them back to access the building.

● The attacker can record the response that an RFID card in a car gives to an automated highway toll collection system and replay that response when the attacker's own car passes the automated toll station.

Physical Attacks

Physical attacks are very powerful attacks in which the attacker physically obtains tags and performs unauthorized physical operations on them. Fortunately, physical attacks cannot be carried out in public or on a widespread scale, except for Transient Electromagnetic Pulse Emanation Standard (TEMPEST) attacks. The following physical attacks exist [30,31]:

● Probe attacks. The attacker can use a probe directly attached to the circuit to obtain or change the information on tags.

● Material removal. The attacker can use a knife or other tools to remove the tags attached to objects.

● Energy attacks. These attacks can be either of the contact or contactless variety; contactless energy attacks must be carried out close to the system.

● Radiation imprinting. The attacker can use an X-ray band or other radial bands to destroy the data unit of a tag.

● Circuit disruption. The attacker can use strong electromagnetic interference to disrupt tag circuits.

● Clock glitch. The attacker can lengthen or shorten the clock pulses to a clocked circuit and disrupt normal operations.

Viruses

Viruses are old attacks that threaten the security of all information systems, including RFID systems.
RFID viruses always target the back-end database in the server, perhaps destroying or revealing the data or information stored in the database. The following virus threats exist:

● An RFID virus destroys or reveals the data or information stored in the database.

● An RFID virus disturbs or even stops the normal services provided by the server.

● An RFID virus threatens the security of the communications between RFID readers and RFID tags or between the back-end database and RFID readers.

Social Issues

Due to the security challenges in RFID, many people do not trust RFID technologies and fear that they could allow attackers to steal their private information. Weis [32] presents two main arguments that lead some people to choose not to rely on RFID technology and even to regard RFID tags as the "mark of the beast." However, security issues cannot prevent the success of RFID technology.

The first argument is that RFID tags are regarded as the best replacement for current credit cards and all other ways of paying for goods and services, while also serving as identification. Replacing current ways of paying with RFID tags requires that people accept RFID tags instead of credit cards, so that they cannot buy or sell anything without RFID tags.

There is a second argument [33]: "Since RFID tags are also used as identification, they should be implanted to avoid losing the ID or switching it with someone. Current research has shown that the ideal location for the implant is indeed the forehead or the hand, since they are easy to access and unlike most other body parts they do not contain much fluid, which interferes with the reading of the chip."

Comparison of All Challenges

Previously in this chapter we introduced some of the challenges that RFID systems are facing. Every challenge or attack can have a different method or goal, and the consequences for the RFID system after an attack may also differ. In this part of the chapter, we briefly analyze the challenges according to attack method, attack goal, and the consequences for RFID systems after attacks (see Table 13.4).

The first four challenges are the basic challenges in RFID systems and correspond to the four basic use cases. Counterfeiting happens when counterfeiters forge RFID tags by copying the information from a valid tag or adding well-formed information to a new tag in the RFID system. Sniffing happens when an unauthorized reader reads the information from a tag, and the information may then be utilized by attackers. Tracking happens when an attacker who unlawfully holds some readers monitors the movements of objects to which RFID tags readable by those readers are attached. Denial of service happens when the components of RFID systems deny the RFID service.

The last seven challenges or attacks can always happen in RFID systems (see Table 13.4). Spoofing happens when an attacker poses as an authorized user of an RFID system, on which the attacker can then perform invalid operations. Repudiation happens when a user or component of an RFID system denies an action it performed and there is no proof that the user did perform the action.
Insert attacks happen when an attacker inserts invalid system commands into the tags, and operations may then be carried out by the invalid commands. Replay attacks happen when an attacker intercepts the response of a tag and reuses the response for another communication. Physical attacks happen when an attacker performs physical operations on RFID tags; these attacks disrupt communications between the RFID readers and tags. A virus is a security challenge for all information systems; it can disrupt the operations of RFID systems or reveal the information in those systems. Social issues involve users' psychological attitudes, which can influence users' adoption of RFID technologies for real-world applications.

3. RFID PROTECTIONS

According to their computational power, RFID tags can be classified into three categories: basic tags, symmetric-key tags, and public-key tags. In the next part of the chapter, we introduce some protection approaches for these three kinds of RFID tags.

Basic RFID System

Price has been one of the biggest factors considered when making decisions on RFID deployments. Basic tags are available at the cheapest price, compared with symmetric-key tags and public-key tags. Due to the limited computation resources built into a basic tag, basic tags are not capable of performing cryptographic computations. This imposes a huge challenge for implementing protections on basic tags, since cryptography has been one of the most important and effective ways to implement protection mechanisms. Recently several approaches have been proposed to tackle this issue.

Most of the approaches to security protection for basic tags focus on protecting consumer privacy. A usual method is tag killing, proposed by EPCglobal. In this approach, when the reader wants to kill a tag, it sends a kill message to the tag to permanently deactivate it. Together with the kill message, a 32-bit tag-specific PIN code is also sent to the target tag, to avoid killing other tags. On receiving this kill message, a tag will deactivate itself, after which the tag becomes inoperative. Generally, tags are killed when the tagged items are checked out in shops or supermarkets. This is very similar to removing the tags from the tagged items when they are purchased. It is an efficient method of protecting the privacy of consumers, since a killed tag can no longer send out information.

The disadvantage of this approach is that it reduces the post-purchase benefits of RFID tags. In some cases, RFID tags need to be operative only temporarily. For example, RFID tags used in libraries and museums for tagging books and other items need to work at all times and should not be killed or removed from the tagged items. In these cases, instead of being killed or removed, tags can be made temporarily inactive. When a tag needs to be reawakened, an RFID reader can send a wake message to the tag with a 32-bit tag-specific PIN code, which is sent to avoid waking up other tags. This, however, requires the management of PIN codes for tags, which brings some inconvenience.

Another approach to protecting privacy is tag relabeling, which was first proposed by Sarma et al. [34].
In this scheme, to protect consumers' privacy, identifiers of RFID tags are effaced when tagged items are checked out, but the information on the tags is kept for later use. Inoue and Yasuura [35] proposed that consumers store the identifiers of the tags and give each tag a new identifier. When needed, people can reactivate the tags with the new identifiers. This approach allows users to manage tagged items throughout the items' life cycle. A third approach is to allocate each tag a new random number at each checkout; thus attackers cannot rely on the identifiers to collect information about customers [36]. This method does not solve the problem of tracking [37]. To prevent tracking, the random numbers need to be refreshed frequently, which will increase the burden on consumers.

TABLE 13.4 Comparison of all challenges or attacks in RFID systems

Challenge or Attack | Attack Method | Attack Goal | Direct Consequence
Counterfeiting | Forge tags | Tag | Invalid tags
Sniffing | Forge readers | Reader | Reveals information
Tracking | Monitor the movement of objects | Objects of an RFID system | Tracks the movement of objects
Denial of service | RF jamming, kill command, physical destruction, and so on | Reader, back-end database or server | Denies normal services
Spoofing | Pose as an authorized user | User | Invalid operations by an invalid user
Repudiation | Deny an action, or no proof that the action was performed | Tag, reader, back-end database or server | Deniable actions
Insert attacks | Insert invalid commands | Tag | Invalid operations by invalid commands
Replay attacks | Reuse the response of tags | Communication between RFID tags and readers | Invalid identification
Physical attacks | Physical operations on tags | Tag | Disrupts or destroys communication between RFID tags and readers
Virus | Insert invalid data | Back-end database or server | Destroys the data or services of the system
Social issues | Social attitude | Psychology of potential users | Restricts widespread application

Juels proposed a system called the minimalist system [38], in which every tag has a list of pseudonyms; for every reader query, the tag responds with a different pseudonym from the list and returns to the beginning of the list when the list is exhausted. It is assumed that only authorized readers know all these tag pseudonyms. Unauthorized readers that do not know these pseudonyms cannot identify the tags correctly. To prevent unauthorized readers from obtaining the pseudonym list by frequent queries, the tags respond to an RFID reader's requests at a relatively low rate, which is called pseudonym throttling. Pseudonym throttling is useful, but it cannot provide a high level of privacy for consumers, because with the tag's small memory the number of pseudonyms in the list is limited. To tackle this problem, the protocol allows an authorized RFID reader to refresh a tag's pseudonym list.

Juels and Pappu [39] proposed protecting consumers' privacy by using tagged banknotes. The proposed scheme uses public-key cryptography to protect the serial numbers of tagged banknotes.
The serial number of a tagged banknote is encrypted using a public key to generate a ciphertext, which is saved in the memory of the tag. On receiving a request for the serial number, the tag responds with this ciphertext. Only law enforcement agencies know the corresponding private key and can decrypt this ciphertext to recover the banknote's serial number. To prevent tracking of banknotes, the ciphertext is reencrypted periodically. To prevent the ciphertext of a banknote from being reencrypted by an attacker, the tagged banknote can use an optical write-access key. A reader that wants to reencrypt this ciphertext needs to scan the write-access key first. In this system only one key pair, a public key and a private key, is used. But this is not enough for general RFID systems. Using multiple key pairs would impair the privacy of RFID systems, since if the reader wants to reencrypt the ciphertext, it needs to know the corresponding public key of the tag.

For this reason, a universal reencryption algorithm has been introduced [40]. In this approach, an RFID reader can reencrypt the ciphertext without knowing the corresponding public key of a tag. The disadvantage of this approach is that attackers can substitute the ciphertext with a new ciphertext, so the integrity of the ciphertext cannot be protected. This problem can be solved by signing the ciphertext with a digital signature [41], since only an authenticated reader can then access the ciphertext.

Floerkemeier et al. [42] introduced another approach to protecting consumer privacy by using a specially designed protocol. In their approach, they first designed the communication protocol between RFID tags and RFID readers. This protocol requires an RFID reader to provide information about the purpose and the collection type of the query. In addition, a privacy-enforcing device called a watchdog tag is used in the system. This watchdog tag is a kind of sophisticated RFID tag that is equipped with a battery, a small screen, and a long-range communication channel. A watchdog tag can be integrated into a PDA or a cell phone and can decode the messages from an RFID reader and display them on the screen for the user to read. With a watchdog tag, a user can know not only the information from the RFID readers in the vicinity of the tag but also the ID, the query purpose, and the collection type of the requests sent by the RFID readers. With this information, the user is able to identify unwanted communications between tags and an RFID reader, making this method useful for avoiding the reader ID spoofing attack.

Rieback, Crispo, and Tanenbaum [43] proposed another privacy-enforcing device called RFID Guardian, which is also a battery-powered RFID tag that can be integrated into a PDA or a cell phone to protect user privacy. RFID Guardian is actually a user privacy protection platform for RFID systems. It can work as an RFID reader to request information from RFID tags, or it can work like a tag to communicate with a reader. RFID Guardian has four different security properties: auditing, key management, access control, and authentication. It can audit RFID readers in its vicinity, record information about them, such as commands, related parameters, and data, and provide this information to the user. Using this information, the user can identify illegal scanning.
In some cases, a user might not know about or could forget the tags in his vicinity. With the help of RFID Guardian, the user can detect all the tags within radio range and can then deactivate tags according to his choice.

For RFID tags that use cryptographic methods to provide security, one important issue is key management. RFID Guardian can perform two-way RFID communications and can generate random values. These features are very useful for key exchange and key refresh. Using coordination of security primitives, context awareness, and tag-reader mediation, RFID Guardian can provide access control for RFID systems [44]. Also, using two-way RFID communication and standard challenge-response algorithms, RFID Guardian can provide off-tag authentication for RFID readers.

Another approach to privacy protection was proposed by Juels, Rivest, and Szydlo [45]. In this approach, a cheap, passive RFID tag is used as a blocker tag. Since this blocker tag can simulate many RFID tags at the same time, it is very difficult for an RFID reader to identify the real tag carried by the user. The blocker tag can simulate all possible RFID tags, or it can simulate only a select set of the tags, making it convenient for the user to manage the RFID tags. For example, the user can tell the blocker tag to block only the tags that belong to a certain company. Another advantage of this approach is that if the user wants to reuse these RFID tags, the user need only remove the blocker tag, unlike "killed" tags, which would need to be reactivated. Since the blocker tag can shield the serial numbers of the tags from being read by RFID readers, it can also be used by attackers to disrupt the proper operation of an RFID system. A thief can also use a blocker tag to shield the tags attached to commodities in shops and take them out without being detected.

RFID System Using Symmetric-Key Cryptography

Symmetric-key cryptography, also called secret-key cryptography or single-key cryptography, uses a single key to perform both encryption and decryption. Due to the limited amount of resources available on an RFID chip, most available symmetric-key ciphers are too costly to implement on an RFID chip. For example, a typical implementation of the Advanced Encryption Standard (AES) needs about 2000–3000 gates. This is not appropriate for low-cost RFID tags; it is only possible to implement AES in high-end RFID tags. A successful case of implementing 128-bit AES on high-end RFID tags has been reported [46].

Using the Symmetric Key to Provide Authentication and Privacy

Symmetric-key cryptography can be applied to prevent tag cloning in RFID systems using a challenge-response protocol. For example, if a tag shares a secret key K with a reader and the tag wants to authenticate itself to the reader, it will first send its identity to the reader. The reader will then generate a nonce N and send it to the tag. The tag will use this nonce and the secret K to generate a hash code H = h(K, N) and send this hash code to the reader. The reader can also generate a hash code H' = h(K, N) and compare the two codes to verify the tag. Using this scheme, it is difficult for an attacker to clone the tags without knowing the secret keys.
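The following sketch models this challenge-response exchange in software. It is an illustration only, not a real tag implementation: HMAC-SHA256 is assumed as the keyed hash h(K, N), and the class and method names are ours.

```python
# Illustrative model of the symmetric-key challenge-response described above:
# the reader sends a nonce N, the tag returns H = h(K, N), and the reader
# recomputes the value and compares. HMAC-SHA256 stands in for the keyed hash h.
import hashlib
import hmac
import os

class Tag:
    def __init__(self, identity: str, key: bytes):
        self.identity = identity
        self._key = key                      # secret K shared with the reader

    def respond(self, nonce: bytes) -> bytes:
        return hmac.new(self._key, nonce, hashlib.sha256).digest()   # H = h(K, N)

class Reader:
    def __init__(self, known_tags: dict):
        self._keys = known_tags              # tag identity -> shared secret K

    def authenticate(self, tag: Tag) -> bool:
        nonce = os.urandom(16)               # fresh challenge N for every query
        response = tag.respond(nonce)
        expected = hmac.new(self._keys[tag.identity], nonce, hashlib.sha256).digest()
        return hmac.compare_digest(response, expected)

if __name__ == "__main__":
    key = os.urandom(32)
    genuine = Tag("tag-0001", key)
    reader = Reader({"tag-0001": key})
    print(reader.authenticate(genuine))                    # True: tag knows K
    print(reader.authenticate(Tag("tag-0001", b"guess")))  # False: clone without K
```

Because the nonce changes on every query, a recorded response cannot simply be replayed later, and a cloned tag that does not know K cannot answer a fresh challenge correctly.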
Different kinds of RFID tags based on symmetric-key cryptographic protocols have recently come into daily use. For example, an RFID device that uses this symmetric-key challenge-response protocol, called a digital signature transponder, has been introduced by Texas Instruments. This transponder can be built into cars to prevent car theft and can be used in wireless payment devices at filling stations.

One issue for RFID systems that use symmetric-key cryptography is key management. To authenticate itself to an RFID reader, each tag in the system should share a different secret key with the reader, and the reader needs to keep all the keys of these tags. When a tag wants to authenticate itself to an RFID reader, the reader needs to know the secret key shared between them. If the reader does not know the identification of the tag in advance, it cannot determine which key can be used to authenticate this tag. If the tag sends its identification to the reader before authentication so that the reader can look up the secret key, the privacy of the tag cannot be protected, since other readers can also obtain the identification of this tag.

One simple method to tackle this problem is key searching: the reader searches all the secret keys in its memory to find the right key for the tag before authentication. Several protocols have been proposed for key search in RFID systems. One general kind of key search scheme [47] has been proposed in which the tag first generates a random nonce N and hashes this N using its secret key K to generate a hash code. It then sends both this hash code and N to the reader. Using this nonce N, the reader generates the hash code with each of the secret keys it holds and compares the results with the received hash code from the tag. If there is a match, the reader has found the right key. In this scheme, since the nonce N is generated randomly every time, the privacy of the tag can be protected.

The problem with this approach is that if there are a large number of tags, the key searching will be very costly. To reduce the cost of key searching, a modification of this scheme was proposed [48]. In this approach [20], a scheme called a tree of secrets is used. Every tag is assigned to a leaf in the tree, and every node in the tree has its own secret key. In this way the key search cost for each tag can be reduced, but it adds some overlap to the sets of keys for each tag.

Another approach to reducing the cost of key searching is for the RFID tags and RFID reader to keep synchronized with each other. In this kind of approach, every tag maintains a counter of the number of times it has been queried by readers. For each reader query, the tag should respond with a different value. The reader also maintains counters for all the tags' responses and maintains a table of all the possible response values. Then, if the reader and tags can stay synchronized, the reader knows the approximate current counter value of each tag. When the reader receives a response from a tag, it can search the table and quickly identify this tag.
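A minimal sketch of the key-search scheme just described is shown below, again assuming HMAC-SHA256 as the hash; the names and sizes are illustrative. The linear scan over all candidate keys is exactly what makes this approach costly for large tag populations.

```python
# Sketch of the key-search scheme described above: the tag replies with (N, h_K(N)),
# and the reader tries every stored key until one reproduces the received hash.
# HMAC-SHA256 stands in for the hash h; identities and key sizes are arbitrary examples.
import hashlib
import hmac
import os

def tag_response(key: bytes):
    nonce = os.urandom(16)                   # fresh random N, so replies are unlinkable
    return nonce, hmac.new(key, nonce, hashlib.sha256).digest()

def reader_identify(stored_keys: dict, nonce: bytes, digest: bytes):
    # Linear search: O(number of tags) hash computations per identification.
    for identity, key in stored_keys.items():
        candidate = hmac.new(key, nonce, hashlib.sha256).digest()
        if hmac.compare_digest(candidate, digest):
            return identity
    return None                              # no stored key matches: unknown tag

if __name__ == "__main__":
    keys = {f"tag-{i:04d}": os.urandom(32) for i in range(1000)}
    nonce, digest = tag_response(keys["tag-0042"])
    print(reader_identify(keys, nonce, digest))   # prints "tag-0042"
```

The tree-of-secrets and synchronization variants mentioned above trade extra key material or per-tag state for replacing this linear scan with a much cheaper lookup.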
Other Symmetric-Key Cryptography-Based Approaches

In addition to the basic symmetric-key challenge-response protocol, some other symmetric-key cryptography-based approaches have been proposed recently to protect the security and privacy of RFID systems.

One approach is called YA-TRAP: Yet Another Trivial RFID Authentication Protocol, proposed by Tsudik [49]. In this approach, a technique for the inexpensive untraceable identification of RFID tags is introduced. Here untraceable means that it is computationally difficult to gather information about the identity of RFID tags from interactions with them. In YA-TRAP, only minimal communication between the reader and tags is needed for authentication, and the computational burden on the back-end server is very small.

The back-end server in the system is assumed to be secure and maintains all tag information. Each tag should be initialized with three values: Ki, T0, and Tmax. Ki is both the identifier and the cryptographic key for this tag. The size of Ki depends on the number of tags and the secure authentication requirement; in practice, 160 bits is enough. T0 is the initial timestamp of this tag. The value of T0 does not need to vary from tag to tag; this means that a group of tags can have the same T0. Tmax is the maximum timestamp value, and a group of tags can also share the same Tmax value. In addition, each tag has a seeded pseudorandom number generator.

YA-TRAP works as follows. First, each tag stores a timestamp Tt in its memory. When an RFID reader wants to interrogate an RFID tag, it sends the current timestamp Tr to this tag. On receiving Tr, the tag compares Tr with the timestamp value it stores and with Tmax. If Tr ≤ Tt or Tr > Tmax, the tag responds to the reader with a random value generated by its seeded pseudorandom number generator. Otherwise, the tag replaces Tt with Tr, calculates Hr = HMAC_Ki(Tt), and sends Hr to the reader. The reader then sends Tr and Hr to the back-end server. The server looks up its database to determine whether this is a valid tag. If it is not, the server sends a tag-error message to the reader. If it is a valid tag, the server sends the meta-ID of this tag, or simply a validity message, to the reader, according to the application requirements. Since the purpose of this protocol is to minimize the interaction between the reader and tags and to minimize the computational burden on the back-end server, it has some vulnerabilities. One of them is that an adversary can launch a DoS attack against a tag. For example, the attacker can send a timestamp t ≤ Tmax that is wildly inaccurate with respect to the current time. In this case, the tag will update its timestamp with the wrong time, and a legitimate reader cannot get access to this tag.

In Ref. [33], another approach called deterministic hash locks [50] was proposed. In this scheme, the security of RFID systems is based on a one-way hash function. During initialization, every tag in the system is given a meta-ID, which is the hash code of a random key. This meta-ID is stored in the tag's memory. Both the meta-ID and the random key are also stored in the back-end server. After initialization, all the tags enter the locked state.
While they remain in the locked state, tags respond only with their meta-ID when interrogated by an RFID reader. When a legitimate reader wants to unlock a tag, as shown in Figure 13.2, it first sends a request to the tag. After receiving the meta-ID from the tag, the reader sends this meta-ID to the back-end server, which searches its database using the meta-ID to retrieve the random key. The server sends this key to the reader, and the reader sends it on to the tag. The tag hashes the key and compares the hash code with its meta-ID; if the two values match, the tag unlocks itself and sends its actual identification to the reader. The tag then returns to the locked state to prevent hijacking by illegitimate readers. Since an illegitimate reader cannot contact the back-end server to get the random key, it cannot obtain the actual identification of the tag.

FIGURE 13.2 Tag unlock.

One problem with deterministic hash locks is that when the tag is queried, it responds with its meta-ID. Since the meta-ID of the tag is static and cannot change, the tag can be tracked easily. To solve this problem, Weis, Sarma, Rivest, and Engels proposed the Randomized Hash Locks protocol to prevent tracking of the tag. In this protocol, each tag is equipped not only with the one-way hash function but also with a random number generator. When the tag is queried by a reader, it uses the random number generator to generate a random number and hashes this random number together with its identification. The tag responds with this hash code and the random number. After receiving this response, the reader obtains all of the tags' identifications from the back-end server. Using these identifications, the reader performs a brute-force search, hashing the identification of each tag together with the random number and comparing the result with the received hash code. If there is a match, the reader knows the identification of the tag (a short illustrative sketch of this exchange is given below). In this approach, the tag's response does not depend on the request of the reader, which means that the tag is vulnerable to replay attack. To avoid this, Juels and Weis proposed a protocol called Improved Randomized Hash-Locks [51].

RFID System Using Public-key Cryptography

Symmetric-key cryptography can provide security for RFID systems, but it is better suited to closed environments: if the shared secret keys are leaked, the security of the RFID system is seriously compromised. Public-key cryptography is more suitable for open systems, since both RFID readers and tags can use their public keys to protect the security of the system. In addition, public-key cryptography not only prevents leakage of information to an eavesdropper during communication between reader and tags, it can also provide digital signatures for both readers and tags for authentication. In a public-key system, an RFID reader does not need to keep a secret key for every tag and does not need to search for the appropriate key for each tag as it does in a symmetric-key system. This reduces the system's burden of key management.
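As a concrete point of comparison before going further into public-key designs, the following minimal Python sketch illustrates the randomized hash-lock exchange just described. It is illustrative only: SHA-256 stands in for the generic one-way hash, and the identifiers and function names are assumptions of this sketch rather than part of the published protocol.

import hashlib
import os

def tag_reply(tag_id):
    # The tag picks a fresh random value r and answers with (r, h(ID || r)).
    r = os.urandom(16)
    return r, hashlib.sha256(tag_id + r).digest()

def reader_identify(all_tag_ids, r, digest):
    # The reader fetches every known ID from the back-end server and
    # brute-force searches for the one whose hash matches the reply.
    for tag_id in all_tag_ids:
        if hashlib.sha256(tag_id + r).digest() == digest:
            return tag_id
    return None

ids = [b"tag-0001", b"tag-0002", b"tag-0003"]
r, d = tag_reply(ids[1])
print(reader_identify(ids, r, d))  # -> b'tag-0002'

Because each reply is freshly randomized, an eavesdropper cannot link two replies to the same tag, but the reader-side search again grows linearly with the number of tags, and, as noted above, the reply does not depend on anything the reader sent, which is what leaves the basic scheme open to replay and motivates Improved Randomized Hash-Locks [51].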
Although \npublic-key cryptography has some advantages over \nsymmetric-key cryptography, it is commonly accepted \nthat public-key cryptography is computationally more \nexpensive than symmetric-key cryptography. Because of \nthe limitations of memory and computational power of the \nordinary RFID tags, it is difficult for the public-key \ncrypto graphy to be implemented in RFID systems. In \nrecent years, some research shows that some kinds of \npublic key-based cryptographies such as elliptic curve \ncryptography and hyperelliptic curve cryptography are \nfeasible to be implemented in high-end RFID tags [52] .\n Authentication with Public-Key Cryptography \n Basically, there are two different kinds of RFID tag \nauthentication methods using public-key cryptogra-\nphy: one is online authentication and the other is offline \nauthentication [53] .\n For the authentication of RFID tags in an online situ-\nation, the reader is connected with a database server. The \ndatabase server stores a large number of challenge-response \npairs for each tag, making it difficult for the attacker to \ntest all the challenge-response pairs during a limited time \nperiod. During the challenge-response pairs enrollment \nphase, the physical uncloneable function part of RFID \nsystems will be challenged by a Certification Authority \nwith a variety of challenges, and accordingly it will gener-\nate responses for these challenges. The physical unclone-\nable function is embodied in a physical object and can give \nresponses to the given challenges [54] . Then these gener-\nated challenge-response pairs will be stored in the database \nserver. \n In the authentication phase, when a reader wants to \nauthenticate a tag, first the reader will send a request to \nthe tag for its identification. After getting the ID of the tag, \nthe reader will search the database server to get a chal-\nlenge-response pair for this ID and send the challenge to \nthe tag. After receiving the challenge from the reader, the \ntag will challenge its physical uncloneable function to get \na response for this challenge and then send this response to \nthe reader. The reader will compare this received response \nwith the response stored in the database server. If the dif-\nference between these two responses is less than a certain \npredetermined threshold, the tag can pass the authentica-\ntion. Then the database server will remove this challenge-\nresponse pair for this ID. \n One paper [55] details how the authentication of RFID \ntags works in an offline situation using public key cryp-\ntography. To provide offline authentication for the tags, \na PUF-Certificate-Identify-based identification scheme \nis proposed. In this method, a standard identification \nscheme and a standard signature scheme are used. Then \nthe security of RFID systems depends on the security \nof the PUF, the standard identification scheme, and the \nstandard signature scheme. For the standard identification \nscheme, an elliptic curve discrete log based on Okamoto’s \nIdentification protocol [56] is used. This elliptic curve \ndiscrete log protocol is feasible to be implemented in the \nRFID tags. \n Identity-Based Cryptography Used \nin the RFID Networks \n An identity-based cryptographic scheme is a kind of \npublic-key-based approach that was first proposed by \nShamir [57] in 1984. 
To use identity-based cryptogra-\nphy in RFID systems, since both the RFID tags and the \nreader have their identities, it is convenient for them to \nuse their own identities to generate their public keys. \n" }, { "page_number": 251, "text": "PART | I Overview of System and Network Security: A Comprehensive Introduction\n218\nAn RFID system based on identity-based cryptogra-\nphy should be set up with the help of a PKG. When the \nreader and tags enter the system, each of them is allo-\ncated a unique identity stored in their memory. The \nprocess of key generation and distribution in the RFID \nsystem that uses identity-based cryptography is shown \nin Figure 13.3 and is outlined here: \n 1. PKG generates a “ master ” public key PU pkg and a \nrelated “ master ” private key PR pkg and saves them in \nits memory. \n 2. The RFID reader authenticates itself to the PKG with \nits identity ID re . \n 3. If the reader can pass the authentication, PKG gen-\nerates a unique private key PR re for the reader and \nsends this private key together with PU pkg to reader. \n 4. When an RFID tag enters the system, it authenticates \nitself to the PKG with its identity ID ta . \n 5. If the tag can pass the authentication, PKG generates \na unique private key PR ta for the tag and sends PR ta \ntogether with PU pkg and the identity of the reader \n ID re to the tag. \n After this process, the reader can know its private key \n PR re and can use PU pkg and its identity to generate its \npublic key. Every tag entered into the system can know \nits own private key and can generate a public key of its \nown and a public key of the reader. \n If an RFID tag is required to transmit messages to the \nreader in security, since the tag can generate the read-\ner’s public key PU re , it can use this key PU re to encrypt \nthe message and transmit this encrypted message to the \nreader. As shown in Figure 13.4 , after receiving the mes-\nsage from the tag, the reader can use its private key PR re \nto decrypt the message. Since only the reader can know \nits private key PR re , the security of the message can be \nprotected. \n Figure 13.5 illustrates the scheme for the reader to cre-\nate its digital signature and verify it. First, the reader will \nuse the message and the hash function to generate a hash \ncode, and then it uses its private key PR re to encrypt this \nhash code to generate the digital signature and attach it to \nthe original message and send both the digital signature \nand message to the tag. After receiving them, the RFID \ntag can use the public key of the reader PU re to decrypt \nthe digital signature to recover the hash code. By compar-\ning this hash code with the hash code generated from the \nmessage, the RFID tag can verify the digital signature. \n Figure 13.6 illustrates the scheme for the RFID tag to \ncreate its digital signature and verify it. In RFID systems, \nthe reader cannot know the identity of the tag before read-\ning it from the tag. The reader cannot generate the public \nkey of the tag, so the general protocol used in identity-\nbased networks cannot be used here. In our approach, \nfirst, the tag will use its identity and its private key PR ta \nto generate a digital signature. 
When the tag needs to authenticate itself to the reader, it adds this digital signature to its identity, encrypts both with the public key of the reader PU_re, and sends the result to the reader; only the reader can decrypt this ciphertext to recover the identity of the tag and the digital signature. Using the tag's identity, the reader can generate the tag's public key PU_ta and then use this public key to verify the digital signature.

As mentioned, the most important problem for the symmetric-key approach in RFID systems is key management. The reader needs a great deal of memory to store the secret keys of every tag in the system for message decryption. Moreover, when the RFID reader receives a message from a tag, it cannot tell which tag the message came from and therefore cannot tell which key to use to decrypt it; the reader must search all the keys until it finds the right one. In RFID systems using identity-based cryptography, every tag can use the public key of the reader to generate ciphertext that can be decrypted with the reader's private key, so the reader does not need to know the keys of the tags; all it needs to keep is its own private key.

In some RFID applications, such as epassports and visas, tag authentication is required. The symmetric-key approach, however, cannot provide digital signatures for RFID tags to authenticate themselves to RFID readers. By using an identity-based scheme, the tags can generate digital signatures using their private keys and store them in the tags. When the tags need to authenticate themselves to RFID readers, they can transmit these digital signatures to the reader, and the reader can verify them using the tags' public keys.

FIGURE 13.3 Key generation and distribution.
FIGURE 13.4 Message encryption.

In identity-based cryptography RFID systems, since the identities of the tags and the reader can be used to generate public keys, the PKG does not need to keep a key directory, which reduces its resource requirements. Another advantage of using identity-based cryptography in RFID systems is that the reader does not need to know the public keys of the tags in advance: if the reader wants to verify the digital signature of an RFID tag, it can read the identity of the tag and use the public key generated from that identity to verify the signature.

An inherent weakness of identity-based cryptography is the key escrow problem. In RFID systems that use identity-based cryptography, however, all the devices can belong to one company or organization, so the PKG can be highly trusted and well protected, and the risk posed by key escrow is reduced.

Another problem with identity-based cryptography is revocation. People typically use public information such as their names or home addresses to generate their public keys; if their private keys are compromised by an attacker, this public information cannot easily be changed, which makes it difficult to generate new public keys. In contrast, in RFID systems the identity of the tag is used to generate the public key.
If the private key of one tag is compromised, the system can allocate a new identity to the tag and use this new identity to generate a new private key for the tag with little effort.

FIGURE 13.5 A digital signature from a reader.
FIGURE 13.6 A digital signature from a tag.

REFERENCES

[1] S.A. Weis, Security and Privacy in Radio-Frequency Identification Devices.
[2] M. Langheinrich, RFID and Privacy.
[3] Auto-ID Center, Draft Protocol Specification for a Class 0 Radio Frequency Identification Tag, February 2003.
[4] K. Finkenzeller, RFID Handbook: Fundamentals and Applications in Contactless Smart Cards and Identification.
[5] P. Peris-Lopez, J.C. Hernandez-Castro, J. Estevez-Tapiador, A. Ribagorda, RFID systems: a survey on security threats and proposed solutions, in: 11th IFIP International Conference on Personal Wireless Communications – PWC06, Vol. 4217 of Lecture Notes in Computer Science, Springer-Verlag, September 2006, pp. 159–170.
[6] K. Finkenzeller, RFID Handbook, second ed., John Wiley & Sons.
[7] T. Phillips, T. Karygiannis, R. Huhn, Security standards for the RFID market, IEEE Security & Privacy (November/December 2005) 85–89.
[8] K. Finkenzeller, RFID Handbook, second ed., John Wiley & Sons.
[9] K. Finkenzeller, RFID Handbook, second ed., John Wiley & Sons.
[10] T. Phillips, T. Karygiannis, R. Huhn, Security standards for the RFID market, IEEE Security & Privacy (November/December 2005) 85–89.
[11] EPCglobal, www.epcglobalinc.org/, June 2005.
[12] P. Peris-Lopez, J.C. Hernandez-Castro, J. Estevez-Tapiador, A. Ribagorda, RFID systems: a survey on security threats and proposed solutions, in: 11th IFIP International Conference on Personal Wireless Communications – PWC06, Vol. 4217 of Lecture Notes in Computer Science, Springer-Verlag, September 2006, pp. 159–170.
[13] T. Phillips, T. Karygiannis, R. Huhn, Security standards for the RFID market, IEEE Security & Privacy (2005) 85–89.
[14] EPCglobal Tag Data Standards, Version 1.3.
[15] EPCglobal, www.epcglobalinc.org/, June 2005.
[16] Guidelines for Securing Radio Frequency Identification (RFID) Systems, Recommendations of the National Institute of Standards and Technology, NIST Special Publication 800-98.
[17] D.R. Thompson, N. Chaudhry, C.W. Thompson, RFID Security Threat Model.
[18] S. Weis, S. Sarma, R. Rivest, D. Engels, Security and privacy aspects of low-cost radio frequency identification systems, in: W. Stephan, D. Hutter, G. Muller, M. Ullmann (Eds.), International Conference on Security in Pervasive Computing – SPC 2003, Vol. 2802, Springer-Verlag, 2003, pp. 454–469.
[19] P. Peris-Lopez, J.C. Hernandez-Castro, J. Estevez-Tapiador, A. Ribagorda, RFID systems: a survey on security threats and proposed solutions, in: 11th IFIP International Conference on Personal Wireless Communications – PWC06, Vol. 4217 of Lecture Notes in Computer Science, Springer-Verlag, September 2006, pp. 159–170.
[20] D. Haehnel, W. Burgard, D. Fox, K. Fishkin, M. Philipose, Mapping and localization with RFID technology, International Conference on Robotics & Automation, 2004.
[21] D.R. Thompson, N. Chaudhry, C.W.
Thompson, RFID Security \nThreat Model. \n [22] D.R. Thompson, N. Chaudhry, C.W. Thompson, RFID Security \nThreat Model. \n [23] A. Juels, R.L. Rivest, M. Syzdlo, The blocker tag: selec-\ntive blocking of RFID tags for consumer privacy, in: V. Atluri \n(Ed.), 8th ACM Conference on Computer and Communications \nSecurity, 2003, pp. 103 – 111. \n [24] A. Juels, R.L. Rivest, M. Syzdlo, The blocker tag: selec-\ntive blocking of RFID tags for consumer privacy, in: V. Atluri \n(Ed.), 8th ACM Conference on Computer and Communications \nSecurity, 2003, pp. 103 – 111. \n [25] D.R. Thompson, N. Chaudhry, C.W. Thompson, RFID Security \nThreat Model. \n [26] D.R. Thompson, N. Chaudhry, C.W. Thompson, RFID Security \nThreat Model. \n [27] D.R. Thompson, N. Chaudhry, C.W. Thompson, RFID Security \nThreat Model. \n [28] F. Thornton, B. Haines, A.M. Das, H. Bhargava, A. Campbell, \nJ. Kleinschmidt, RFID Security. \n [29] C. Jechlitschek, A Survey Paper on Radio Frequency Identification \n(RFID) Trends. \n [30] S.H. Weingart, Physical Security Devices for Computer \nSubsystems: A Survey of Attacks and Defenses. \n [31] S.A. Weis, Security and Privacy in Radio-Frequency Identification \nDevices. \n [32] C. Jechlitschek, A Survey Paper on Radio Frequency \nIdentification (RFID) Trends. \n [33] C. Jechlitschek, A Survey Paper on Radio Frequency \nIdentification (RFID) Trends. \n [34] S.E. Sarma, S.A. Weis, D.W. Engels, RFID systems security and \nprivacy implications, Technical Report, MITAUTOID-WH-014, \nAutoID Center, MIT, 2002. \n [35] S. Inoue, H. Yasuura, RFID privacy using user-controllable \nuniqueness, in: RFID Privacy Workshop, MIT, November \n2003. \n [36] N. Good, J. Han, E. Miles, D. Molnar, D. Mulligan, L. Quilter, \nJ. Urban, D. Wagner, Radio frequency ID and privacy with infor-\nmation goods, in: Workshop on Privacy in the Electronic Society \n(WPES), 2004. \n [37] N. Good, J. Han, E. Miles, D. Molnar, D. Mulligan, L. Quilter, J. \nUrban, D. Wagner, Radio frequency ID and privacy with infor-\nmation goods, in: Workshop on Privacy in the Electronic Society \n(WPES), 2004. \n [38] A. Juels, Minimalist cryptography for low-cost RFID tags, in: \nC. Blundo, S. Cimato (Eds.), The Fourth International Conference \non Security in Communication Networks – SCN 2004, Vol. 3352 \nof Lecture Notes in Computer Science, Springer-Verlag, 2004, \npp. 149 – 164. \n [39] A. Juels, R. Pappu, Squealing euros: privacy protection in RFID-\nenabled banknotes. in: R. Wright (Ed.), Financial Cryptography \n ’ 03, vol. 2742, Springer-Verlag, 2003, pp. 103 – 121. \n [40] P. Golle, M. Jakobsson, A. Juels, P. Syverson, Universal \nre-encryption for mixnets, in: T. Okamoto (Ed.), RSA Conference-\nCryptographers ’ Track (CT-RSA), vol. 2964, 2004, pp. 163 – 178. \n [41] G. Ateniese, J. Camenisch, B. de Madeiros, Untraceable RFID \ntags via insubvertible encryption, in: 12th ACM Conference on \nComputer and Communication Security, 2005. \n [42] C. Floerkemeier, R. Schneider, M. Langheinrich, Scanning with \na Purpose Supporting the Fair Information Principles in RFID \nProtocols, 2004. \n [43] M.R. Rieback, B. Crispo, A. Tanenbaum, RFID Guardian: a \nbattery-powered mobile device for RFID privacy management, in: \nC. Boyd, J.M. Gonz´alez Nieto (Eds.), Australasian Conference \non Information Security and Privacy – ACISP 2005, Vol. 3574 \nof Lecture Notes in Computer Science, Springer-Verlag, 2005, \npp. 184 – 194. \n [44] M.R. Rieback, B. Crispo, A. 
Tanenbaum, RFID guardian: a \nbattery-powered mobile device for RFID privacy management, in: \nC. Boyd, J.M. Gonz´alez Nieto (Eds.), Australasian Conference \non Information Security and Privacy – ACISP 2005, vol. 3574 \nof Lecture Notes in Computer Science, Springer-Verlag, 2005, \npp. 184 – 194. \n [45] A. Juels, R.L. Rivest, M. Syzdlo, The blocker tag: selec-\ntive blocking of RFID tags for consumer privacy, in: V. Atluri \n(Ed.), 8th ACM Conference on Computer and Communications \nSecurity, 2003, pp. 103 – 111. \n [46] M. Feldhofer, S. Dominikus, J. Wolkerstorfer, Strong authentica-\ntion for RFID systems using the AES algorithm, in: M. Joye, J.-J. \nQuisquater (Eds.), Workshop on Cryptographic Hardware and \nEmbedded Systems CHES 04, Vol. 3156 of Lecture Notes in \nComputer Science, Springer-Verlag, 2004, pp. 357 – 370. \n [47] S. Weis, S. Sarma, R. Rivest, D. Engels, Security and privacy aspects \nof low-cost radio frequency identification systems, in: W. Stephan, \nD. Hutter, G. Muller, M. Ullmann (Eds.), International Conference \non Security in Pervasive computing-SPC 2003, vol. 2802, Springer-\nVerlag, 2003, pp. 454 – 469. \n [48] D. Molnar, D. Wagner, Privacy and security in library RFID: issues, \npractices, and architectures, in: B. Pfitzmann, P. McDaniel (Eds.), \n" }, { "page_number": 254, "text": "Chapter | 13 RFID Security\n221\nACM Conference on Communications and Computer Security, \nACM Press, 2004, pp. 210 – 219. \n [49] G. Tsudik, YA-TRAP: Yet another trivial RFID authentication proto-\ncol, in: Fourth Annual IEEE International Conference on Pervasive \nComputing and Communications Workshops (PERCOMW’06), \n2006, pp. 640 – 643. \n [50] S. Weis, S. Sarma, R. Rivest, D. Engels, Security and privacy aspects \nof low-cost radio frequency identification systems, in: W. Stephan, \nD. Hutter, G. Muller, M. Ullmann (Eds.), International Conference \non Security in Pervasive computing-SPC 2003, vol. 2802, Springer-\nVerlag, 2003, pp. 454 – 469. \n [51] A. Juels, S. Weis, Defining strong privacy for RFID, in: Pervasive \nComputing and Communications Workshops, 2007. \n [52] P. Tuyls , L. Batina , RFID tags for anticounterfeiting , in: \n D. Pointcheval (Ed.), Topics in Cryptology-CT-RSA 2006 , \n Springer-Verlag , 2006 . \n [53] L. Batina, J. Guajardo, T. Kerins, N. Mentens, P. Tuyls, \nI. Verbauwhede, Public-key cryptography for RFID-tags. in: \nPrinted handout of Workshop on RFID Security, ” RFIDSec06, \n2006, pp. 61 – 76. \n [54] P. Tuyls , L. Batina , RFID tags for anticounterfeiting , in: \n D. Pointcheval (Ed.), Topics in Cryptology-CT-RSA 2006 , \n Springer-Verlag , 2006 . \n [55] P. Tuyls , L. Batina , RFID tags for anticounterfeiting , in: \n D. Pointcheval (Ed.), Topics in Cryptology-CT-RSA 2006 , \n Springer-Verlag , 2006 . \n [56] T, Okamoto, (1992). Provably secure and practical identification \nschemes and corresponding signature schemes. In: E.F. Brickell \n(Ed.), Advances in Cryptology | CRYPTO’92, Vol. 740 of LNCS, \nSpringer-Verlag, 1992, pp. 31 – 53. \n [57] A. Shamir, Identity-based cryptosystems and signature scheme, \nAdvances in Cryptology: Proceedings of CRYPTO 84, LNCS, \n1984, pp. 47 – 53. 
\n" }, { "page_number": 255, "text": "This page intentionally left blank\n" }, { "page_number": 256, "text": " Managing Information \nSecurity \n Part II \n CHAPTER 14 Information Security Essentials for IT Managers: \nProtecting Mission-Critical Systems \n Albert Caballero \n CHAPTER 15 Security Management Systems \n Joe Wright and Jim Harmening \n CHAPTER 16 Information Technology Security Management \n Rahul Bhasker and Bhushan Kapoor \n CHAPTER 17 Identity Management \n Dr. Jean-Marc Seigneur and Dr. Tewfiq El Malika \n CHAPTER 18 Intrusion Prevention and Detection Systems \n Christopher Day \n CHAPTER 19 Computer Forensics \n Scott R. Ellis \n CHAPTER 20 Network Forensics \n Yong Guan \n CHAPTER 21 Firewalls \n Dr. Errin W. Fulp \n CHAPTER 22 Penetration Testing \n Sanjay Bavisi \n CHAPTER 23 What is Vulnerability Assessment? \n Almantas Kakareka \n" }, { "page_number": 257, "text": "This page intentionally left blank\n" }, { "page_number": 258, "text": "225\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Information Security Essentials for IT \nManagers: Protecting Mission-Critical \nSystems \n Albert Caballero \n Terremark Worldwide, Inc. \n Chapter 14 \n Information security involves the protection of organiza-\ntional assets from the disruption of business operations, \nmodification of sensitive data, or disclosure of propri-\netary information. The protection of this data is usually \ndescribed as maintaining the confidentiality, integrity, \nand availability (CIA) of the organization’s assets, oper-\nations, and information. \n 1. INFORMATION SECURITY ESSENTIALS \nFOR IT MANAGERS, OVERVIEW \n Information security management as a field is ever \nincreasing in demand and responsibility because most \norganizations spend increasingly larger percentages \nof their IT budgets in attempting to manage risk and \nmitigate intrusions, not to mention the trend in many \nenterprises of moving all IT operations to an Internet-\nconnected infrastructure, known as enterprise cloud com-\nputing. 1 For information security managers, it is crucial \nto maintain a clear perspective of all the areas of busi-\nness that require protection. Through collaboration with \nall business units, security managers must work security \ninto the processes of all aspects of the organization, from \nemployee training to research and development. Security \nis not an IT problem, it is a business problem. \n Information security means protecting information and \ninformation systems from unauthorized access, use, disclo-\nsure, disruption, modification, or destruction. 2 \n Scope of Information Security Management \n Information security is a business problem in the sense \nthat the entire organization must frame and solve security \nproblems based on its own strategic drivers, not solely on \ntechnical controls aimed to mitigate one type of attack. As \nidentified throughout this chapter, security goes beyond \ntechnical controls and encompasses people, technology, pol-\nicy, and operations in a way that few other business objec-\ntives do. The evolution of a risk-based paradigm, as opposed \nto a technical solution paradigm for security, has made it \nclear that a secure organization does not result from secur-\ning technical infrastructure alone. 
Furthermore, securing \nthe organization’s technical infrastructure cannot provide \nthe appropriate protection for these assets, nor will it protect \nmany other information assets that are in no way dependent \non technology for their existence or protection. Thus, the \norganization would be lulled into a false sense of security if \nit relied on protecting its technical infrastructure alone. 3 \n CISSP Ten Domains of Information Security \n In the information security industry there have been sev-\neral initiatives to attempt to define security management \nand how and when to apply it. The leader in certifying \ninformation security professionals is the Internet Security \nConsortium, with its CISSP (see sidebar, “ CISSP Ten \nDomains: Common Body of Knowledge ” ) certification. 4 \n 1 “ Cloud computing, the enterprise cloud, ” Terremark Worldwide Inc. \nWebsite, http://www.theenterprisecloud.com/ \n 2 “ Defi nition of information security, ” Wikipedia, http://en.wikipedia.\norg/wiki/Information_security \n 3 Richard A. Caralli, William R. Wilson, “The challenges of security \nmanagement,” Survivable Enterprise Management Team, Networked \nSystems Survivability Program, Software Engineering Institute, http://\nwww.cert.org/archive/pdf/ESMchallenges.pdf \n 4 “ CISSP Ten domains ” ISC2 Web site https://www.isc2.org/cissp/\ndefault.aspx \n" }, { "page_number": 259, "text": "PART | II Managing Information Security\n226\n 6 “ ISO 17799 security standards, ” ISO Web site, http://www.iso.org/\niso/support/faqs/faqs_widely_used_standards/widely_used_standards_\nother/information_security.htm \n 7 Saad Saleh AlAboodi, A New Approach for Assessing the Maturity \nof Information Security , CISSP \n 5 Micki, Krause, Harold F. Tipton, Information Security Management \nHandbook sixth edition CRC Press LLC \nIn defining required skills for information security \nmanagers, the ISC has arrived at an agreement on ten \ndomains of information security that is known as the \n Common Body of Knowledge (CBK). Every security \nmanager must understand and be well versed in all areas \nof the CBK. 5 \n In addition to individual certification there must be \nguidelines to turn these skills into actionable items that \ncan be measured and verified according to some inter-\nnational standard or framework. The most widely used \nstandard for maintaining and improving information \nsecurity is ISO/IEC 17799:2005. ISO 17799 (see Figure \n14.1 ) establishes guidelines and principles for initiating, \nimplementing, maintaining, and improving information \nsecurity management in an organization. 6 \n A new and popular framework to use in conjunc-\ntion with the CISSP CBK and the ISO 17799 guidelines \nis ISMM. ISMM is a framework (see Figure 14.2 ) that \ndescribes a five-level evolutionary path of increasingly \norganized and systematically more mature security lay-\ners. It is proposed for the maturity assessment of infor-\nmation security management and the evaluation of the \nlevel of security awareness and practice at any organi-\nzation, whether public or private. Furthermore, it helps \nus better understand where, and to what extent, the three \nmain processes of security (prevention, detection, and \nrecovery) are implemented and integrated. \n ISMM helps us better understand the application of \ninformation security controls outlined in ISO 17799. 
\n Figure 14.3 shows a content matrix that defines the scope \nof applicability between various security controls men-\ntioned in ISO 17799’s ten domains and the correspond-\ning scope of applicability on the ISMM Framework. 7 \n ● Access control. Methods used to enable administra-\ntors and managers to define what objects a subject can \naccess through authentication and authorization, pro-\nviding each subject a list of capabilities it can perform \non each object. Important areas include access control \nsecurity models, identification and authentication tech-\nnologies, access control administration, and single sign-\non technologies. \n ● Telecommunications and network security. Examination \nof internal, external, public, and private network com-\nmunication systems, including devices, protocols, and \nremote access. \n ● Information security and risk management. Including \nphysical, technical, and administrative controls surround-\ning organizational assets to determine the level of protec-\ntion and budget warranted by highest to lowest risk. The \ngoal is to reduce potential threats and money loss. \n ● Application security. Application security involves the con-\ntrols placed within the application programs and operating \nsystems to support the security policy of the organization \nand measure its effectiveness. Topics include threats, appli-\ncations development, availability issues, security design \nand vulnerabilities, and application/data access control. \n ● Cryptography. The use of various methods and tech-\nniques such as symmetric and asymmetric encryption \nto achieve desired levels of confidentiality and integrity. \nImportant areas include encryption protocols and appli-\ncations and Public Key Infrastructures. \n ● Security architecture and design. This area covers the \nconcepts, principles, and standards used to design and \nimplement secure applications, operating systems, and \nall platforms based on international evaluation criteria \nsuch as Trusted Computer Security Evaluation Criteria \n(TCSEC) and Common Criteria. \n ● Operations security . Controls over personnel, hardware \nsystems, and auditing and monitoring techniques such as \nmaintenance of AV, training, auditing, and resource pro-\ntection; preventive, detective, corrective, and recovery \ncontrols; and security and fault-tolerance technologies. \n ● Business continuity and disaster recovery planning. The \nmain purpose of this area is to preserve business opera-\ntions when faced with disruptions or disasters. Important \naspects are to identify resource values, perform a busi-\nness impact analysis, and produce business unit priori-\nties, contingency plans, and crisis management. \n ● Legal, \nregulatory, \ncompliance, \nand \ninvestigations. \nComputer crime, government laws and regulations, and \ngeographic locations will determine the types of actions \nthat constitute wrongdoing, what is suitable evidence, \nand what type of licensing and privacy laws your organ-\nization must abide by. \n ● Physical (environmental) security. Concerns itself with \nthreats, risks, and countermeasures to protect facili-\nties, hardware, data, media, and personnel. Main topics \ninclude restricted areas, authorization models, intrusion \ndetection, fire detection, and security guards. 
CISSP Ten Domains: Common Body of Knowledge

FIGURE 14.1 ISO 17799:2005 security model. 8
FIGURE 14.2 ISMM framework. 9

8 "ISO 17799 security standards," ISO Web site, http://www.iso.org/iso/support/faqs/faqs_widely_used_standards/widely_used_standards_other/information_security.htm
9 Saad Saleh AlAboodi, A New Approach for Assessing the Maturity of Information Security, CISSP

What is a Threat?

Threats to information systems come in many flavors, some with malicious intent and others with supernatural powers or unexpected surprises. Threats can be deliberate acts of espionage, information extortion, or sabotage, as in many targeted attacks between foreign nations; more often than not, however, the biggest threats are forces of nature (hurricane, flood) or acts of human error or failure. It is easy to become consumed in attempting to anticipate and mitigate every threat, but this is simply not possible. Threat agents are threats only when they are provided the opportunity to take advantage of a vulnerability, and ultimately there is no guarantee that the vulnerability will be exploited. Therefore, determining which threats are important can only be done in the context of your organization. The process by which a threat can actually cause damage to your information assets is as follows: a threat agent gives rise to a threat that exploits a vulnerability and can lead to a security risk that can damage your assets and cause an exposure. This can be counter-measured by a safeguard that directly affects the threat agent. Figure 14.4 shows the building blocks of the threat process.

Common Attacks

Threats are exploited with a variety of attacks, some technical, others not so much. Organizations that focus on the technical attacks and neglect items such as policies and procedures or employee training and awareness are setting information security up for failure. The mantra that the IT department, or even the security department, by itself can secure an organization is as antiquated as black-and-white television. Most threats today are a mixed blend of automated information gathering, social engineering, and combined exploits, giving the perpetrator endless vectors through which to gain access. Examples of attacks range from a highly technical remote exploit over the Internet, to social-engineering an administrative assistant into resetting a password, to simply walking through an unprotected door in the back of your building. All scenarios have the potential to be equally devastating to the integrity of the organization.
Some of the most common attacks are briefly described in the sidebar, "Common Attacks." 10

FIGURE 14.3 A content matrix for ISO 17799 and its scope of applicability (the ten ISO 17799 domains and their subdomains mapped to ISMM Layers 1 through 5).
FIGURE 14.4 The threat process.

10 Symantec Global Internet Security Threat Report, Trends for July–December 07, Volume XII, published April 2008, http://eval.symantec.com/mktginfo/enterprise/white_papers/b-whitepaper_internet_security_threat_report_xiii_04-2008.en-us.pdf

Common Attacks

● Malicious code (malware). Malware is a broad category; however, it is typically software designed to infiltrate or damage a computer system without the owner's informed consent. As shown in Figure 14.5, the most commonly identifiable types of malware are viruses, worms, backdoors, and Trojans. Particularly difficult to identify are rootkits, which alter the kernel of the operating system.
● Social engineering. The art of manipulating people into performing actions or divulging confidential information. Similar to a confidence trick or simple fraud, the term typically applies to trickery to gain information or computer system access; in most cases, the attacker never comes face to face with the victim.
● Industrial espionage. Industrial espionage describes activities such as theft of trade secrets, bribery, blackmail, and technological surveillance, as well as spying on commercial organizations and sometimes governments.
● Spam, phishing, and hoaxes. Spamming and phishing (see Figure 14.6), although different, often go hand in hand. Spamming is the abuse of electronic messaging systems to indiscriminately send unsolicited bulk messages, many of which contain hoaxes or other undesirable content such as links to phishing sites. Phishing is the criminally fraudulent process of attempting to acquire sensitive information such as usernames, passwords, and credit-card details by masquerading as a trustworthy entity in an electronic communication.
● Denial-of-service (DoS) and distributed denial-of-service (DDoS). These are attempts to make a computer resource unavailable to its intended users. Although the means to carry out, motives for, and targets of a DoS attack may vary, it generally consists of the concerted, malevolent efforts of a person or persons to prevent an Internet site or service from functioning efficiently or at all, temporarily or indefinitely.
● Botnets. The term botnet (see Figure 14.7) can be used to refer to any group of bots, or software robots, such as IRC bots, but the word generally refers to a collection of compromised computers (called zombies) running software, usually installed via worms, Trojan horses, or backdoors, under a common command-and-control infrastructure. The majority of these computers run Microsoft Windows operating systems, but other operating systems can be affected.

FIGURE 14.5 Infections by malicious code type, CSI/FBI report, 2008. 11
FIGURE 14.6 Unique brands phished by industry sectors, CSI/FBI report, 2008. 12
FIGURE 14.7 Botnet activity, CSI/FBI report, 2008. 13

11 Robert Richardson, "2008 CSI Computer Crime & Security Survey" (the latest results from the longest-running project of its kind), http://i.cmpnet.com/v2.gocsi.com/pdf/CSIsurvey2008.pdf
12 Robert Richardson, "2008 CSI Computer Crime & Security Survey" (the latest results from the longest-running project of its kind), http://i.cmpnet.com/v2.gocsi.com/pdf/CSIsurvey2008.pdf
13 Robert Richardson, "2008 CSI Computer Crime & Security Survey" (the latest results from the longest-running project of its kind), http://i.cmpnet.com/v2.gocsi.com/pdf/CSIsurvey2008.pdf
Impact of Security Breaches

The impact of security breaches on most organizations can be devastating; however, it is not just dollars and cents that are at stake. Aside from the financial burden of having to deal with a security incident, especially if it leads to litigation, other factors could severely damage an organization's ability to operate or damage its reputation beyond recovery. Some of the preliminary key findings from the 2008 CSI/FBI Security Report 14 (see Figure 14.8) include:

● Financial fraud cost organizations the most, with an average reported loss of close to $500,000.
● The second most expensive activity was dealing with bots within the network, reported to cost organizations an average of nearly $350,000.
● Virus incidents occurred most frequently, reported at almost half (49%) of respondent organizations.

Some things to consider:

● How much would it cost your organization if your ecommerce Web server farm went down for 12 hours?
● What if your mainframe database that houses your reservation system was not accessible for an entire afternoon?
● What if your Web site was defaced and rerouted all your customers to a site infected with malicious Java scripts?
● Would any of these scenarios significantly impact your organization's bottom line?

FIGURE 14.8 2008 CSI/FBI Security Survey results (losses in thousands of dollars; 144 respondents in 2008). 15

14 Robert Richardson, "2008 CSI Computer Crime & Security Survey" (the latest results from the longest-running project of its kind), http://i.cmpnet.com/v2.gocsi.com/pdf/CSIsurvey2008.pdf
15 Robert Richardson, "2008 CSI Computer Crime & Security Survey" (the latest results from the longest-running project of its kind), http://i.cmpnet.com/v2.gocsi.com/pdf/CSIsurvey2008.pdf

2. PROTECTING MISSION-CRITICAL SYSTEMS

The IT core of any organization is its mission-critical systems. These are systems without which the mission of the organization, whether building aircraft carriers for the U.S. military or packaging Twinkies to deliver to food markets, could not operate. The major components of protecting these systems are detailed throughout this chapter; however, with special emphasis on the big picture an information security manager must keep in mind, there are some key components that are crucial for the success and continuity of any organization. These are information assurance, information risk management, defense in depth, and contingency planning.

Information Assurance

Information assurance is achieved when information and information systems are protected against attacks through the application of security services such as availability, integrity, authentication, confidentiality, and nonrepudiation. The application of these services should be based on the protect, detect, and react paradigm. This means that in addition to incorporating protection mechanisms, organizations need to expect attacks and include attack detection tools and procedures that allow them to react to and recover from these unexpected attacks.
16 \n Information Risk Management \n Risk is, in essence, the likelihood of something going \nwrong and damaging your organization or information \n 16 “Defense in Depth: A practical strategy for achieving Information \nAssurance in today’s highly networked environments,” \n" }, { "page_number": 265, "text": "PART | II Managing Information Security\n232\nassets. Due to the ramifications of such risk, an organiza-\ntion should try to reduce the risk to an acceptable level. \nThis process is known as information risk management . \nRisk to an organization and its information assets, simi-\nlar to threats, comes in many different forms. Some of \nthe most common risks and/or threats are: \n ● Physical damage. Fire, water, vandalism, power loss \nand natural disasters. \n ● Human interaction. Accidental or intentional action \nor inaction that can disrupt productivity. \n ● Equipment malfunctions. Failure of systems and \nperipheral devices. \n ● Internal or external attacks. Hacking, cracking, and \nattacking. \n ● Misuse of data . Sharing trade secrets; fraud, \nespionage, and theft. \n ● Loss of data. Intentional or unintentional loss of \ninformation through destructive means. \n ● Application error. Computation errors, input errors, \nand buffer overflows. \n The idea of risk management is that threats of any \nkind must be identified, classified, and evaluated to cal-\nculate their damage potential. 17 This is easier said than \ndone. \n Administrative, Technical, and Physical \nControls \n For example, administrative, technical, and physical con-\ntrols, are as follows: \n ● Administrative controls consist of organizational \npolicies and guidelines that help minimize the expo-\nsure of an organization. They provide a framework \nby which a business can manage and inform its \npeople how they should conduct themselves while \nat the workplace and provide clear steps employees \ncan take when they’re confronted with a potentially \nrisky situation. Some examples of administrative \ncontrols include the corporate security policy, pass-\nword policy, hiring policies, and disciplinary policies \nthat form the basis for the selection and implementa-\ntion of logical and physical controls. Administrative \ncontrols are of paramount importance because tech-\nnical and physical controls are manifestations of the \nadministrative control policies that are in place. \n ● Technical controls use software and hardware \nresources to control access to information and \ncomputing systems, to help mitigate the potential \nfor errors and blatant security policy violations. \nExamples of technical controls include passwords, \nnetwork- and host-based firewalls, network intrusion \ndetection systems, and access control lists and data \nencryption. Associated with technical controls is the \n Principle of Least Privilege , which requires that an \nindividual, program, or system process is not granted \nany more access privileges than are necessary to \nperform the task. \n ● Physical controls monitor and protect the physical \nenvironment of the workplace and computing \nfacilities. They also monitor and control access to \nand from such facilities. Separating the network and \nworkplace into functional areas are also physical \ncontrols. An important physical control is also \nseparation of duties, which ensures that an individual \ncannot complete a critical task by herself. \n Risk Analysis \n During risk analysis there are several units that can help \nmeasure risk. 
Before risk can be measured, though, the organization must identify the vulnerabilities and threats against its mission-critical systems in terms of business continuity. During risk analysis, an organization evaluates the cost of each security control that helps mitigate the risk; if the control is cost effective relative to the exposure of the organization, the control is put in place. The measure of risk can be determined as a product of threat, vulnerability, and asset values; in other words:

Risk = Asset × Threat × Vulnerability

There are two primary types of risk analysis: quantitative and qualitative. Quantitative risk analysis attempts to assign meaningful numbers to all elements of the risk analysis process. It is recommended for large, costly projects that require exact calculations and is typically performed to examine the viability of a project's cost or time objectives. Quantitative risk analysis provides answers to three questions that cannot be addressed with deterministic risk and project management methodologies such as traditional cost estimating or project scheduling 18:

● What is the probability of meeting the project objective, given all known risks?
● How much could the overrun or delay be, and therefore how much contingency do we need for the organization's desired level of certainty?
● Where in the project is the most risk, given the model of the project and the totality of all identified and quantified risks?

Qualitative risk analysis does not assign numerical values but instead opts for general categorization by severity levels. Where little or no numerical data is available for a risk assessment, the qualitative approach is the most appropriate. The qualitative approach does not require heavy mathematics; instead, it relies on the people participating and their backgrounds. Qualitative analysis enables classification of risk that is determined by people's wide experience and knowledge captured within the process. Ultimately it is not an exact science, so the process counts on expert opinions for its base assumptions. The assessment process uses a structured and documented approach and agreed likelihood and consequence evaluation tables. It is also quite common to calculate risk as a single loss expectancy (SLE) or annual loss expectancy (ALE) by project or business function (a short worked example is given below).

17 Shon Harris, All in One CISSP Certification Exam Guide, 4th Edition, McGraw-Hill Companies.
18 Lionel Galway, Quantitative Risk Analysis for Project Management: A Critical Review, WR-112-RC, February 2004, http://www.rand.org/pubs/working_papers/2004/RAND_WR112.pdf

Defense in Depth

The principle of defense in depth is that layered security mechanisms increase the security of the system as a whole. If an attack causes one security mechanism to fail, other mechanisms may still provide the necessary security to protect the system. 19 This is a process that involves people, technology, and operations as key components to its success; however, those are only part of the picture. These organizational layers are difficult to translate into specific technological layers of defenses, and they leave out areas such as security monitoring and metrics.
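To make this arithmetic concrete, here is a short worked example of the quantitative measures just described, written as a minimal Python sketch. It is illustrative only: the asset value, exposure factor, and occurrence rate are invented numbers, and SLE and ALE are computed with the standard definitions (SLE = asset value × exposure factor, ALE = SLE × annualized rate of occurrence), which the text mentions but does not spell out.

def single_loss_expectancy(asset_value, exposure_factor):
    # SLE: expected loss from a single occurrence of the threat.
    return asset_value * exposure_factor

def annual_loss_expectancy(sle, annual_rate_of_occurrence):
    # ALE: expected loss per year.
    return sle * annual_rate_of_occurrence

def risk_score(asset_value, threat_likelihood, vulnerability_likelihood):
    # Relative score following the product in the text:
    # Risk = Asset x Threat x Vulnerability, with the last two expressed as
    # likelihoods between 0 and 1.
    return asset_value * threat_likelihood * vulnerability_likelihood

# Invented example: an ecommerce Web server farm valued at $500,000; a worm
# outbreak is assumed to destroy 40% of its value and to occur about once
# every two years (annualized rate of occurrence = 0.5).
sle = single_loss_expectancy(500_000, 0.40)   # $200,000 per incident
ale = annual_loss_expectancy(sle, 0.5)        # $100,000 per year
score = risk_score(500_000, 0.5, 0.4)
print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}, risk score = {score:,.0f}")

A control is cost effective in the sense used above when it costs less per year than the ALE it removes; that comparison is exactly the control-selection step described in the risk analysis discussion.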
Figure \n14.9 shows a mind map that organizes the major catego-\nries from both the organizational and technical aspects \nof defense in depth and takes into account people, poli-\ncies, monitoring, and security metrics. \n Contingency Planning \n Contingency planning is necessary in several ways for \nan organization to be sure it can withstand some sort of \nsecurity breach or disaster. Among the important steps \nrequired to make sure an organization is protected and \nable to respond to a security breach or disaster are busi-\nness impact analysis, incident response planning, disas-\nter recovery planning, and business continuity planning. \nThese contingency plans are interrelated in several ways \nand need to stay that way so that a response team can \nchange from one to the other seamlessly if there is a \nneed. Figure 14.10 shows the relationship between the \nfour types of contingency plans with the major catego-\nries defined in each. \n Business impact analysis must be performed in every \norganization to determine exactly which business pro-\ncess is deemed mission-critical and which processes \nwould not seriously hamper business operations should \nthey be unavailable for some time. An important part of \na business impact analysis is the recovery strategy that is \nusually defined at the end of the process. If a thorough \nbusiness impact analysis is performed, there should be a \nclear picture of the priority of each organization’s high-\nest-impact, therefore risky, business processes and assets \nas well as a clear strategy to recover from an interruption \nin one of these areas. 20 \n An Incident Response (IR) Plan\n It is a detailed set of processes and procedures that antic-\nipate, detect, and mitigate the impact of an unexpected \nevent that might compromise information resources \nand assets. Incident response plans are composed of six \nmajor phases: \n 1. Preparation. Planning and readying in the event of a \nsecurity incident. \n 2. Identification. To identify a set of events that have \nsome negative impact on the business and can be \nconsidered a security incident. \n 3. Containment. During this phase the security incident \nhas been identified and action is required to mitigate \nits potential damage. \n 4. Eradication. After it’s contained, the incident must \nbe eradicated and studied to make sure it has been \nthoroughly removed from the system. \n 5. Recovery. Bringing the business and assets involved \nin the security incident back to normal operations. \n 6. Lessons learned. A thorough review of how the \nincident occurred and the actions taken to respond \nto it where the lessons learned get applied to future \nincidents \n 19 OWASP Defi nition of Defense in Depth http://www.owasp.org/\nindex.php/Defense_in_depth \n 20 M. E. Whitman, H. J. Mattord, Management of Information Security , \nCourse Technology, 2nd Edition, March 27, 2007. 
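The six phases above are sequential, which makes an incident easy to track in a simple handling tool. The following minimal Python sketch is not part of the referenced plans; the phase names mirror the list above, but the tracking logic and naming are assumptions of this sketch.

from enum import Enum

class IRPhase(Enum):
    PREPARATION = 1
    IDENTIFICATION = 2
    CONTAINMENT = 3
    ERADICATION = 4
    RECOVERY = 5
    LESSONS_LEARNED = 6

def next_phase(current):
    # Advance an incident to the next phase of the plan; stay at the final phase.
    if current is IRPhase.LESSONS_LEARNED:
        return current
    return IRPhase(current.value + 1)

# Example: walk a single incident through the whole plan, logging each step.
phase = IRPhase.PREPARATION
while True:
    print(phase.name.replace("_", " ").title())
    if phase is IRPhase.LESSONS_LEARNED:
        break
    phase = next_phase(phase)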
\n" }, { "page_number": 267, "text": "PART | II Managing Information Security\n234\nSystem Security\nAdministration\nPhysical Security\nTraining & Awareness\nPolicies & Procedures\nFacilities\nCountermeasures\nPersonal Security\nIA Architecture\nIA Criteria (security, \nInteroperability, PKI)\nAcquisition/Integration of \nEvaluated Products\nSystem Risk\nAssessment\nSecurity Policy\nCertification and \nAccreditation\nKey Management\nReadiness Assessments\nA5W&R\nRecovery\n& Reconstitution\nSecurity Management\nMeasuring Effectiveness\nAutomating Metrics\nDesigning Security\nScorecards\nVisualization\nAnalysis Techniques\nTechnical Controls\nPhysical Controls\nRisk Management\nSecurity Metrics\nWeb and \nApplication Security\nWeb Communication\nWireless Security\nBusiness\nCommunications\nSecurity\nNetwork-based Security\nPublic and Private\nInternet Connections\nIntranet and Extranet\nCommunications\nVirtual Private Networks\n(VPNs)\nSystems and\nNetwork Security\nData Security\nPhysical Security\nFacility Requirements\nIdentification\nAuthentication\nAuthorization\nOS Hardening\nPatch Management\nAntivirus\nDuty Encryption\nSoftware\nIntrusion Detection\nSystems\nBackup and Restore\nCapabilities\nSystem Event\nLogging\nFirewalls\nSniffers and Packet\nRecording Tools\nIntrusion Detection\nSystems\nAnomaly Detection\nSystems\nFirewalls\nApplication Layer\nFirewalls\nAlert Correlation and\nAutomation\nIntrusion Prevention\nSystems\nCommon Protocols\nSecurity Issues\nCommon Topologies\nEnhancing Security\nControls\nSatellite\nCommunications\nAssessing Wireless\nSecurity\nPhysical Access Control\nData Center Security\nPersonnel Practices\nMobile Security\nData Classification\nAccess Control Models\nRoles and\nResponsibilities\nHost-based Security\nWeb Application\nDefenses\nApplication Security\nWeb Security\nProtocols\nWeb Security\nActive Content\nAdministrative Controls\nSecurity Monitoring\nand Effectiveness\nPeople\nTechnology\nOperations\nSecurity Monitoring\nMechanisms\nIR and Forensics\nIntelligent Outsourcing\nValidating Security\nEffectiveness\nDefense in Depth\n FIGURE 14.9 Defense-in-depth mind map. \n" }, { "page_number": 268, "text": "Chapter | 14 Information Security Essentials for IT Managers: Protecting Mission-Critical Systems\n235\n When a threat becomes a valid attack, it is classified \nas an information security incident if 21 : \n ● It is directed against information assets \n ● It has a realistic chance of success \n ● It threatens the confidentiality, integrity, or \navailability of information assets \n Business Continuity Planning (BCP) \n It ensures that critical business functions can continue dur-\ning a disaster and is most properly managed by the CEO \nof the organization. The BCP is usually activated and \nexecuted concurrently with disaster recovery planning \n(DRP) when needed and reestablishes critical functions at \nalternate sites (DRP focuses on reestablishment at the pri-\nmary site). BCP relies on identification of critical business \nfunctions and the resources to support them using several \ncontinuity strategies, such as exclusive-use options like \nhot, warm, and cold sites or shared-use options like time-\nshare, service bureaus, or mutual agreements. 22 \n Disaster recovery planning is the preparation for and \nrecovery from a disaster. 
Whether natural or manmade, it is an incident that has become a disaster because the organization is unable to contain or control its impact, or the level of damage or destruction from the incident is so severe that the organization is unable to recover quickly. The key role of DRP is defining how to reestablish operations at the site where the organization is usually located.23 Key points in a properly designed DRP are:

● Clear delegation of roles and responsibilities
● Execution of the alert roster and notification of key personnel
● Clear establishment of priorities
● Documentation of the disaster
● Action steps to mitigate the impact
● Alternative implementations for various system components
● Regular testing of the DRP

FIGURE 14.10 The relationship between the four types of contingency plans (business impact analysis, incident response planning, disaster recovery planning, and business continuity planning), with the major categories defined in each.

21 M. E. Whitman and H. J. Mattord, Management of Information Security, 2nd Edition, Course Technology, March 27, 2007.
22 M. E. Whitman and H. J. Mattord, Management of Information Security, 2nd Edition, Course Technology, March 27, 2007.
23 M. E. Whitman and H. J. Mattord, Management of Information Security, 2nd Edition, Course Technology, March 27, 2007.

3. INFORMATION SECURITY FROM THE GROUND UP

The core concepts of information security management and protecting mission-critical systems have now been explained. How do you actually apply these concepts to your organization from the ground up? You literally start at the ground (physical) level and work your way up to the top (application) level. This model can be applied to many IT frameworks, ranging from networking models such as the OSI or TCP/IP stacks to operating systems or to other problems such as organizational information security and protecting mission-critical systems.

There are many areas of security, all of which are interrelated. You can have an extremely hardened system running your ecommerce Web site and database; however, if physical access to the system is obtained by the wrong person, a simple yanking of the right power plug can be game over. In other words, to think that any of the following components is not important to the overall security of your organization is to provide malicious attackers the only thing they need to be successful: the path of least resistance. 
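Because the "ground up" model is essentially a checklist of layers, the path-of-least-resistance point can be made concrete in a few lines. The sketch below (Python) is purely illustrative: the layer names mirror the areas listed in the pages that follow, while the inventory and control names are invented.

# Hypothetical sketch: flag security layers with no documented controls.
# The layer names mirror the areas covered in the rest of this chapter;
# the example inventory below is invented.

LAYERS = [
    "physical security",
    "data security",
    "systems and network security",
    "business communications security",
    "wireless security",
    "web and application security",
    "security policies and procedures",
    "security training and awareness",
]

def weakest_layers(controls_by_layer: dict[str, list[str]]) -> list[str]:
    """Return the layers with no documented controls: the path of least resistance."""
    return [layer for layer in LAYERS if not controls_by_layer.get(layer)]

if __name__ == "__main__":
    inventory = {
        "physical security": ["badge access", "cameras"],
        "data security": ["database encryption"],
        "systems and network security": ["firewall", "patch management"],
        "web and application security": [],  # hardened servers, but no application review
    }
    for layer in weakest_layers(inventory):
        print(f"No controls recorded for: {layer}")

An attacker does not need to defeat your strongest layer, only to find the one you left empty, which is exactly what a gap report of this kind surfaces.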
The next parts of this \nchapter each contain an overview of the technologies \n(see Figure 14.11 ) and processes of which information \nsecurity managers must be aware to successfully secure \nthe assets of any organization: \n ● Physical security \n ● Data security \n ● Systems and network security \n ● Business communications security \n ● Wireless security \n ● Web and application security \n ● Security policies and procedures \n ● Security employee training and awareness \n Physical Security \n Physical security as defined earlier concerns itself with \nthreats, risks, and countermeasures to protect facili-\nties, hardware, data, media and personnel. Main topics \ninclude restricted areas, authorization models, intrusion \ndetection, fire detection, and security guards. Therefore \nphysical safeguards must be put in place to protect the \norganization from damaging consequences. The security \nrule defines physical safeguards as “ physical measures, \npolicies, and procedures to protect a covered entity’s \nelectronic information systems and related buildings \nand equipment, from natural and environmental hazards, \nand unauthorized intrusion. ” 24 A brief description of the \nbaseline requirements to implement these safeguards at \nyour facility follow. \n0\n10\n20\n30\n40\n50\n60\n70\n80\n90\n100\n97\n94\n85\n80\n71\n53\n53\n34\nEndpoint/NAC\nApp firewalls\nFile encryption (storage)\nEncryption in transit\nAnti-spyware\nVPN\nFirewalls\nAnti-virus\n2008: 521 Respondents\n2006\n2007\n2008\n FIGURE 14.11 Security technologies used by organizations, CSI/FBI report, 2008. \n 24 45 C.F.R. § 164.310 Physical safeguards, http://law.justia.\ncom/us/cfr/title45/45-1.0.1.3.70.3.33.5.html \n" }, { "page_number": 270, "text": "Chapter | 14 Information Security Essentials for IT Managers: Protecting Mission-Critical Systems\n237\n Facility Requirements \n Entering and accessing information systems to any \ndegree within any organization must be controlled. \nWhat’s more, it is necessary to understand what is \nallowed and what’s not; if those parameters are clearly \ndefied, the battle is half won. Not every building is a \nhigh-security facility, so it ’ s understandable that some \nof the following items might not apply to your organiza-\ntion; however, there should be a good, clear reason as to \nwhy they don’t. Sample questions to consider: 25 \n ● Are policies and procedures developed and imple-\nmented that address allowing authorized and limiting \nunauthorized physical access to electronic informa-\ntion systems and the facility or facilities in which \nthey are housed? \n ● Do the policies and procedures identify individuals \n(workforce members, business associates, \ncontractors, etc.) with authorized access by title and/\nor job function? \n ● Do the policies and procedures specify the methods \nused to control physical access, such as door locks, \nelectronic access control systems, security officers, \nor video monitoring? \n The facility access controls standard has four imple-\nmentation specifications 26 : \n ● Contingency operations . Establish (and implement \nas needed) procedures that allow facility access in \nsupport of restoration of lost data under the disaster \nrecovery plan and emergency mode operations plan \nin the event of an emergency. \n ● Facility security plan . Implement policies and \nprocedures to safeguard the facility and the \nequipment therein from unauthorized physical \naccess, tampering, and theft. \n ● Access control and validation procedures. 
Implement procedures to control and validate a person's access to facilities based on her role or function, including visitor control and control of access to software programs for testing and revision.
● Maintenance records. Implement policies and procedures to document repairs and modifications to the physical components of a facility that are related to security (for example, hardware, walls, doors, and locks).

25 45 C.F.R. § 164.310 Physical safeguards, http://law.justia.com/us/cfr/title45/45-1.0.1.3.70.3.33.5.html
26 45 C.F.R. § 164.310 Physical safeguards, http://law.justia.com/us/cfr/title45/45-1.0.1.3.70.3.33.5.html

Administrative, Technical, and Physical Controls

Understanding what it takes to secure a facility is the first step in the process of identifying exactly what type of administrative, technical, and physical controls will be necessary for your particular organization. Translating the needs for security into tangible examples, here are some of the controls that can be put in place to enhance security:

● Administrative controls. These include human resources exercises for simulated emergencies such as fire drills or power outages, as well as security awareness training and security policies.
● Technical controls. These include physical intrusion detection systems and access control equipment such as biometrics.
● Physical controls. These include video cameras, guarded gates, man traps, and car traps.

Data Security

Data security is at the core of what needs to be protected in terms of information security and mission-critical systems. Ultimately it is the data that the organization needs to protect in many cases, and usually data is exactly what perpetrators are after, whether trade secrets, customer information, or a database of Social Security numbers; the data is where it's at!

To be able to properly classify and restrict data, the first thing to understand is how data is accessed. Data is accessed by a subject, whether that is a person, process, or another application, and what is accessed to retrieve the data is called an object. Think of an object as a cookie jar with valuable information in it, and only select subjects have the permissions necessary to dip their hands into the cookie jar and retrieve the data or information that they are looking for. Both subjects and objects can be a number of things acting in a network, depending on what action they are taking at any given moment, as shown in Figure 14.12.

Data Classification

Various data classification models are available for different environments. Some security models focus on the confidentiality of the data (such as Bell-La Padula) and use different classifications. For example, the U.S. military uses a model that goes from most confidential (Top Secret) to least confidential (Unclassified) to classify the data on any given system. On the other hand, most corporate entities prefer a model whereby they classify data by business unit (HR, Marketing, R&D, and so on) or use terms such as Company Confidential to define items that should not be shared with the public. Other security models focus on the integrity of the data (for example, Biba); yet others are expressed by mapping security policies to data classification (for example, Clark-Wilson). 
In every case there are areas that require special attention and clarification.

FIGURE 14.12 Subjects access objects (subjects such as people, programs, and processes act on objects such as programs and processes).

Access Control Models

Three main access control models are in use today: RBAC, DAC, and MAC. In Role-Based Access Control (RBAC), the job function of the individual determines the group he is assigned to and the level of access he can attain on certain data and systems. The level of access is usually defined by IT personnel in accordance with policies and procedures. In Discretionary Access Control (DAC), the end user or creator of the data object is allowed to define who can and who cannot access the data; this approach has become less popular in recent years. Mandatory Access Control (MAC) is more of a militant style of applying permissions, where permissions are the same across the board for all members of a certain level or class within the organization.

The following are data security "need to knows":

● Authentication versus authorization. It's crucial to understand that simply because someone becomes authenticated does not mean that they are authorized to view certain data. There needs to be a means by which a person, after gaining access through authentication, is limited in the actions they are authorized to perform on certain data (such as read-only permissions).
● Protecting data with cryptography is important for the security of both the organization and its customers. Usually the most important item that an organization needs to protect, aside from trade secrets, is its customers' personal data. If there is a security breach and the data that was stolen or compromised had previously been encrypted, the organization can feel more secure in that the collateral damage to its reputation and customer base will be minimized.
● Data leakage prevention and content management is an up-and-coming area of data security that has proven extremely useful in preventing sensitive information from leaving an organization. With this relatively new technology, a security administrator can define the types of documents, and further define the content within those documents, that cannot leave the organization, and can quarantine them for inspection before they hit the public Internet.
● Securing email systems is one of the most important and overlooked areas of data security. With access to the mail server, an attacker can snoop through anyone's email, even the company CEO's! Password 
Data classification and security are also \nquite important, if for nothing else to be sure that only \nthose who need to access certain data can and those who \ndo not need access cannot; however, that usually works \nwell for people who play by the rules. In many cases \nwhen an attacker gains access to a system, the first order \nof business is escalation of privileges. This means that \nthe attacker gets in as a regular user and attempts to find \nways to gain administrator or root privileges. \n The following are brief descriptions of each of the \ncomponents that make for a complete security infrastruc-\nture for all host systems and network connected assets. \n Host-Based Security \n The host system is the core of where data sits and is \naccessed, so it is therefore also the main target of many \nintruders. Regardless of the operating system platform \nthat is selected to run certain applications and data-\nbases, the principles of hardening systems are the same \nand apply to host systems as well as network devices, as \nwe will see in the upcoming sections. Steps required to \nmaintain host systems in as secure a state as possible are \nas follows: \n 1. OS hardening. Guidelines by which a base operat-\ning system goes through a series of checks to make \nsure no unnecessary exposures remain open and that \nsecurity features are enabled where possible. There \nis a series of organizations that publish OS hardening \nGuides for various platforms of operating systems. \n 2. Removing unnecessary services. In any operating \nsystem there are usually services that are enabled \nbut have no real business need. It is necessary to \ngo through all the services of your main corporate \nimage, on both the server side and client side, to \ndetermine which services are required and which \nwould create a potential vulnerability if left enabled. \n 3. Patch management. All vendors release updates for \nknown vulnerabilities on some kind of schedule. \nPart of host-based security is making sure that all \nrequired vendor patches, at both the operating system \nand the application level, are applied as quickly as \nbusiness operations allow on some kind of regular \nschedule. There should also be an emergency patch \nprocedure in case there is an outbreak and updates \nneed to be pushed out of sequence. \n 4. Antivirus. Possibly more important than patches are \nantivirus definitions, specifically on desktop and \nmobile systems. Corporate antivirus software should \nbe installed and updated frequently on all systems in \nthe organization. \n 5. Intrusion detection systems (IDSs). Although many \nseem to think IDSs are a network security function, \nthere are many good host-based IDS applications, \nboth commercial and open source, that can signifi-\ncantly increase security and act as an early warning \nsystem for possibly malicious traffic and/or files for \nwhich the AV does not have a definition. \n 6. Firewalls. Host-based firewalls are not as popular \nas they once were because many big vendors such \nas Symantec, McAfee, and Checkpoint have moved \nto a host-based client application that houses all \nsecurity functions in one. There is also another \ntrend in the industry to move toward application-\nspecific host-based firewalls like those specifically \ndesigned to run on a Web or database server, for \nexample. \n 7. Data encryption software. One item often overlooked \nis encryption of data while it is at rest. 
Many solu-\ntions have recently come onto the market that offer \nthe ability to encrypt sensitive data such as credit \ncard and Social Security numbers that sit on your file \nserver or inside the database server. This is a huge \nprotection in the case of information theft or data \nleakage. \n 8. Backup and restore capabilities. Without the abil-\nity to back up and restore both servers and clients in \na timely fashion, an issue that could be resolved in \nshort order can quickly turn into a disaster. Backup \nprocedures should be in place and restored on a regu-\nlar basis to verify their integrity. \n 9. System event logging. Event logs are significant \nwhen you’re attempting to investigate the root cause \n 27 “ GSEC, GIAC Security Essentials Outline, ” SANS Institute, https://\nwww.sans.org/training/description.php?tid=672 \n" }, { "page_number": 273, "text": "PART | II Managing Information Security\n240\nof an issue or incident. In many cases, logging is not \nturned on by default and needs to be enabled after \nthe core installation of the host operating system. \nThe OS hardening guidelines for your organization \nshould require that logging be enabled. \n Network-Based Security \n The network is the communication highway for eve-\nrything that happens between all the host systems. All \ndata at one point or another passes over the wire and \nis potentially vulnerable to snooping or spying by the \nwrong person. The controls implemented on the network \nare similar in nature to those that can be applied to host \nsystems; however, network-based security can be more \neasily classified into two main categories: detection and \nprevention. We will discuss security monitoring tools in \nanother section; for now the main functions of network-\nbased security are to either detect a potential incident \nbased on a set of events or prevent a known attack. \n Most network-based security devices can perform \ndetect or protect functions in one of two ways: signa-\nture-based or anomaly-based. Signature-based detection \nor prevention is similar to AV signatures that look for \nknown traits of a particular attack or malware. Anomaly-\nbased systems can make decisions based on what is \nexpected to be “ normal ” on the network or per a cer-\ntain set of standards (for example, RFC), usually after a \nperiod of being installed in what is called “ learning ” or \n “ monitor ” mode. \n Intrusion Detection \n Intrusion detection is the process of monitoring the events \noccurring in a computer system or network and analyzing \nthem for signs of possible incidents that are violations or \nimminent threats of violation of computer security poli-\ncies, acceptable-use policies, or standard security prac-\ntices. Incidents have many causes, such as malware (e.g., \nworms, spyware), attackers gaining unauthorized access \nto systems from the Internet, and authorized system users \nwho misuse their privileges or attempt to gain additional \nprivileges for which they are not authorized. 28 The most \ncommon detection technologies and their security func-\ntions on the network are as follows: \n ● Packet sniffing and recording tools. These tools are \nused quite often by networking teams to troubleshoot \nconnectivity issues; however, they can be a security \nprofessional’s best friend during investigations and \nroot-cause analysis. 
When properly deployed and main-\ntained, a packet capture device on the network allows \nsecurity professionals to reconstruct data and reverse-\nengineer malware in a way that is simply not possible \nwithout a full packet capture of the communications. \n ● Intrusion detection systems. In these systems, \nappliances or servers monitor network traffic and run \nit through a rules engine to determine whether it is \nmalicious according to its signature set. If the traffic \nis deemed malicious, an alert will fire and notify the \nmonitoring system. \n ● Anomaly detection systems. Aside from the \nactual packet data traveling on the wire, there \nare also traffic trends that can be monitored on \nthe switches and routers to determine whether \nunauthorized or anomalous activity is occurring. \nWith Net-flow and S-flow data that can be sent \nto an appliance or server, aggregated traffic on \nthe network can be analyzed and can alert a \nmonitoring system if there is a problem. Anomaly \ndetection systems are extremely useful when there \nis an attack for which the IDS does not have a \nsignature or if there is some activity occurring that \nis suspicious. \n Intrusion Prevention \n Intrusion prevention is a system that allows for the active \nblocking of attacks while they are inline on the network, \nbefore they even get to the target host. There are many \nways to prevent attacks or unwanted traffic from coming \ninto your network, the most common of which is known \nas a firewall. Although a firewall is mentioned quite com-\nmonly and a lot of people know what a firewall is, there are \nseveral different types of controls that can be put in place in \naddition to a firewall that can seriously help protect the net-\nwork. Here are the most common prevention technologies: \n ● Firewalls. The purpose of a firewall is to enforce an \norganization’s security policy at the border of two \nnetworks. Typically most firewalls are deployed \nat the edge between the internal network and the \nInternet (if there is such a thing) and are configured \nto block (prevent) any traffic from going in or out \nthat is not allowed by the corporate security policy. \nThere are quite a few different levels of protection a \nfirewall can provide, depending on the type of fire-\nwall that is deployed, such as these: \n ● Packet filtering . The most basic type of firewalls per-\nform what is called stateful packet filtering , which \n 28 Karen Scarfone and Peter Mell, NIST Special Publication 800-94: \n“Guide to Intrusion Detection and Prevention Systems (IDPS),” Recom-\nmendations of the National Institute of Standards and Technology, \n http://csrc.nist.gov/publications/nistpubs/800-94/SP800-94.pdf \n" }, { "page_number": 274, "text": "Chapter | 14 Information Security Essentials for IT Managers: Protecting Mission-Critical Systems\n241\nmeans that they can remember which side initiated \nthe connection, and rules (called access control lists, \nor ACLs) can be created based not only on IPs and \nports but also depending on the state of the connec-\ntion (meaning whether the traffic is going into or out \nof the network). \n ● Proxies. The main difference between proxies and \nstateful packet-filtering firewalls is that proxies have \nthe ability to terminate and reestablish connections \nbetween two end hosts, acting as a proxy for all \ncommunications and adding a layer of security and \nfunctionality to the regular firewalls. \n ● Application layer firewalls. 
The app firewalls have \nbecome increasingly popular; they are designed \nto protect certain types of applications (Web or \ndatabase) and can be configured to perform a level \nof blocking that is much more intuitive and granular, \nbased not only on network information but also \napplication-specific variables so that administrators \ncan be much more precise in what they are blocking. \nIn addition, app firewalls can typically be loaded \nwith server-side SSL certificates, allowing the \nappliance to decrypt encrypted traffic, a huge benefit \nto a typical proxy or stateful firewall. \n ● Intrusion prevention systems. An intrusion \nprevention system (IPS) is software that has all the \ncapabilities of an intrusion detection system and can \nalso attempt to stop possible incidents using a set of \nconditions based on signatures or anomalies. \n Business Communications Security \n Businesses today tend to communicate with many other \nbusiness entities, not only over the Internet but also \nthrough private networks or guest access connections \ndirectly to the organization’s network, whether wired \nor wireless. Business partners and contractors conduct-\ning business communications obviously tend to need a \nhigher level of access than public users but not as exten-\nsive as permanent employees, so how does an organiza-\ntion handle this phenomenon? External parties working \non internal projects are also classed as business partners. \nSome general rules for users to maintain security control \nof external entities are shown in Figure 14.13 . \n General Rules for Self-Protection \n The general rules for self-protection are as follows: \n ● Access to a user’s own IT system must be protected \nin such a way that system settings (e.g., in the BIOS) \ncan only be changed subject to authentication. \n ● System start must always be protected by requiring \nappropriate authentication (e.g., requesting the \nboot password). Exceptions to this rule can \napply if: \n ● Automatic update procedures require this, and the \nsystem start can only take place from the built-in \nhard disk. \n ● The system is equipped for use by a number of \npersons with their individual user profiles, and \nsystem start can only take place from the built-in \nhard disk. \n ● Unauthorized access to in-house resources including \ndata areas (shares, folders, mailboxes, calendar, \netc.) must be prevented in line with their need for \nprotection. In addition, the necessary authorizations \nfor approved data access must be defined. \n ● Users are not permitted to operate resources without \nfirst defining any authorizations (such as no global \nsharing). This rule must be observed particularly by \nthose users who are system managers of their own \nresources. \n ● Users of an IT system must lock the access links they \nhave opened (for example, by enabling a screensaver \nor removing the chip card from the card reader), \neven during short periods of absence from their \nworkstations. \n ● When work is over, all open access links must be \nproperly closed or protected against system/data \naccess (such as if extensive compilation runs need to \ntake place during the night). \n ● Deputizing rules for access to the user’s own system \nor data resources must be made in agreement with \nthe manager and the acting employee. \nInternet\nIntranet\nExtranet\nBusiness Partners\nExternal\nOrganizations\nExternal\nPublic Assets\nCore\n FIGURE 14.13 The business communications cloud. 
\n" }, { "page_number": 275, "text": "PART | II Managing Information Security\n242\n Handling Protection Resources \n The handling of protection resources are as follows: \n ● Employees must ensure that their protection \nresources cannot be subject to snooping while data \nrequired for authentication is being entered (e.g., \npassword entry during login). \n ● Employees must store all protection resources and \nrecords in such a way that they cannot be subjected \nto snooping or stolen. \n ● Personal protection resources must never be made \navailable to third parties. \n ● In the case of chip cards, SecurID tokens, or other \nprotection resources requiring a PIN, the associated \nPIN (PIN letter) must be stored separately. \n ● Loss, theft, or disclosure of protection resources is to \nbe reported immediately. \n ● Protection resources subject to loss, theft, or \nsnooping must be disabled immediately. \n Rules for Mobile IT Systems \n In addition to the general rules for users, the following \nrules may also apply for mobile IT systems: \n ● Extended self-protection. \n ● A mobile IT system must be safeguarded against \ntheft (that is, secured with a cable lock, locked away \nin a cupboard). \n ● The data from a mobile IT system using corporate \nproprietary information must be safeguarded as \nappropriate (e.g., encryption). In this connection, \nCERT rules in particular are to be observed. \n ● The software provided by the organization for \nsystem access control may only be used on the \norganization’s own mobile IT systems. \n Operation on Open Networks \n Rules for operation on open networks are as follows: \n ● The mobile IT system must be operated in open net-\nwork environments using a personal firewall. \n ● The configuration of the personal firewall must be in \naccordance with the corporate policy or, in the case \nof other personal firewall systems, must be subject to \nrestrictive settings. \n ● A mobile IT system must be operated in an \nunprotected open network only for the duration of a \nsecure access link to the organization’s own network. \nThe connection establishment for the secure access \nlink must be performed as soon as possible, at least \nwithin five minutes. \n ● Simultaneous operation on open networks (protected \nor unprotected) and the organization’s own networks \nis forbidden at all times. \n ● Remote access to company internal resources \nmust always be protected by means of strong \nauthentication. \n ● For the protection of data being transferred via a \nremote access link, strong encryption must always be \nused. \n Additional Business Communications \nGuidelines \n Additional business communications guidelines should \nbe defined for the following: \n ● External IT systems may not be connected directly \nto the intranet. Transmission of corporate proprietary \ndata to external systems should be avoided wherever \npossible, and copies of confidential or strictly con-\nfidential data must never be created on external IT \nsystems. \n ● Unauthorized access to public data areas (shares, \nfolders, mailboxes, calendars, etc.) is to be \nprevented. The appropriate authentication checks and \nauthorization requirements must be defined and the \noperation of resources without such requirements is \nnot permitted (e.g., no global sharing). \n ● Remote data access operations must be effected \nusing strong authentication and encryption, and \nmanagers must obtain permission from the owner of \nthe resources to access. 
\n ● For secure remote maintenance by business partners, \ninitialization of the remote maintenance must \ntake place from an internal system, such as via an \nInternet connection protected by strong encryption. \nAn employee must be present at the system \nconcerned during the entire remote maintenance \nsession to monitor the remote maintenance in \naccordance with the policy, and the date, nature, and \nextent of the remote maintenance must be logged at \na minimum. \n Wireless Security \n Wireless networking enables devices with wireless \ncapabilities to use information resources without being \nphysically connected to a network. A wireless local \narea network (WLAN) is a group of wireless network-\ning nodes within a limited geographic area that is capa-\nble of radio communications. WLANs are typically \nused by devices within a fairly limited range, such as \n" }, { "page_number": 276, "text": "Chapter | 14 Information Security Essentials for IT Managers: Protecting Mission-Critical Systems\n243\nan office building or building campus, and are usually \nimplemented as extensions to existing wired local area \nnetworks to provide enhanced user mobility. Since the \nbeginning of wireless networking, many standards and \ntechnologies have been developed for WLANs. One \nof the most active standards organizations that address \nwireless networking is the Institute of Electrical and \nElectronics Engineers (IEEE), as outlined in Figure \n14.14 . 29 Like other wireless technologies, WLANs typi-\ncally need to support several security objectives. This \nis intended to be accomplished through a combination \nof security features built into the wireless networking \nstandard. \n The most common security objectives for WLANs \nare as follows: \n ● Access control . Restrict the rights of devices or indi-\nviduals to access a network or resources within a \nnetwork. \n ● Confidentiality . Ensure that communication cannot \nbe read by unauthorized parties. \n ● Integrity . Detect any intentional or unintentional \nchanges to data that occur in transit. \n ● Availability . Ensure that devices and individuals can \naccess a network and its resources whenever needed. \n Access Control \n Typically there are two means by which to validate the \nidentities of wireless devices attempting to connect to \na WLAN: open-system authentication and shared-key \nauthentication. Neither of these alternatives is secure. \nThe security provided by the default connection means \nis unacceptable; all it takes for a host to connect to \nyour system is a Service Set Identifier (SSID) for the \nAP (which is a name that is broadcast in the clear) \nand, optionally, a MAC Address. The SSID was never \nintended to be used as an access control feature. \n A MAC address is a unique 48-bit value that is perma-\nnently assigned to a particular wireless network interface. \nMany implementations of IEEE 802.11 allow administra-\ntors to specify a list of authorized MAC addresses; the \nAP will permit devices with those MAC addresses only \nto use the WLAN. This is known as MAC address filter-\ning . However, since the MAC address is not encrypted, it \nis simple to intercept traffic and identify MAC addresses \nthat are allowed past the MAC filter. Unfortunately, \nalmost all WLAN adapters allow applications to set the \nMAC address, so it is relatively trivial to spoof a MAC \naddress, meaning that attackers can easily gain unauthor-\nized access. 
Additionally, the AP is not authenticated to \nthe host by open-system authentication. Therefore, the \nhost has to trust that it is communicating to the real AP \nand not an impostor AP that is using the same SSID. \nTherefore, open system authentication does not provide \nreasonable assurance of any identities and can easily be \nmisused to gain unauthorized access to a WLAN or to \ntrick users into connecting to a malicious WLAN. 31 \n Confidentiality \n The WEP protocol attempts some form of confidential-\nity by using the RC4 stream cipher algorithm to encrypt \n802.11\nIEEE\nStandard or\nAmendment\nMaximum\nData Rate\nTypical\nRange\nFrequency\nBand\nComments\n2 Mbps\n2.4 GHz\n2.4 GHz\n2.4 GHz\n5 GHz\nNot compatible with 802.11b\nEquipment based on 802.11b has been the\ndominant WLAN technology\nBackward compatible with 802.11b\n50–100 \nmeters\n50–100 \nmeters\n50–100 \nmeters\n50–100 \nmeters\n54 Mbps\n54 Mbps\n11 Mbps\n802.11a\n802.11b\n802.11g\n FIGURE 14.14 IEEE Common Wireless Standards: NIST SP800-97. 30 \n 30 Sheila Frankel, Bernard Eydt, Les Owens, Karen Scarfone, \nNIST Special Publication 800-97: “Establishing Wireless Robust \nSecurity Networks: A Guide to IEEE 802.11i,” Recommendations of \nthe National Institute of Standards and Technology, http://csrc.nist.\ngov/publications/nistpubs/800-97/SP800-97.pdf \n 29 Sheila Frankel, Bernard Eydt, Les Owens, Karen Scarfone, \nNIST Special Publication 800-97: “Establishing Wireless Robust \nSecurity Networks: A Guide to IEEE 802.11i,” Recommendations of \nthe National Institute of Standards and Technology, http://csrc.nist.\ngov/publications/nistpubs/800-97/SP800-97.pdf \n 31 Sheila Frankel, Bernard Eydt, Les Owens, Karen Scarfone, \nNIST Special Publication 800-97: “Establishing Wireless Robust \nSecurity Networks: A Guide to IEEE 802.11i,” Recommendations of \nthe National Institute of Standards and Technology, http://csrc.nist.\ngov/publications/nistpubs/800-97/SP800-97.pdf \n" }, { "page_number": 277, "text": "PART | II Managing Information Security\n244\nwireless communications. The standard for WEP speci-\nfies support for a 40-bit WEP key only; however, many \nvendors offer nonstandard extensions to WEP that sup-\nport key lengths of up to 128 or even 256 bits. WEP also \nuses a 24-bit value known as an initialization vector (IV) \nas a seed value for initializing the cryptographic key-\nstream. Ideally, larger key sizes translate to stronger pro-\ntection, but the cryptographic technique used by WEP \nhas known flaws that are not mitigated by longer keys. \nWEP is not the secure alternative you’re looking for. \n A possible threat against confidentiality is network \ntraffic analysis. Eavesdroppers might be able to gain \ninformation by monitoring and noting which parties \ncommunicate at particular times. Also, analyzing traf-\nfic patterns can aid in determining the content of com-\nmunications; for example, short bursts of activity might \nbe caused by terminal emulation or instant messaging, \nwhereas steady streams of activity might be generated by \nvideoconferencing. More sophisticated analysis might \nbe able to determine the operating systems in use based \non the length of certain frames. Other than encrypting \ncommunications, IEEE 802.11, like most other network \nprotocols, does not offer any features that might thwart \nnetwork traffic analysis, such as adding random lengths \nof padding to messages or sending additional messages \nwith randomly generated data. 
32 \n Integrity \n Data integrity checking for messages transmitted between \nhosts and APs exists and is designed to reject any mes-\nsages that have been changed in transit, such as by a man-\nin-the-middle attack. WEP data integrity is based on a \nsimple encrypted checksum — a 32-bit cyclic redundancy \ncheck (CRC-32) computed on each payload prior to trans-\nmission. The payload and checksum are encrypted using \nthe RC4 keystream, and then transmitted. The receiver \ndecrypts them, recomputes the checksum on the received \npayload, and compares it with the transmitted checksum. \nIf the checksums are not the same, the transmitted data \nframe has been altered in transit, and the frame is dis-\ncarded. Unfortunately, CRC-32 is subject to bit-flipping \nattacks, which means that an attacker knows which CRC-\n32 bits will change when message bits are altered. WEP \nattempts to counter this problem by encrypting the CRC-\n32 to produce an integrity check value (ICV). WEP’s \ncreators believed that an enciphered CRC-32 would be \nless subject to tampering. However, they did not real-\nize that a property of stream ciphers such as WEP’s \nRC4 is that bit flipping survives the encryption pro-\ncess — the same bits flip whether or not encryption is \nused. Therefore, the WEP ICV offers no additional pro-\ntection against bit flipping. 33 \n Availability \n Individuals who do not have physical access to the \nWLAN infrastructure can cause a denial of service for \nthe WLAN. One threat is known as jamming, which \ninvolves a device that emits electromagnetic energy on \nthe WLAN’s frequencies. The energy makes the frequen-\ncies unusable by the WLAN, causing a denial of service. \nJamming can be performed intentionally by an attacker \nor unintentionally by a non-WLAN device transmitting \non the same frequency. Another threat against avail-\nability is flooding, which involves an attacker sending \nlarge numbers of messages to an AP at such a high rate \nthat the AP cannot process them, or other STAs can-\nnot access the channel, causing a partial or total denial \nof service. These threats are difficult to counter in any \nradio-based communications; thus, the IEEE 802.11 \nstandard does not provide any defense against jamming \nor flooding. Also, as described in Section 3.2.1, attack-\ners can establish rogue APs; if STAs mistakenly attach \nto a rogue AP instead of a legitimate one, this could \nmake the legitimate WLAN effectively unavailable to \nusers. Although 802.11i protects data frames, it does not \noffer protection to control or management frames. An \nattacker can exploit the fact that management frames are \nnot authenticated to deauthenticate a client or to disas-\nsociate a client from the network. 34 \n Enhancing Security Controls \n The IEEE 802.11i amendment allows for enhanced secu-\nrity features beyond WEP and the simple IEEE 802.11 \nshared-key challenge-response authentication. 
The amend-\nment introduces the concepts of Robust Security Networks \n 33 Sheila Frankel, Bernard Eydt, Les Owens Karen, Scarfone, \nNIST Special Publication 800-97: “Establishing Wireless Robust \nSecurity Networks: A Guide to IEEE 802.11i,” Recommendations of \nthe National Institute of Standards and Technology, http://csrc.nist.\ngov/publications/nistpubs/800-97/SP800-97.pdf \n 34 Sheila Frankel, Bernard Eydt, Les Owens Karen, Scarfone, \nNIST Special Publication 800-97: “Establishing Wireless Robust \nSecurity Networks: A Guide to IEEE 802.11i,” Recommendations of \nthe National Institute of Standards and Technology, http://csrc.nist.\ngov/publications/nistpubs/800-97/SP800-97.pdf \n 32 Sheila Frankel, Bernard Eydt, Les Owens, Karen Scarfone, \nNIST Special Publication 800-97: “Establishing Wireless Robust \nSecurity Networks: A Guide to IEEE 802.11i,” Recommendations of \nthe National Institute of Standards and Technology, http://csrc.nist.\ngov/publications/nistpubs/800-97/SP800-97.pdf \n" }, { "page_number": 278, "text": "Chapter | 14 Information Security Essentials for IT Managers: Protecting Mission-Critical Systems\n245\n(RSNs) (see Figure 14.15 ) and Robust Security Network \nAssociations (RSNAs). There are two RSN data confiden-\ntiality and integrity protocols defined in IEEE 802.11i —\n Temporal Key Integrity Protocol (TKIP) and Counter Mode \nwith Cipher-Block Chaining Message Authentication Code \nProtocol (CCMP). \n At a high level, RSN includes IEEE 802.1x port-\nbased access control, key management techniques, and \nthe TKIP and CCMP data confidentiality and integrity \nprotocols. These protocols allow for the creation of sev-\neral diverse types of security networks because of the \nnumerous configuration options. RSN security is at the \nlink level only, providing protection for traffic between \na wireless host and its associated AP or between one \nwireless host and another. It does not provide end-to-end \napplication-level security, such as between a host and an \nemail or Web server, because communication between \nthese entities requires more than just one link. For infra-\nstructure mode, additional measures need to be taken to \nprovide end-to-end security. \n The IEEE 802.11i amendment defines an RSN as \na wireless network that allows the creation of RSN \nAssociations (RSNAs) only. An RSNA is a security \nrelationship established by the IEEE 802.11i 4-Way \nHandshake. The 4-Way Handshake validates that the \nparties to the protocol instance possess a pairwise mas-\nter key (PMK), synchronize the installation of temporal \nkeys, and confirm the selection of cipher suites. The \nPMK is the cornerstone of a number of security fea-\ntures absent from WEP. Complete robust security is \nconsidered possible only when all devices in the net-\nwork use RSNAs. In practice, some networks have a \nmix of RSNAs and non-RSNA connections. A network \nthat allows the creation of both pre-RSN associations \n(pre-RSNA) and RSNAs is referred to as a Transition \nSecurity Network (TSN). A TSN is intended to be an \ninterim means to provide connectivity while an organiza-\ntion migrates to networks based exclusively on RSNAs. \nRSNAs enable the following security features for IEEE \n802.11 WLANs: \n ● Enhanced user authentication mechanisms \n ● Cryptographic key management \n ● Data confidentiality \n ● Data origin authentication and integrity \n ● Replay protection \n An RSNA relies on IEEE 802.1x to provide an \nauthentication framework. 
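Before turning to the 802.11i algorithms, the WEP weaknesses described under Integrity are easy to demonstrate concretely. The short sketch below (Python) is not the actual WEP construction: a toy keystream stands in for RC4 output, and zlib's CRC-32 stands in for the exact ICV computation. It shows the two properties the text relies on: flipping ciphertext bits flips exactly the same plaintext bits after decryption, and CRC-32 is linear, so the attacker can predict how the checksum changes without knowing the key.

# Toy demonstration of stream-cipher bit flipping and CRC-32 linearity.
# NOT real WEP: the keystream below is a stand-in for RC4, and plain
# zlib.crc32 stands in for the WEP ICV.

import zlib

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"PAY 0100 DOLLARS"
keystream = bytes((i * 73 + 41) % 256 for i in range(len(plaintext)))  # stand-in for RC4
ciphertext = xor_bytes(plaintext, keystream)

# Attacker flips bits without knowing the key: change "0100" into "9100".
delta = xor_bytes(b"PAY 0100 DOLLARS", b"PAY 9100 DOLLARS")
tampered_ciphertext = xor_bytes(ciphertext, delta)

# The same bit flips reappear in the decrypted plaintext.
decrypted = xor_bytes(tampered_ciphertext, keystream)
print(decrypted)  # b'PAY 9100 DOLLARS'

# CRC-32 linearity: the new checksum is predictable from the old one and delta.
zeros = bytes(len(plaintext))
predicted = zlib.crc32(plaintext) ^ zlib.crc32(delta) ^ zlib.crc32(zeros)
print(predicted == zlib.crc32(decrypted))  # True

Because the attacker can adjust the encrypted checksum by the predicted difference, an enciphered CRC-32 offers no integrity protection, which is precisely the gap that TKIP's Michael MIC and CCMP's AES-CBC-MAC were introduced to close.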
To achieve the robust security \nof RSNAs, the designers of the IEEE 802.11i amend-\nment used numerous mature cryptographic algorithms \nand techniques. These algorithms can be categorized as \nbeing used for confidentiality, integrity (and data origin \nauthentication), or key generation. All the algorithms \nspecifically referenced in the IEEE 802.11 standard (see \n Figure 14.16 ) are symmetric algorithms, which use the \nsame key for two different steps of the algorithm, such \nas encryption and decryption. \n TKIP is a cipher suite for enhancing WEP on pre-\nRSN hardware without causing significant performance \nPre-Robust\nSecurity Networks\nIEEE 802.11 Security\nRobust Security\nNetworks\nWEP\nConfidentiality\nOpen System\nShared Key\nAuthentication\nAuthentication\nand Key\nGeneration\nConfidentiality, Data\nOrigin Authentication,\nand Integrity and\nReplay Protection\nIEEE 802.1x\nPort-based\nAccess Control\nAccess Control\nEAP\nTKIP\nCCMP\n FIGURE 14.15 High-level taxonomy of the major pre-RSN and RSN security mechanisms. 35 \n 35 Sheila Frankel, Bernard Eydt, Les Owens Karen, Scarfone, \nNIST Special Publication 800-97: “Establishing Wireless Robust \nSecurity Networks: A Guide to IEEE 802.11i,” Recommendations of \nthe National Institute of Standards and Technology, http://csrc.nist.\ngov/publications/nistpubs/800-97/SP800-97.pdf \n" }, { "page_number": 279, "text": "PART | II Managing Information Security\n246\ndegradation. TKIP works within the processing con-\nstraints of first-generation hosts and APs and therefore \nenables increased security without requiring hardware \nreplacement. TKIP provides the following fundamental \nsecurity features for IEEE 802.11 WLANs: \n ● Confidentiality protection using the RC4 \nalgorithm 38 \n ● Integrity protection against several types of attacks 39 \nusing the Michael message digest algorithm (through \ngeneration of a message integrity code [MIC]) 40 \n ● Replay prevention through a frame-sequencing \ntechnique \n ● Use of a new encryption key for each frame to \nprevent attacks, such as the Fluhrer-Mantin-Shamir \n(FMS) attack, which can compromise WEP-based \nWLANs 41 \n ● Implementation of countermeasures whenever the \nSTA or AP encounters a frame with a MIC error, \nwhich is a strong indication of an active attack \n Web and Application Security \n Web and application security has come to center stage \nrecently because Web sites and other public-facing \napplications have had so many vulnerabilities reported \nthat it is often trivial to find some part of the application \nthat is vulnerable to one of the many exploits out there. \nWhen an attacker compromises a system at the applica-\ntion level, often it is too trivial to take advantage of all \nthe capabilities said application has to offer, including \nquerying the back-end database or accessing propri-\netary information. In the past it was not necessary to \nimplement security during the development phase of an \napplication, and since most security professionals are \nnot programmers, that worked out just fine; however, \ndue to factors such as rushing software releases and a \ncertain level of complacency where end users expect \nbuggy software and apply patches, the trend of inserting \nsecurity earlier in the development process is catching \nsteam. \n Web Security \n Web security is unique to every environment; any appli-\ncation and service that the organization wants to deliver \nto the customer will have its own way of perform-\ning transactions. 
Static Web sites with little content or searchable areas of course pose the least risk, but they also offer the least functionality. Who wants a Web site they can't sell anything from? Implementing something like a shopping cart or content delivery on your site opens up new, unexpected aspects of Web security. Among the things that need to be considered is whether it is worth developing the application in-house or buying one off the shelf and relying on someone else for the maintenance and patching. With some of these thoughts in mind, here are some of the biggest threats associated with having a public-facing Web site:

● Vandalism
● Financial fraud
● Privileged access
● Theft of transaction information
● Theft of intellectual property
● Denial-of-service (DoS) attacks
● Input validation errors
● Path or directory traversal
● Unicode encoding
● URL encoding

FIGURE 14.16 Taxonomy of the cryptographic algorithms included in the IEEE 802.11 standard, grouped by purpose (confidentiality, integrity, and key generation): WEP (RC4), TKIP (RC4), CCM (AES-CTR), TKIP's Michael MIC, CCM (AES-CBC-MAC), HMAC-SHA-1, HMAC-MD5, NIST Key Wrap, RFC 1750, and proprietary methods.36

36 Sheila Frankel, Bernard Eydt, Les Owens, and Karen Scarfone, NIST Special Publication 800-97: "Establishing Wireless Robust Security Networks: A Guide to IEEE 802.11i," Recommendations of the National Institute of Standards and Technology, http://csrc.nist.gov/publications/nistpubs/800-97/SP800-97.pdf

Some Web application defenses that can be implemented have already been discussed; they include:

● Web application firewalls
● Intrusion prevention systems
● SYN proxies on the firewall

Application Security

An integrated approach to application security (see Figure 14.17) in the organization is required for successful deployment and secure maintenance of all applications. A corporate initiative to define, promote, assure, and measure the security of critical business applications would greatly enhance an organization's overall security. One of the biggest obstacles, as mentioned in the previous section, is that security professionals are not typically developers, so application security is often left to IT or R&D personnel, which can lead to gaping holes. Components of an application security program consist of37:

● People. Security architects, managers, technical leads, developers, and testers.
● Policy. Integrate security steps into your SDLC and ADLC; have security baked in, not bolted on. Find security issues early so that they are easier and cheaper to fix. Measure compliance; are the processes working? Inventory and categorize your applications.
● Standards. Which controls are necessary, and when and why? Use standard methods to implement each control. Provide references on how to implement and define requirements.
● Assessments. Security architecture/design reviews, security code reviews, application vulnerability tests, risk acceptance reviews, external penetration tests of production applications, and a white-box philosophy. Look inside the application, and use all the advantages you have, such as past reviews, design documents, code, logs, interviews, and so on. 
Attackers have \nadvantages over you; don’t tie your hands. \n ● Training. Take awareness and training seriously. All \ndevelopers should be performing their own input \nvalidation in their code and need to be made aware \nof the security risks involved in sending unsecure \ncode into production. \n Security Policies and Procedures \n A quality information security program begins and \nends with the correct information security policy (see \n Figure 14.18 ). Policies are the least expensive means \nof control and often the most difficult to implement. \nPeriod\nJan–Jun 2007\nJul–Dec 2007\n100%\n90%\n960\n(39%)\n889\n(42%)\n1,501\n(61%)\n1,245\n(58%)\n80%\n70%\n60%\n50%\n40%\n30%\nPercentage of vulnerabilities\n20%\n10%\n0%\nNon-Web application vulnerabilities\nWeb-application vulnerabilities\n FIGURE 14.17 Symantec Web application vulnerabilities by share. \n 37 “ AppSec2005DC-Anthony Canike-Enterprise AppSec Program \nPowerPoint \nPresentation, ” \nOWASP, \n http://www.owasp.org/index.\nphp/Image:AppSec2005DC-Anthony_Canike-Enterprise_AppSec_\nProgram.ppt \n" }, { "page_number": 281, "text": "PART | II Managing Information Security\n248\nAn information security policy is a plan that influences \nand determines the actions taken by employees who are \npresented with a policy decision regarding information \nsystems. Other components related to a security policy \nare practices, procedures, and guidelines, which attempt \nto explain in more detail the actions that are to be taken \nby employees in any given situation. For policies to be \neffective, they must be properly disseminated, read, \nunderstood, and agreed to by all employees as well as \nbacked by upper management. Without upper manage-\nment support, a security policy is bound to fail. Most \ninformation security policies should contain at least: \n ● An overview of the corporate philosophy on security \n ● Information about roles and responsibilities for \nsecurity shared by all members of the organization \n ● Statement of purpose \n ● Information technology elements needed to define \ncertain controls or decisions \n ● The organization’s security responsibilities defining \nthe security organization structure \n ● References to IT standards and guidelines, such as \nGovernment Policies and Guidelines, FISMA, http://\niase.disa.mil/policy-guidance/index.html#FISMA \nand NIST Special Publications (800 Series), and \n http://csrc.nist.gov/publications/PubsSPs.html . \n Some basic rules must be followed when you’re \nshaping a policy: \n ● Never conflict with the local or federal law. \n ● Your policy should be able to stand up in court. \n ● It must be properly supported and administered by \nmanagement. \n ● It should contribute to the success of the \norganization. \n ● It should involve end users of information systems \nfrom the beginning. \n Security Employee Training and Awareness \n The Security Employee Training and Awareness (SETA) \nprogram is a critical component of the information secu-\nrity program. It is the vehicle for disseminating security \ninformation that the workforce, including managers, \nneed to do their jobs. In terms of the total security solu-\ntion, the importance of the workforce in achieving infor-\nmation security goals and the importance of training as \na countermeasure cannot be overstated. 
Establishing and maintaining a robust and relevant information security awareness and training program as part of the overall information security program is the primary conduit for providing employees with the information and tools needed to protect an agency's vital information resources. These programs will ensure that personnel at all levels of the organization understand their information security responsibilities to properly use and protect the information and resources entrusted to them. Agencies that continually train their workforces in organizational security policy and role-based security responsibilities will have a higher rate of success in protecting information. 38
 As cited in audit reports, periodicals, and conference presentations, people are arguably the weakest element in the security formula that is used to secure systems and networks. The people factor, not technology, is a critical one that is often overlooked in the security equation. It is for this reason that the Federal Information Security Management Act (FISMA) and the Office of Personnel Management (OPM) have mandated that more and better attention must be devoted to awareness activities and role-based training, since they are the only security controls that can minimize the inherent risk that results from the people who use, manage, operate, and maintain information systems and networks. Robust and enterprisewide awareness and training programs are needed to address this growing concern. 39
 38 Pauline Bowen, Joan Hash and Mark Wilson, NIST Special Publication 800-100: Information Security Handbook: A Guide for Managers, Recommendations of the National Institute of Standards and Technology, http://csrc.nist.gov/publications/nistpubs/800-100/SP800-100-Mar07-2007.pdf
 39 Pauline Bowen, Joan Hash and Mark Wilson, NIST Special Publication 800-100: Information Security Handbook: A Guide for Managers, Recommendations of the National Institute of Standards and Technology, http://csrc.nist.gov/publications/nistpubs/800-100/SP800-100-Mar07-2007.pdf
 FIGURE 14.18 Information security policy within your organization, CSI/FBI report, 2008 (512 respondents): formal policy established, 67%; formal policy being developed, 18%; informal policy, 12%; no policy, 1%; other, 2%.
 The Ten Commandments of SETA
 The Ten Commandments of SETA consist of the following:
 1. Information security is a people, rather than a technical, issue.
 2. If you want them to understand, speak their language.
 3. If they cannot see it, they will not learn it.
 4. Make your point so that you can identify it and so can they.
 5. Never lose your sense of humor.
 6. Make your point, support it, and conclude it.
 7. Always let the recipients know how the behavior that you request will affect them.
 8. Ride the tame horses.
 9. Formalize your training methodology.
 10. Always be timely, even if it means slipping schedules to include urgent information.
 Depending on the level of targeted groups within the organization, the goal is first awareness, then training, and eventually the education of all users as to what is acceptable security. Figure 14.19 shows a matrix of teaching methods and measures that can be implemented at each level.
Targeting the right people and providing the right information are crucial when you're developing a security awareness program. Therefore, some of the items that must be kept in mind are focusing on people, not so much on technologies; refraining from using technical jargon; and using every available venue, such as newsletters or memos, online demonstrations, and in-person classroom sessions. By not overloading users and by helping them understand their roles in information security, you can establish a program that is effective, identifies target audiences, and defines program scope, goals, and objectives. Figure 14.20 presents a snapshot, according to the 2008 CSI/FBI report, of where the SETA program stands in 460 different U.S. organizations.
 4. SECURITY MONITORING AND EFFECTIVENESS
 Security monitoring and effectiveness are the next evolution: a constant presence of security-aware personnel who actively monitor and research events in real time. A substantial number of suspicious events occur within most enterprise networks and computer systems every day and go completely undetected. Only with an effective security monitoring strategy, an incident response plan, and security validation and metrics in place will an optimal level of security be attained. The idea is to automate and correlate as much as possible between events and vulnerabilities and to build intelligence into security tools so that they alert you if a known bad set of events has occurred or a known vulnerability is actually being attacked.
 To come full circle, you need to define a security monitoring and log management strategy, an integrated incident response plan, validation and penetration exercises against security controls, and security metrics to help measure whether there has been improvement in your organization's handling of these issues.
 FIGURE 14.19 Matrix of security teaching methods and measures that can be implemented. The matrix contrasts awareness (attribute "what"; level, information; objective, recognition; teaching method, media such as videos, newsletters, and posters; test measure, true/false and multiple choice to identify learning; short-term impact), training (attribute "how"; level, knowledge; objective, skill; teaching method, practical instruction such as lecture, case study workshop, and hands-on practice; test measure, problem solving to apply learning; intermediate impact), and education (attribute "why"; level, insight; objective, understanding; teaching method, theoretical instruction such as discussion seminars and background reading; test measure, essay to interpret learning; long-term impact).
 Security Monitoring Mechanisms
 Security monitoring involves real-time or near-real-time monitoring of events and activities happening on all your organization's important systems at all times. To properly monitor an organization for technical events that can lead to an incident or an investigation, an organization usually uses a security information and event management (SIEM) and/or log management tool. These tools are used by security analysts and managers to filter through tons of event data and to identify and focus on only the most interesting events.
 Understanding the regulatory and forensic impact of event and alert data in any given enterprise takes planning and a thorough understanding of the quantity of data the system will be required to handle.
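As a rough illustration of the filtering and correlation such tools perform, the following sketch reduces a stream of raw, syslog-like lines to a short list of interesting events. The log format, the field names, and the alerting threshold are assumptions made for the example; a real SIEM normalizes many formats and correlates far more event types than repeated failed logins.

    # Toy illustration of SIEM-style filtering and correlation; the log format and
    # the threshold are assumptions for the example, not any product's behavior.
    import re
    from collections import Counter

    FAILED_LOGIN = re.compile(
        r"^(?P<ts>\S+) (?P<host>\S+) sshd.*Failed password for (?P<user>\S+) from (?P<src>\S+)"
    )

    def correlate_failed_logins(log_lines, threshold=10):
        """Normalize raw log lines and flag sources with many failed logins."""
        failures_per_source = Counter()
        for line in log_lines:
            event = FAILED_LOGIN.match(line)
            if event:  # keep only the interesting events, discard everything else
                failures_per_source[event.group("src")] += 1
        # Correlation step: many failures from one source suggests a brute-force attempt.
        return [src for src, count in failures_per_source.items() if count >= threshold]

    # Usage sketch:
    # with open("auth.log") as f:
    #     for src in correlate_failed_logins(f, threshold=10):
    #         print("possible brute-force source:", src)

The point is not the specific rule but the workflow: normalization turns thousands of individually unremarkable records into structured events, and correlation turns those events into a handful of items worth an analyst's attention.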
The better logs can be stored, understood, and correlated, the better the possibility of detecting an incident in time for mitigation. In this case, what you don't know will hurt you. Responding to incidents, identifying anomalous or unauthorized behavior, and securing intellectual property have never been more important. Without a solid log management strategy it becomes nearly impossible to have the necessary data to perform a forensic investigation, and without monitoring tools, identifying threats and responding to attacks against confidentiality, integrity, or availability become much more difficult. For a network to be compliant and an incident response or forensics investigation to be successful, it is critical that a mechanism be in place to do the following:
 ● Securely acquire and store raw log data for as long as possible from as many disparate devices as possible, while providing search and restore capabilities of these logs for analysis.
 ● Monitor interesting events coming from all important devices, systems, and applications in as near real time as possible.
 ● Run regular vulnerability scans on your hosts and devices and correlate these vulnerabilities to intrusion detection alerts or other interesting events, identifying high-priority attacks as they happen and minimizing false positives.
 SIEM and log management solutions in general can assist in security information monitoring (see Figure 14.21) as well as regulatory compliance and incident response by:
 ● Aggregating and normalizing event data from unrelated network devices, security devices, and application servers into usable information.
 ● Analyzing and correlating information from various sources such as vulnerability scanners, IDS/IPS, firewalls, servers, and so on, to identify attacks as soon as possible and help respond to intrusions more quickly.
 ● Conducting network forensic analysis on historical or real-time events through visualization and replay of events.
 ● Creating customized reports for better visualization of your organizational security posture.
 ● Increasing the value and performance of existing security devices by providing a consolidated event management and analysis platform.
 ● Improving effectiveness and helping focus IT risk management personnel on the events that are important.
 ● Meeting regulatory compliance and forensics requirements by securely storing all event data on a network for long-term retention and enabling instant accessibility to archived data.
 FIGURE 14.20 Awareness training metrics, CSI/FBI survey, 2007 and 2008 (460 respondents in 2008); categories include mandatory written/digital tests, social engineering testing, staff reports of experiences, volume and type of help-desk issues, volume and type of incidents, not measuring effectiveness, not using awareness training, and other.
 FIGURE 14.21 Security monitoring. The figure relates secure log management to correlation and alerting, forensic analysis, security monitoring, and compliance reporting.
 Incident Response and Forensic Investigations
 Network forensic investigation is the investigation and analysis of all the packets and events generated on any given network in the hope of identifying the proverbial needle in a haystack.
Tightly related is incident response, \nwhich entails acting in a timely manner to an identified \nanomaly or attack across the system. To be successful, \nboth network investigations and incident response rely \nheavily on proper event and log management techniques. \nBefore an incident can be responded to there is the chal-\nlenge of determining whether an event is a routine sys-\ntem event or an actual incident. This requires that there be \nsome framework for incident classification (the process of \nexamining a possible incident and determining whether or \nnot it requires a reaction). Initial reports from end users, \nintrusion detection systems, host- and network-based \nmalware detection software, and systems administrators \nare all ways to track and detect incident candidates. 40 \n As mentioned in earlier sections, the phases of an \nincident usually unfold in the following order: prepara-\ntion, identification (detection), containment, eradication, \nrecovery and lessons learned. The preparation phase \nrequires detailed understanding of information systems \nand the threats they face; so to perform proper planning \nan organization must develop predefined responses that \nguide users through the steps needed to properly respond \nto an incident. Predefining incident responses enables \nrapid reaction without confusion or wasted time and \neffort, which can be crucial for the success of an incident \nresponse. Identification occurs once an actual incident \nhas been confirmed and properly classified as an incident \nthat requires action. At that point the IR team moves \nfrom identification to containment. In the containment \nphase, a number of action steps are taken by the IR team \nand others. These steps to respond to an incident must \noccur quickly and may occur concurrently, including \nnotification of key personnel, the assignment of tasks, \nand documentation of the incident. Containment strate-\ngies focus on two tasks: first, stopping the incident from \ngetting any worse, and second, recovering control of the \nsystem if it has been hijacked. \n Once the incident has been contained and system \ncontrol regained, eradication can begin, and the IR team \nmust assess the full extent of damage to determine what \nmust be done to restore the system. Immediate deter-\nmination of the scope of the breach of confidentiality, \nintegrity, and availability of information and informa-\ntion assets is called incident damage assessment . Those \nwho document the damage must be trained to collect and \npreserve evidence in case the incident is part of a crime \ninvestigation or results in legal action. \n At the moment that the extent of the damage has been \ndetermined, the recovery process begins to identify and \nresolve vulnerabilities that allowed the incident to occur in \nthe first place. The IR team must address the issues found \nand determine whether they need to install and/or replace/\nupgrade the safeguards that failed to stop or limit the \nincident or were missing from system in the first place. \nFinally, a discussion of lessons learned should always be \nconducted to prevent future similar incidents from occur-\nring and review what could have been done differently. 41 \n Validating Security Effectiveness \n The process of validating security effectiveness com-\nprises making sure that the security controls that you \nhave put in place are working as expected and that they \nare truly mitigating the risks they claim to be mitigating. 
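One small, concrete way to do that checking is sketched below: documented policy values are compared with what is actually configured on a host, and every mismatch becomes a finding. The policy entries and the gather_actual_settings() helper are placeholders; how settings are collected (local queries, a configuration management database, a baseline scanner) will differ by organization, and the exercise corresponds to the "go with policy in hand and verify the settings" areas listed just below.

    # Minimal sketch of policy-versus-actual validation. The policy values and the
    # gather_actual_settings() helper are placeholders, not a specific tool's API.
    EXPECTED_POLICY = {
        "password_min_length": 12,
        "password_max_age_days": 90,
        "firewall_enabled": True,
        "audit_logging_enabled": True,
    }

    def gather_actual_settings(host):
        """Placeholder: in practice, query the operating system, a configuration
        management database, or a baseline scanner for the host's settings."""
        raise NotImplementedError

    def validate_host(host):
        """Compare documented policy with what is actually configured on a host."""
        actual = gather_actual_settings(host)
        findings = []
        for setting, expected in EXPECTED_POLICY.items():
            if actual.get(setting) != expected:
                findings.append((setting, expected, actual.get(setting)))
        return findings  # an empty list means the host matches the written policy

    # Each (setting, expected, actual) tuple is an exception that assigned IT
    # personnel can resolve before an external auditor finds it.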
\nThere is no way to be sure that your network is not vul-\nnerable to something if you haven’t validated it yourself. \nEnsuring that the information security policy addresses \nyour organizational needs and assessing compliance with \nyour security policy across all systems, assets, appli-\ncations, and people is the only way to have a concrete \nmeans of validation. \n Here are some areas where actual validation should be \nperformed — in other words, these are areas where assigned \nIT personnel should go with policy in hand, log in, and \nverify the settings and reports before the auditors do: \n ● Verifying operating system settings \n ● Reviewing security device configuration and \nmanagement \n 40 M. E. Whitman, H. J. Mattord, Management of Information Security , \nCourse Technology, 2nd Edition, March 27, 2007. \n 41 M. E. Whitman, H. J. Mattord, Management of Information Security , \nCourse Technology, 2nd Edition, March 27, 2007. \n" }, { "page_number": 285, "text": "PART | II Managing Information Security\n252\n ● Establishing ongoing security tasks \n ● Maintaining physical security \n ● Auditing security logs \n ● Creating an approved product list \n ● Reviewing encryption strength \n ● Providing documentation and change control \n Vulnerability Assessments and Penetration \nTests \n Validating security (see Figure 14.22 ) with internal as \nwell as external vulnerability assessments and pen-\netration tests is a good way to measure an increase or \ndecrease in overall security, especially if similar assess-\nments are conducted on a regular basis. There are several \nways to test security of applications, hosts, and network \ndevices. With a vulnerability assessment, usually limited \nscanning tools or just one scanning tool is used to deter-\nmine vulnerabilities that exist in the target system. Then \na report is created and the manager reviews a holistic \npicture of security. With authorized penetration tests it’s \na little different. In that case the data owner is allow-\ning someone to use just about any means within reason \n(in other words, many different tools and techniques) \nto gain access to the system or information. A success-\nful penetration test does not provide the remediation \navenues that a vulnerability assessment does; rather, it is \na good test of how difficult it would be for someone to \ntruly gain access if he were trying. \n REFERENCES \n [1] R. Richardson, CSI Director, 2008 CSI Computer Crime & \nSecurity Survey, CSI Website, http://i.cmpnet.com/v2.gocsi.com/\npdf/CSIsurvey2008.pdf . \n [2] 45 C.F.R. § 164.310 Physical safeguards, Justia Website, http://\nlaw.justia.com/us/cfr/title45/45-1.0.1.3.70.3.33.5.html . \n [3] S. Saleh AlAboodi, A New Approach for Assessing the Maturity \nof Information Security, CISSP, www.isaca.org/Template.\ncfm?Section \u0003 Home & CONTENTID \u0003 34805 & TEMPLATE \u0003 /\nContentManagement/ContentDisplay.cfm . \n [4] A. Jaquith , Security Metrics: Replacing Fear, Uncertainty and \nDoubt , Addison-Wesley , 2007 . \n [5] AppSec2005DC-Anthony Canike-Enterprise AppSec Program \nPowerPoint Presentation, OWASP, www.owasp.org/index.php/\nImage:AppSec2005DC-Anthony_Canike-Enterprise_AppSec_\nProgram.ppt . \n [6] CISSP 10 Domains ISC2 Website, https://www.isc2.org/cissp/\ndefault.aspx. \n [7] Cloud Computing: The Enterprise Cloud, Terremark Worldwide \nInc. Web site, www.theenterprisecloud.com /. 
\n [8] Defense in Depth: A Practical Strategy for Achieving Information \nAssurance in Today’s Highly Networked Environments, National \nSecurity Agency, Information Assurance Solutions Group – STE 6737. \n [9] Definition of Defense in Depth, OWASP Web site, www.owasp.\norg/index.php/Defense_in_depth . \n [10] Definition \nof \nInformation \nSecurity, \nWikipedia, \n http://\nen.wikipedia.org/wiki/Information_security . \n [11] GSEC, GIAC Security Essentials Outline, SANS Institute, https://\nwww.sans.org/training/description.php?tid \u0003 672 . \n [12] ISO 17799 Security Standards, ISO Web site, www.iso.org/iso/sup-\nport/faqs/faqs_widely_used_standards/widely_used_standards_\nother/information_security.htm . \n [13] M.E. Whitman, H.J. Mattord., Management of Information Security. \n [14] M. Krause, H.F. Tipton, Information Security Management \nHandbook , sixth ed., Auerbach Publications, CRC Press LLC. \n [15] K. Scarfone, P. Mell, NIST Special Publication 800-94: Guide to \nIntrusion Detection and Prevention Systems (IDPS), Recommendations \nof the National Institute of Standards and Technology, http://csrc.nist.\ngov/publications/nistpubs/800-94/SP800-94.pdf . \n [16] S. Frankel Bernard, E.L. Owens, K. Scarfone, NIST Special \nPublication 800-97: Establishing Wireless Robust Security \nNetworks: A Guide to IEEE 802.11i, Recommendations of the \nNational Institute of Standards and Technology, http://csrc.nist.\ngov/publications/nistpubs/800-97/SP800-97.pdf . \n [17] P. Bowen, J. Hash, M. Wilson, NIST Special Publication 800-\n100: Information Security Handbook: A Guide for Managers, \nRecommendations of the National Institute of Standards and \nTechnology, \n http://csrc.nist.gov/publications/nistpubs/800-100/\nSP800-100-Mar07-2007.pdf . \nNo techniques\n13%\n46%\n47%\n49%\n49%\n49%\n55%\n64%\nExternal pen testing\nInternal pen testing\nE-mail monitoring\nWeb monitoring\nExternal audits\nAutomated tools\nInternal audits\n0\n10\n20\n30\n40\n50\n60\n70\n2008: 496 Respondents\n FIGURE 14.22 Security validation techniques, CSI/FBI survey, 2008. \n" }, { "page_number": 286, "text": "Chapter | 14 Information Security Essentials for IT Managers: Protecting Mission-Critical Systems\n253\n [18] R.A. Caralli, W.R. Wilson, and the Survivable Enterprise \nManagement Team, The Challenges of Security Management, \nNetworked Systems Survivability Program, Software Engineering \nInstitute, www.cert.org/archive/pdf/ESMchallenges.pdf . \n [19] S. Harris, All in One CISSP Certification Exam Guide, fourth \ned., McGraw Hill. \n [20] Symantec Global Internet Security Threat Report, Trends for \nJuly – December 2007, Vol. XII, published April 2008, Symantec \nWeb site, http://eval.symantec.com/mktginfo/enterprise/white_\npapers/b-whitepaper_internet_security_threat_report_xiii_04-\n2008.en-us.pdf \n [21] L. Galway, Quantitative Risk Analysis for Project Management, A \nCritical Review, WR-112-RC, February 2004, Rand.org Web site, \n www.rand.org/pubs/working_papers/2004/RAND_WR112.pdf . \n" }, { "page_number": 287, "text": "This page intentionally left blank\n" }, { "page_number": 288, "text": "255\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n Security Management Systems \n Joe Wright \n Computer Bits, Inc. \n Jim Harmening \n Computer Bits, Inc. 
\n Chapter 15 \n Today, when most companies and government agencies \nrely on computer networks to store and manage their \norganizations ’ data, it is essential that measures are put \nin place to secure those networks and keep them func-\ntioning optimally. Network administrators need to define \ntheir security management systems to cover all parts of \ntheir computer and network resources. \n Security management systems are sets of policies \nput place by an organization to maintain the security of \ntheir computer and network resources. These policies are \nbased on the types of resources that need to be secured, \nand they depend on the organization. Some groups of \npolicies can be applied to entire industries; others are \nspecific to an individual organization. \n A security management system starts as a set of poli-\ncies that dictate the way in which computer resources \ncan be used. The policies are then implemented by the \norganization’s technical departments and enforced. This \ncan be easy for smaller organizations but can require a \nteam for larger international organizations that have \nthousands of business processes. Either way, measures \nneed to be put in place to prevent, respond to, and fix \nsecurity issues that arise in an organization. \n 1. SECURITY MANAGEMENT SYSTEM \nSTANDARDS \n To give organizations a starting point to develop their \nown security management systems, the International \nOrganization \nfor \nStandardization \n(ISO) \nand \nthe \nInternational Electrotechnical Commission (IEC) have \ndeveloped a family of standards known as the Information \nSecurity Management System 27000 Family of Standards. \nThis group of standards, starting with ISO/IEC 27001, \ngives organizations the ability to certify their security \nmanagement systems. \n The certification process takes place in several stages. \nThe first stage is an audit of all documentation and policies \nthat currently exist for a system. The documentation is \nusually based directly on the requirements of the stand-\nard, but it does not have to be. Organizations can come \nup with their own sets of standards, as long as all aspects \nof the standard are covered. The second stage actually \ntests the effectiveness of the existing policies. The third \nstage is a reassessment of the organization to make sure \nit still meets the requirements. This third stage keeps \norganizations up to date over time as standards for secu-\nrity management systems change. This certification pro-\ncess is based on a Plan-Do-Check-Act iterative process: \n ● Plan the security management system and create the \npolicies that define it. \n ● Do implement the policies in your organization. \n ● Check to ensure the security management system’s \npolicies are protecting the resources they were meant \nto protect. \n ● Act to respond to incidents that breach the imple-\nmented policies. \n Certifying your security management system helps \nensure that you keep the controls and policies constantly \nup to date to meet certification requirements. Getting \ncertified also demonstrates to your partners and custom-\ners that your security management systems will help \nkeep your business running smoothly if network security \nevents were to occur. \n Though the ISO/IEC 27000 Family of Standards \nallows for businesses to optionally get certified, the \nFederal Information Security Management Act (FISMA) \nrequires all government agencies to develop security \nmanagement systems. 
The process of complying with \nFISMA is very similar to the process of implementing \nthe ISO 27000 family of standards. \n" }, { "page_number": 289, "text": "PART | II Managing Information Security\n256\n The first step of FISMA compliance is determin-\ning what constitutes the system you’re trying to protect. \nNext, you need to perform risk assessment to determine \nwhat controls you’ll need to put in place to protect your \nsystem’s assets. The last step is actually implementing \nthe planned controls. FISMA then requires mandatory \nyearly inspections to make sure an organization stays in \ncompliance. \n 2. TRAINING REQUIREMENTS \n Many security management system training courses for \npersonnel are available over the Internet. These courses \nprovide information for employees setting up security \nmanagement systems and for those using the computer \nand network resources of the company that are refer-\nenced in the policies of the security management system. \n Training should also include creating company secu-\nrity policies and creating user roles that are specific to \nthe organization. Planning policies and roles ahead of \ntime will help prevent confusion in the event of a prob-\nlem, since everyone will know what systems they are \nresponsible for. \n 3. PRINCIPLES OF INFORMATION \nSECURITY \n The act of securing information has been around for \nas long as the idea of storing information. Over time, \nthree main objectives of information security have been \ndefined: \n ● Confidentiality . Information is only available to the \npeople or systems that need access to it. This is done \nby encrypting information that only certain people \nare able to decrypt or denying access to those who \ndon’t need it. This might seem simple at first, but \nconfidentiality must be applied to all aspects of a \nsystem. This means preventing access to all backup \nlocations and even log files if those files can contain \nsensitive information. \n ● Integrity . Information can only be added or updated \nby those who need to update that data. Unauthorized \nchanges to data cause it to lose its integrity, and \naccess to the information must be cut off to everyone \nuntil the information’s integrity is restored. Allowing \naccess to compromised data will cause those \nunauthorized changes to propagate to other areas of \nthe system. \n ● Availability . The information needs to be available \nin a timely manner when requested. Access to no \ndata is just as bad as access to compromised data. No \nprocess can be performed if the data on which the \nprocess is based is unavailable. \n 4 . ROLES AND RESPONSIBILITIES OF \nPERSONNEL \n All personnel who come into contact with information \nsystems need to be aware of the risks from improper use \nof those systems. Network administrators need to know \nthe effects of each change they make to their systems \nand how that affects the overall security of that system. \nThey also need to be able to efficiently control access to \nthose systems in cases of emergency, when quick action \nis needed. \n Users of those systems need to understand what risks \ncan be caused by their actions and how to comply with \ncompany policy. \n Several roles should be defined within your \norganization: \n ● Chief information officer/director of information \ntechnology. This person is responsible for creat-\ning and maintaining the security policies for your \norganization. \n ● Network engineer. 
This person is responsible for \nthe physical connection of your network and the \nconnection of your network to the Internet. He or \nshe is also responsible for the routers, firewalls, and \nswitches that connect your organization. \n ● Network administrator. This person handles all other \nnetwork devices within the organization, such as \nservers, workstations, printers, copiers, and wireless \naccess devices. Server and workstation software is \nalso the responsibility of the network administrator. \n ● End users . These people are allowed to operate the \ncomputer in accordance with company policies, to \nperform their daily tasks. They should not have admin-\nistrator access to their PCs or, especially, servers. \n There are also many other specific administrators \nsome companies might require. These are Microsoft \nExchange Administrators, Database Administrators, and \nActive Directory Administrators, to name a few. These \nadministrators should have specific tasks to perform and \na specific scope in which to perform them that is stated \nin the company policy. \n 5. SECURITY POLICIES \n Each organization should develop a company policy \ndetailing the preferred use of company data or a company \napplication. An example of a policy is one that restricts \n" }, { "page_number": 290, "text": "Chapter | 15 Security Management Systems\n257\ntransfer of data on any device that was not purchased \nby the company. This can help prevent unauthorized \naccess to company data. Some companies also prefer \nnot to allow any removable media to be used within the \norganization. \n Security policies should also govern how the com-\nputer is to be used on a day-to-day basis. Very often, \ncomputer users are required to have Internet access to \ndo research pertaining to their jobs. This isn’t hard to \nrestrict in a specialized setting such as a law firm where \nonly a handful of sites contain pertinent information. \nIn other cases it can be nearly impossible to restrict all \nWeb sites except the ones that contain information that \napplies to your organization or your research. In those \ncases it is imperative that you have company policies \nthat dictate what Web sites users are able to visit and \nfor what purposes. When unrestricted Internet access is \nallowed, it is a good practice to use software that will \ntrack the Web sites a user visits, to make sure they are \nnot breaking company policy. \n 6. SECURITY CONTROLS \n There are three types of security controls that need to be \nimplemented for a successful security policy to be put \ninto action. They are physical controls, technical con-\ntrols, and administrative controls. \n Physical controls consist of things such as mag-\nnetic swipe cards, RFID, or biometric security to pre-\nvent access to stored information or network resources. \nPhysical controls also consist of environmental controls \nsuch as HVAC units, power generators, and fire suppres-\nsion systems. \n Technical controls are used to limit access to network \nresources and devices that are used in the organization. \nThey can be individual usernames and passwords used \nto access individual devices or access control lists that \nare part of a network operating system. \n Administrative controls consist of policies created by \nan organization that determine how the organization will \nfunction. These controls guide employees by describing \nhow their jobs are to be done and what resources they \nare supposed to use to do them. \n 7 . 
NETWORK ACCESS \n The first step in developing a security management sys-\ntem is documenting the network resources to which each \ngroup of users should have access to. Users should only \nhave access to the resources that they need to complete \ntheir jobs efficiently. An example of when this will come \nin handy is when the president of the company wants \naccess to every network resource and then his compu-\nter becomes infected with a virus that starts infecting \nall network files. Access Control Lists (ACLs) should \nbe planned ahead of time and then implemented on the \nnetwork to avoid complications with the network ACL \nhierarchy. \n An ACL dictates which users have access to certain \nnetwork resources. Network administrators usually have \naccess to all files and folders on a server. Department \nadministrators will have access to all files used by their \ndepartments. End users will have access to a subset of \nthe department files that they need to perform their jobs. \nACLs are developed by the head of IT for an organiza-\ntion and the network administrator and implemented by \nthe network administrator. \n Implementing ACLs prevents end users from being \nable to access sensitive company information and helps \nthem perform their jobs better by not giving them access \nto information that can act as a distraction. \n Access control can also apply to physical access as \nwell as electronic access. Access to certain networking \ndevices could cause an entire organization to stop func-\ntioning for a period of time, so access to those devices \nshould be carefully controlled. \n 8. RISK ASSESSMENT \n Before security threats can be blocked, all risks must \nfirst be identified and assessed. Risk assessment forms \nthe foundation of a good security management system. \nNetwork administrators must document all aspects of \ntheir network setup. This documentation should provide \ninformation on the network firewall, servers, clients, and \nany other devices physically connected or wirelessly \nconnected to the network. The most time should be spent \ndocumenting how the private computer network will be \nconnected to the Internet for Web browsing and email. \nSome common security risks are: \n ● USB storage devices. Devices that can be used to \ncopy proprietary company data off the internal net-\nwork. Many organizations use software solutions to \ndisable unused USB ports on a system; others physi-\ncally block the connections. \n ● Remote control software. Services such as \nGoToMyPc or Log Me In do not require any special \nrouter or firewall configuration to enable remote \naccess. \n ● Email. Filters should be put in place that prevent \nsensitive company information from simply being \nemailed outside the organization. \n" }, { "page_number": 291, "text": "PART | II Managing Information Security\n258\n ● General Internet use . There is always a possibility \nof downloading a malicious virus from the Internet \nunless all but trusted and necessary Web sites are \nrestricted to internal users. This can be accomplished \nby a content-filtering firewall or Web proxy server. \n ● Laptops . Lost laptops pose a very large security \nrisk, depending on the type on data stored on them. \nPolicies need to be put in place to determine what \ntypes of information can be stored on these devices \nand what actions should be taken if a laptop is lost. \n ● Peer-to-peer applications. 
P2P applications that \nare used to download illegal music and software \ncause a risk because the files that are downloaded \nare not coming from known sources. People who \ndownload an illegal version of an application could \nbe downloading a worm that can affect the entire \nnetwork. \n 9. INCIDENT RESPONSE \n Knowing what to do in case of a security incident is cru-\ncial to being able to track down what happened and how \nto make sure it never happens again. When a security \nincident is identified, it is imperative that steps are taken \nso that forensic evidence is not destroyed in the investi-\ngation process. Forensic evidence includes the content of \nall storage devices attached to the system at the time of \nthe incident and even the contents stored in memory of a \nrunning computer. \n Using an external hard drive enclosure to browse the \ncontent of the hard drive of a compromised system will \ndestroy date and timestamps that a forensic technician \ncan use to tie together various system events. \n When a system breach or security issue has been \ndetected, it is recommended to consult someone familiar \nwith forensically sound investigation methods. If forensic \nmethods are not used, it can lead to evidence not being \nadmissible in court if the incident results in a court case. \n There are specific steps to take with a computer sys-\ntem, depending on the type of incident that occurred. \nUnless a system is causing damage to itself by deleting \nfiles or folders that can be potential evidence, it is best \nto leave the system running for the forensic investigator. \nThe forensic investigator will: \n ● Document what is on the screen by photographing it. \nShe will also photograph the actual computer system \nand all cable connections. \n ● Capture the contents of the system’s memory. This is \ndone using a small utility installed from a removable \ndrive that will create a forensic image of what is in \nthe system’s physical memory. This can be used to \ndocument Trojan activity. If memory is not imaged \nand the computer was used to commit a crime, the \ncomputer’s user can claim that a malicious virus, \nwhich was only running in memory, was responsible. \n ● Turn off the computer. If the system is running a \nWindows workstation operating system such as \nWindows 2000 Workstation or Windows XP, the \nforensic technician will pull the plug on the system. \nIf the system is running a server operating system \nsuch as Windows 2000 Server, Windows 2003 \nServer, or a Linux- or Unix-based operating system \nsuch as Red Hat, Fedora, or Ubuntu, the investigator \nwill properly shut down the system. \n ● Create a forensic image of the system’s hard drive. \nThis is done using imaging software and usually \na hardware write-blocker to connect the system’s \nhard drive to the imaging computer. A hardware \nwrite-blocker is used to prevent the imaging com-\nputer from writing anything at all to the hard drive. \nWindows, by default, will create a recycle bin on a \nnew volume that it is able to mount, which would \ncause the evidence to lose forensic value. \n Investigators are then able to search through the sys-\ntem without making any changes to the original media. \n 10 . 
SUMMARY
 Organizations interested in implementing a comprehensive security management system should start by documenting all business processes that are critical to the organization, analyzing the risks associated with them, and then implementing the controls that can protect those processes from external and internal threats. Internal threats are usually caused not by someone with malicious intent but by someone who accidentally downloads a Trojan or accidentally moves or deletes a directory or critical files. The final step is performing recursive checking of the policies your organization has put in place to adjust for new technologies that need to be protected or new ways that external threats can damage your network. The easiest way to implement security management systems is to use the Plan-Do-Check-Act (PDCA) process to step through the necessary procedures.
 Information Technology Security Management
 Rahul Bhasker, California State University
 Bhushan Kapoor, California State University
 Chapter 16
 Information technology security management can be defined as the processes that, supported by the enabling organizational structure and technology, protect an organization's IT operations and assets against internal and external threats, intentional or otherwise. The principal purpose of IT security management is to ensure confidentiality, integrity, and availability (CIA) of IT systems. Fundamentally, security management is a part of the risk management process and business continuity strategy in an organization.
 1. INFORMATION SECURITY MANAGEMENT STANDARDS
 A range of standards are specified by various industry bodies. Although specific to an industry, these standards can be used by any organization and adapted to its goals. Here we discuss the main organizations that set standards related to information security management.
 Federal Information Security Management Act
 At the U.S. federal level, the National Institute of Standards and Technology (NIST) has specified guidelines for implementing the Federal Information Security Management Act (FISMA). This act aims to provide the standards shown in Figure 16.1.
 The "Federal Information Security Management Framework Recommended by NIST" 2 sidebar describes the risk management framework as specified in FISMA. The activities specified in this framework are paramount in implementing an IT security management plan.
 2 "Federal Information Security Management Act," National Institute of Standards and Technology, http://csrc.nist.gov/groups/SMA/fisma/index.html, 2008 (downloaded 10/20/2008).
 • Standards for categorizing information and information systems by mission impact
 • Standards for minimum security requirements for information and information systems
 • Guidance for selecting appropriate security controls for information systems
 • Guidance for assessing security controls in information systems and determining security control effectiveness
 • Guidance for certifying and accrediting information systems
 FIGURE 16.1 Specifications in the Federal Information Security Management Act.
1 \n 1 “ Federal Information Security Management Act, ” National Institute \nof Standards and Technology, http://csrc.nist.gov/groups/SMA/fi sma/\nindex.html , 2008 (downloaded 10/20/2008). \n" }, { "page_number": 293, "text": "PART | II Managing Information Security\n260\nTechnical Commission, published ISO/IEC 17799:2005. 3 \nThese standards establish guidelines and general prin-\nciples for initiating, implementing, maintaining, and \nimproving information security management in an organ-\nization. The objectives outlined provide general guidance \non the commonly accepted goals of information security \nmanagement. The standards consist of best practices of \ncontrol objectives and controls in the areas of information \nsecurity management shown in Figure 16.2 . \n These objectives and controls are intended to be \nimplemented to meet the requirements identified by a \nrisk assessment. \n Other Organizations Involved in \nStandards \n Other organizations that are involved in information \nsecurity management include The Internet Society 4 and \nthe Information Security Forum. 5 These are professional \nsocieties with members in the thousands. The Internet \nSociety is the organization home for the groups respon-\nsible for Internet infrastructure standards, including the \nInternet Engineering Task Force (IETF) and the Internet \nArchitecture Board (IAB). The Information Security \nForum is a global nonprofit organization of several hun-\ndred leading organizations in financial services, manufac-\nturing, telecommunications, consumer goods, government, \nand other areas. It provides research into best practices \nand advice, summarized in its biannual Standard of Good \nPractice, which incorporates detailed specifications across \nmany areas. \n 2. INFORMATION TECHNOLOGY \nSECURITY ASPECTS \n The various aspects to IT security in an organization that \nmust be considered include: \n ● Security policies and procedures \n ● Security organization structure \n Although specified for the federal government, this frame-\nwork can be used as a guideline by any organization. \n International Standards Organization \n Another influential international body, the International \nStandards Organization and the International Electro \n 3 “ Information technology | Security techniques | Code of practice for \ninformation security management, ISO/IEC 17799, ” The International \nStandards Organization and The International Electro Technical \nCommission, www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_\ndetail.htm?csnumber \u0003 39612 , 2005 (downloaded 10/20/2008). \n 4 “ ISOC’s Standards and Technology Activities, ” Internet Society, \n www.isoc.org/standards , 2008 (downloaded 10/20/2008). \n 5 “ The Standard of Good Practice, ” Information Security Forum, \n https://www.securityforum.org/html/frameset.htm , 2008 (downloaded \n10/20/2008). \n Federal Information Security Management \nFramework Recommended by NIST \n Step 1: Categorize\nIn this step, information systems and internal information \nshould be categorized based on impact. \n Step 2: Select\nUse the categorization in the first step to select an initial \nset of security controls for the information system and \napply tailoring guidance as appropriate, to obtain a start-\ning point for required controls. \n Step 3: Supplement\nAssess the risk and local conditions, including the security \nrequirements, specific threat information, and cost/benefit \nanalyses or special circumstances. 
Supplement the initial \nset of security controls with the supplement analyses. \n Step 4: Document\nThe original set of security controls and the supplements \nshould be documented. \n Step 5: Implement\nThe security controls you identified and supplemented \nshould be implemented in the organization’s information \nsystems. \n Step 6: Assess\nThe security controls should be assessed to determine \nwhether the controls are implemented correctly, are \noperating as intended, and are producing the desired \noutcome with respect to meeting the security require-\nments for the system. \n Step 7: Authorize\nUpon a determination of the risk to organizational oper-\nations, organizational assets, or individuals resulting \nfrom their operation, authorize the information systems. \n Step 8: Monitor\nMonitor and assess selected security controls in the \ninformation system on a continuous basis, including \ndocumenting changes to the system. \n" }, { "page_number": 294, "text": "Chapter | 16 Information Technology Security Management\n261\n ● IT security processes \n – Processes for a business continuity strategy \n – Processes for IT security governance planning \n ● Rules and regulations \n Security Policies and Procedures \n Security policies and procedures constitute the main part \nof any organization’s security. These steps are essential for \nimplementing IT security management: authorizing secu-\nrity roles and responsibilities to various security personnel; \nsetting rules for expected behavior from users and security \nrole players; setting rules for business continuity plans; \nand more. The security policy should be generally agreed \nto by most personnel in the organization and should have \nthe support of the highest-level management. This helps in \nprioritization at the overall organization level. \n The following list, illustrated in Figure 16.3 , is a sam-\nple of some of the issues an organization is expected to \naddress in its policies. 7 Note, however, that the univer-\nsal list is virtually endless, and each organization’s list \nwill consist of issues based on several factors, including \nits size and the value and sensitivity of the information \nit owns or deals with. Some important issues included in \nmost security policies are: \n ● Access control standards . These are standards on \ncontrolling the access to various systems. These \ninclude password change standards. \n ● Accountability. Every user should be responsible \nfor her own accounts. This implies that any \nactivity under a particular user ID should be the \nresponsibility of the user whose ID it is. \n ● Audit trails. There should be an audit trail recorded \nof all the activities under a user ID. For example, \nall the login, log-out activities for 30 days should \nbe recorded. Additionally, all unauthorized attempts \nto access, read, write, and delete data and execute \nprograms should be logged. \n ● Backups. There should be a clearly defined backup \npolicy. Any backups should be kept in a secure \narea. A clear policy on the frequency of the backups \nand their recovery should be communicated to the \nappropriate personnel. \n ● Disposal of media. A clear policy should be defined \nregarding the disposal of media. This includes a \npolicy on which hardware and storage media, such \nas disk drives, diskettes, and CD-ROMs, are to be \ndestroyed. The level and method of destruction of \nbusiness-critical information that is no longer needed \nshould be well defined and documented. 
Personnel \nshould be trained regularly on the principles to follow. \nSecurity Policy\nOrganization of Information Security\nAsset Management\nHuman Resources Security\nPhysical and Environmental Security\nCommunication and Operations Management\nAccess Control\nInformation Systems Acquisition, Development and Maintenance\nInformation Security Incident Management\nBusiness Continuity Management\nCompliance\n FIGURE 16.2 International Standards Organization best-practice areas. 6 \nAccess Control Standards\nAccountability\nAudit Trails\nBackups\nDisposal of Media\nDisposal of Printed Matter\nInformation Ownership\nManagers Responsibility\nEquipment\nCommunication\nProcedures and Processes at Work\n FIGURE 16.3 Security aspects an organization is expected to address \nin its policies. \n 6 “ Information technology | Security techniques | Code of practice for \ninformation security management, ISO/IEC 17799, ” The International \nStandards Organization and The International Electro Technical \nCommission, www.iso.org/iso (downloaded 10/20/2008). \n 7 “ Information technology | Security techniques | Code of practice for \ninformation security management, ISO/IEC 17799, ” The International \nStandards Organization and The International Electro Technical \nCommission, www.iso.org/iso (downloaded 10/20/2008). \n" }, { "page_number": 295, "text": "PART | II Managing Information Security\n262\n ● Disposal of printed matter. Guidelines as to the \ndisposal of printed matter should be specified \nand implemented throughout the organization. In \nparticular, business-critical materials should be \ndisposed properly and securely. \n ● Information ownership. All the data and information \navailable in the organization should have an assigned \nowner. The owner should be responsible for deciding \non access rights to the information for various \npersonnel. \n ● Managers ’ responsibility . Managers at all levels \nshould ensure that their staff understands the security \npolicy and adheres to it continuously. They should be \nheld responsible for recording any deviations from \nthe core policy. \n ● Equipment. An organization should have specific \nguidelines about modems, portable storage, and other \ndevices. These devices should be kept in a secured \nphysical environment. \n ● Communication. Well-defined policy guidelines \nare needed for communication using corporate \ninformation systems. These include communications \nvia emails, instant messaging, and so on. \n ● Work procedures and processes. Employees of \nan organization should be trained to secure their \nworkstations when not in use. The policy can \nimpose a procedure of logging off before leaving \na workstation. It can also include quarantining any \ndevice (such as a laptop) brought from outside the \norganization before plugging it into the network. \n Security Organization Structure \n Various security-related roles need to be maintained and \nwell defined. These roles and their brief descriptions are \ndescribed here. 8 \n End User \n End users have a responsibility to protect information \nassets on a daily basis through adherence to the security \npolicies that have been set and communicated. End-user \ncompliance with security policies is key to maintaining \ninformation security in an organization because this group \nrepresents the most consistent users of the organization’s \ninformation. \n Executive Management \n Top management plays an important role in protect-\ning the information assets in an organization. 
Executive \nmanagement can support the goal of IT security by con-\nveying the extent to which management supports secu-\nrity goals and priorities. Members of the management \nteam should be aware of the risks that they are accept-\ning for the organization through their decisions or fail-\nure to make decisions. There are various specific areas \non which senior management should focus, but some that \nare specifically appropriate are user training, inculcating \nand encouraging a security culture, and identifying the \ncorrect policies for IT security governance. \n Security Officer \n The security officer “ directs, coordinates, plans, and \norganizes information security activities throughout the \norganization. ” 9 \n Data/Information Owners \n Every organization should have clearly identified data and \ninformation owners. These executives or managers should \nreview the classification and access security policies and \nprocedures. They should also be responsible for periodic \naudit of the information and data and its continuous secu-\nrity. They may appoint a data custodian in case the work \nrequired to secure the information and data is extensive \nand needs more than one person to complete. \n Information System Auditor \n Information system auditors are responsible for ensuring \nthat the information security policies and procedures have \nbeen adhered to. They are also responsible for establish-\ning the baseline, architecture, management direction, and \ncompliance on a continuous basis. They are an essential \npart of unbiased information about the state of informa-\ntion security in the organization. \n Information Technology Personnel \n IT personnel are responsible for building IT security \ncontrols into the design and implementations of the sys-\ntems. They are also responsible for testing these controls \nperiodically or whenever there is a change. They work \n 8 Tipton and Krause, “ Information Security Governance, ” Information \nSecurity Management Handbook , Auerbach Publications, 2008. \n 9 Tipton and Krause, “ Information Security Governance, ” Information \nSecurity Management Handbook , Auerbach Publications, 2008. \n" }, { "page_number": 296, "text": "Chapter | 16 Information Technology Security Management\n263\n with the executives and other managers to ensure com-\npliance in all the systems under their responsibility. \n Systems Administrator \n A systems administrator is responsible for configuring \nthe hardware and the operating system to ensure that \nthe information systems and their contents are available \nfor business as and when needed. These adminstrators \nare placed ideally in an organization to ensure security \nof these assets. They play a key role because they own \naccess to the most vulnerable information assets of an \norganization. \n IT Security Processes \n To achieve effective IT security requires processes related \nto security management. These processes include busi-\nness continuity strategy, processes related to IT secu-\nrity governance planning, and IT security management \nimplementation. \n Processes for a Business Continuity Strategy \n As is the case with any strategy, the business continuity \nstrategy depends on a commitment from senior manage-\nment. This can include some of the analysis that is obtained \nby business impact assessment/risk analysis focused on \nbusiness value drivers. These business value drivers are \ndetermined by the main stakeholders from the organiza-\ntions. 
Examples of these value drivers are customer service \nand intellectual property protection. 10 \n The Disaster Recovery Institute International (DRII) \nassociates eight tasks with the contingency planning \nprocess. 11 These are as follows: \n ● Business impact analysis, to analyze the impact of \noutage on critical business function operations. \n ● Risk assessment, to assess the risks to the current \ninfrastructure and the incorporation of safeguards to \nreduce the likelihood and impact of disasters. \n ● Recovery strategy identification, to develop a \nvariety of disaster scenarios and identify recovery \nstrategies. \n ● Recovery strategy selection, to select the appropriate \nrecovery strategies based on the perceived threats \nand the time needed to recover. \n ● Contingency plan development, to document the \nprocesses, equipment, and facilities required to \nrestore the IT assets. \n ● User training, to develop training programs to enable \nall affected users to perform their tasks. \n ● Plan verification, for accuracy and adequacy. \n ● Plan maintenance, for continuous upkeep of the plan \nas needs change. \n Processes for IT Security Governance \nPlanning \n IT security governance planning includes prioritization \nas its major function. This helps in utilizing the limited \nsources of the organization. Determining priorities among \nthe potential conflicting interests is the main focus of \nthese processes. This includes budget setting, resource \nallocation, and, most important, the political process \nneeded to prioritize in an organization. \n Rules and Regulations \n An organization is influenced by rules and regulations \nthat influence its business. In a business environment \nmarked by globalization, organizations have to be aware \nof both national and international rules and regulations. \nFrom an information security management perspective, \nvarious rules and regulations must be considered. These \nare listed in Figure 16.4 . \n We give more details on some rules and regulations \nhere: \n ● The Health Insurance Portability and Accountability \nAct (HIPAA) requires the adoption of national stand-\nards for electronic healthcare transactions and national \nidentifiers for providers, health insurance plans, and \nemployers. Healthcare providers have to protect the \npersonal medical information of the customer to com-\nply with this law. Similarly, the Gramm-Leach-Bliley \nAct of 1999 (GLBA), also known as the Financial \nServices Modernization Act of 1999, requires finan-\ncial companies to protect the information about indi-\nviduals that it collects during transactions. \n ● The Sarbanes-Oxley Act of 2002 (SOX). This \nlaw requires companies to protect and audit their \nfinancial data. The chief information officer and \nother senior executives are held responsible for \nreporting and auditing an organization’s financial \ninformation to regulatory and other agencies. \n 10 C. R. Jackson, “ Developing Realistic Continuity Planning Process \nMetrics, ” Information Security Management Handbook , Auerbach \nPublications, 2008. \n 11 “ Contingency Planning Process, ” DRII – The Institute for Continuity \nManagement, https://www.drii.org/professional_prac/profprac_appen-\ndix.html#BUSINESS_CONTINUITY_PLANNING_INFORMATION , \n2008 (downloaded 10/24/2008). 
\n" }, { "page_number": 297, "text": "PART | II Managing Information Security\n264\n ● State Security Breach Notification Laws (California \nand many others) require businesses, nonprofits, \nand state institutions to notify consumers when \nunencrypted “ personal information ” might have been \ncompromised, lost, or stolen. \n ● The Personal Information Protection and Electronics \nDocument Act (PIPEDA) supports and promotes \nelectronic commerce by protecting personal \ninformation that is collected, used, or disclosed \nin certain circumstances, by providing for the use \nof electronic means to communicate or record \ninformation or transactions, and by amending the \nCanada Evidence Act, the Statutory Instruments Act, \nand the Statute Revision Act that is in fact the case. \n ● The Computer Fraud and Abuse Act, or CFAA (also \nknown as Fraud and Related Activity in Connection \nwith Computers), is a U.S. law passed in 1986 \nand intended to reduce computer crimes. It was \namended in 1994, 1996, and 2001 by the U.S.A. \nPATRIOT Act. 12 \n The following sidebar, “ Computer Fraud and Abuse \nAct Criminal Offences, ” lists criminal offences covered \nunder this law. 13 \n 12 “ Fraud and Related Activities in Relation to the Computers, ” U.S. \nCode Collection, Cornell University Law School, www4.law.cornell.\nedu/uscode/18/1030.html , 2008 (downloaded 10/24/2008). \n 13 “ Fraud and Related Activities in Relation to the Computers, ” U.S. \nCode Collection, Cornell University Law School, www4.law.cornell.\nedu/uscode/18/1030.html , 2008 (downloaded 10/24/2008). \nHealth Insurance Portability and Accountability Act (HIPAA)\nGramm-Leach-Bliley Act\nSarbanes-Oxley Act of 2002\nSecurity Breach Notification Laws\nPersonal Information Protection and Electronic Document Act (PIPEDA)\nComputer Fraud and Abuse Act\nUSA PATRIOT Act\n FIGURE 16.4 Rules and regulations related to information security management. \n Computer Fraud and Abuse Act Criminal Offences \n (a) Whoever — \n (1) having knowingly accessed a computer without \nauthorization or exceeding authorized access, and \nby means of such conduct having obtained informa-\ntion that has been determined by the United States \nGovernment pursuant to an Executive order or statute \nto require protection against unauthorized disclosure \nfor reasons of national defense or foreign relations, or \nany restricted data, as defined in paragraph y. of sec-\ntion 11 of the Atomic Energy Act of 1954, with reason \nto believe that such information so obtained could be \nused to the injury of the United States, or to the advan-\ntage of any foreign nation willfully communicates, \ndelivers, transmits, or causes to be communicated, \ndelivered, or transmitted, or attempts to communicate, \ndeliver, transmit or cause to be communicated, deliv-\nered, or transmitted the same to any person not enti-\ntled to receive it, or willfully retains the same and fails \nto deliver it to the officer or employee of the United \nStates entitled to receive it; \n (2) intentionally accesses a computer without authori-\nzation or exceeds authorized access, and thereby \nobtains — \n (A) information contained in a financial record of a finan-\ncial institution, or of a card issuer as defined in section \n1602 (n) of title 15, or contained in a file of a con-\nsumer reporting agency on a consumer, as such terms \nare defined in the Fair Credit Reporting Act (15 U.S.C. 
\n1681 et seq.); \n (B) information from any department or agency of the \nUnited States; or \n (C) information from any protected computer if the con-\nduct involved an interstate or foreign communication; \n (3) intentionally, without authorization to access any non-\npublic computer of a department or agency of the United \nStates, accesses such a computer of that department or \nagency that is exclusively for the use of the Government \nof the United States or, in the case of a computer not \nexclusively for such use, is used by or for the Government \nof the United States and such conduct affects that use by \nor for the Government of the United States; \n" }, { "page_number": 298, "text": "Chapter | 16 Information Technology Security Management\n265\n (4) knowingly and with intent to defraud, accesses a pro-\ntected computer without authorization, or exceeds \nauthorized access, and by means of such conduct fur-\nthers the intended fraud and obtains anything of value, \nunless the object of the fraud and the thing obtained \nconsists only of the use of the computer and the value \nof such use is not more than $5,000 in any 1-year \nperiod; \n (5) \n (A) \n (i) knowingly causes the transmission of a program, infor-\nmation, code, or command, and as a result of such \nconduct, intentionally causes damage without authori-\nzation, to a protected computer; \n (ii) intentionally accesses a protected computer without \nauthorization, and as a result of such conduct, reck-\nlessly causes damage; or \n (iii) intentionally accesses a protected computer without \nauthorization, and as a result of such conduct, causes \ndamage; and \n (B) by conduct described in clause (i), (ii), or (iii) of sub-\nparagraph (A), caused (or, in the case of an attempted \noffense, would, if completed, have caused) — \n (i) loss to 1 or more persons during any 1-year period (and, \nfor purposes of an investigation, prosecution, or other pro-\nceeding brought by the United States only, loss resulting \nfrom a related course of conduct affecting 1 or more other \nprotected computers) aggregating at least $5,000 in value; \n (ii) the modification or impairment, or potential modification \nor impairment, of the medical examination, diagnosis, \ntreatment, or care of 1 or more individuals; \n (iii) physical injury to any person; \n (iv) a threat to public health or safety; or \n (v) damage affecting a computer system used by or for a \ngovernment entity in furtherance of the administration of \njustice, national defense, or national security; \n (6) knowingly and with intent to defraud traffics (as \ndefined in section 1029) in any password or simi-\nlar information through which a computer may be \naccessed without authorization, if — \n (A) such trafficking affects interstate or foreign com-\nmerce; or \n (B) such computer is used by or for the Government of the \nUnited States; [1] \n (7) with intent to extort from any person any money or \nother thing of value, transmits in interstate or foreign \ncommerce any communication containing any threat \nto cause damage to a protected computer; \n \n shall be punished as provided in subsection (c) of this \nsection. \n (b) Whoever attempts to commit an offense under subsec-\ntion (a) of this section shall be punished as provided in \nsubsection (c) of this section. 
\n (c) The punishment for an offense under subsection (a) or \n(b) of this section is — \n (1) \n (A) a fine under this title or imprisonment for not more \nthan ten years, or both, in the case of an offense under \nsubsection (a)(1) of this section which does not occur \nafter a conviction for another offense under this sec-\ntion, or an attempt to commit an offense punishable \nunder this subparagraph; and \n (B) a fine under this title or imprisonment for not more \nthan twenty years, or both, in the case of an offense \nunder subsection (a)(1) of this section which occurs \nafter a conviction for another offense under this sec-\ntion, or an attempt to commit an offense punishable \nunder this subparagraph; \n (2) \n (A) except as provided in subparagraph (B), a fine under \nthis title or imprisonment for not more than one year, or \nboth, in the case of an offense under subsection (a)(2), \n(a)(3), (a)(5)(A)(iii), or (a)(6) of this section which does not \noccur after a conviction for another offense under this \nsection, or an attempt to commit an offense punishable \nunder this subparagraph; \n (B) a fine under this title or imprisonment for not more \nthan 5 years, or both, in the case of an offense under \nsubsection (a)(2), or an attempt to commit an offense \npunishable under this subparagraph, if — \n (i) the offense was committed for purposes of commercial \nadvantage or private financial gain; \n (ii) the offense was committed in furtherance of any \ncriminal or tortious act in violation of the Constitution \nor laws of the United States or of any State; or \n (iii) the value of the information obtained exceeds $5,000; \nand \n (C) a fine under this title or imprisonment for not more \nthan ten years, or both, in the case of an offense under \nsubsection (a)(2), (a)(3) or (a)(6) of this section which \noccurs after a conviction for another offense under this \nsection, or an attempt to commit an offense punishable \nunder this subparagraph; \n (3) \n (A) a fine under this title or imprisonment for not more than \nfive years, or both, in the case of an offense under sub-\nsection (a)(4) or (a)(7) of this section which does not \noccur after a conviction for another offense under this \nsection, or an attempt to commit an offense punishable \nunder this subparagraph; and \n (B) a fine under this title or imprisonment for not more than \nten years, or both, in the case of an offense under sub-\nsection (a)(4), (a)(5)(A)(iii), or (a)(7) of this section which \noccurs after a conviction for another offense under this \nsection, or an attempt to commit an offense punishable \nunder this subparagraph; \n" }, { "page_number": 299, "text": "PART | II Managing Information Security\n266\n (4) \n (A) except as provided in paragraph (5), a fine under this \ntitle, imprisonment for not more than 10 years, or both, \nin the case of an offense under subsection (a)(5)(A)(i), or \nan attempt to commit an offense punishable under that \nsubsection; \n (B) a fine under this title, imprisonment for not more than 5 \nyears, or both, in the case of an offense under subsection \n(a)(5)(A)(ii), or an attempt to commit an offense punish-\nable under that subsection; \n (C) except as provided in paragraph (5), a fine under this \ntitle, imprisonment for not more than 20 years, or both, \nin the case of an offense under subsection (a)(5)(A)(i) or \n(a)(5)(A)(ii), or an attempt to commit an offense punish-\nable under either subsection, that occurs after a convic-\ntion for another offense under this 
section; and \n (5) \n (A) if the offender knowingly or recklessly causes or \nattempts to cause serious bodily injury from conduct in \nviolation of subsection (a)(5)(A)(i), a fine under this title \nor imprisonment for not more than 20 years, or both; \nand \n (B) if the offender knowingly or recklessly causes or \nattempts to cause death from conduct in violation of \nsubsection (a)(5)(A)(i), a fine under this title or impris-\nonment for any term of years or for life, or both. \n (d) \n (1) The United States Secret Service shall, in addition to any \nother agency having such authority, have the authority \nto investigate offenses under this section. \n (2) The Federal Bureau of Investigation shall have \nprimary authority to investigate offenses under subsec-\ntion (a)(1) for any cases involving espionage, foreign \ncounterintelligence, information protected against \nunauthorized disclosure for reasons of national defense \nor foreign relations, or Restricted Data (as that term \nis defined in section 11y of the Atomic Energy Act of \n1954 (42 U.S.C. 2014 (y)), except for offenses affecting \nthe duties of the United States Secret Service pursuant \nto section 3056 (a) of this title. \n (3) Such authority shall be exercised in accordance with \nan agreement which shall be entered into by the \nSecretary of the Treasury and the Attorney General. \n (e) As used in this section — \n (1) the term “ computer ” means an electronic, magnetic, \noptical, electrochemical, or other high speed data \nprocessing device performing logical, arithmetic, or \nstorage functions, and includes any data storage facility \nor communications facility directly related to or oper-\nating in conjunction with such device, but such term \ndoes not include an automated typewriter or typeset-\nter, a portable hand held calculator, or other similar \ndevice; \n (2) the term “ protected computer ” means a computer — \n (A) exclusively for the use of a financial institution or the \nUnited States Government, or, in the case of a compu-\nter not exclusively for such use, used by or for a finan-\ncial institution or the United States Government and \nthe conduct constituting the offense affects that use by \nor for the financial institution or the Government; or \n (B) which is used in interstate or foreign commerce or \ncommunication, including a computer located outside \nthe United States that is used in a manner that affects \ninterstate or foreign commerce or communication of \nthe United States; \n (3) the term “ State ” includes the District of Columbia, the \nCommonwealth of Puerto Rico, and any other com-\nmonwealth, possession or territory of the United States; \n (4) the term “ financial institution ” means — \n (A) an institution, with deposits insured by the Federal \nDeposit Insurance Corporation; \n (B) the Federal Reserve or a member of the Federal Reserve \nincluding any Federal Reserve Bank; \n (C) a credit union with accounts insured by the National \nCredit Union Administration; \n (D) a member of the Federal home loan bank system and \nany home loan bank; \n (E) any institution of the Farm Credit System under the \nFarm Credit Act of 1971; \n (F) a broker-dealer registered with the Securities and \nExchange Commission pursuant to section 15 of the \nSecurities Exchange Act of 1934; \n (G) the Securities Investor Protection Corporation; \n (H) a branch or agency of a foreign bank (as such terms \nare defined in paragraphs (1) and (3) of section 1(b) of \nthe International Banking Act of 1978); and 
\n (I) an organization operating under section 25 or section \n25(a) [2] of the Federal Reserve Act; \n (5) the term “ financial record ” means information derived \nfrom any record held by a financial institution per-\ntaining to a customer’s relationship with the financial \ninstitution; \n (6) the term “ exceeds authorized access ” means to access \na computer with authorization and to use such access \nto obtain or alter information in the computer that the \naccesser is not entitled so to obtain or alter; \n (7) the term “ department of the United States ” means the \nlegislative or judicial branch of the Government or one \nof the executive departments enumerated in section \n101 of title 5; \n (8) the term “ damage ” means any impairment to the \nintegrity or availability of data, a program, a system, or \ninformation; \n (9) the term “ government entity ” includes the Government \nof the United States, any State or political subdivision \nof the United States, any foreign country, and any state, \n" }, { "page_number": 300, "text": "Chapter | 16 Information Technology Security Management\n267\nprovince, municipality, or other political subdivision of \na foreign country; \n (10) the term “ conviction ” shall include a conviction under the \nlaw of any State for a crime punishable by imprisonment \nfor more than 1 year, an element of which is unauthorized \naccess, or exceeding authorized access, to a computer; \n (11) the term “ loss ” means any reasonable cost to any vic-\ntim, including the cost of responding to an offense, \nconducting a damage assessment, and restoring the \ndata, program, system, or information to its condi-\ntion prior to the offense, and any revenue lost, cost \nincurred, or other consequential damages incurred \nbecause of interruption of service; and \n (12) the term “ person ” means any individual, firm, corpora-\ntion, educational institution, financial institution, gov-\nernmental entity, or legal or other entity. \n (f) This section does not prohibit any lawfully authorized \ninvestigative, protective, or intelligence activity of a \nlaw enforcement agency of the United States, a State, \nor a political subdivision of a State, or of an intelli-\ngence agency of the United States. \n (g) Any person who suffers damage or loss by reason of \na violation of this section may maintain a civil action \nagainst the violator to obtain compensatory damages \nand injunctive relief or other equitable relief. A civil \naction for a violation of this section may be brought \nonly if the conduct involves 1 of the factors set forth \nin clause (i), (ii), (iii), (iv), or (v) of subsection (a)(5)(B). \nDamages for a violation involving only conduct \ndescribed in subsection (a)(5)(B)(i) are limited to eco-\nnomic damages. No action may be brought under this \nsubsection unless such action is begun within 2 years \nof the date of the act complained of or the date of the \ndiscovery of the damage. No action may be brought \nunder this subsection for the negligent design or manu-\nfacture of computer hardware, computer software, or \nfirmware. \n (h) The Attorney General and the Secretary of the Treasury \nshall report to the Congress annually, during the first 3 \nyears following the date of the enactment of this sub-\nsection, concerning investigations and prosecutions \nunder subsection (a)(5). \n The U.S.A. 
PATRIOT Act of 2001 increased the \nscope and penalties of this act by 14 : \n ● Raising the maximum penalty for violations to \nten years (from five) for a first offense and 20 years \n(from ten) for a second offense \n ● Ensuring that violators only need to intend to cause \ndamage generally, not intend to cause damage \nor other specified harm over the $5000 statutory \ndamage threshold \n ● Allowing aggregation of damages to different \ncomputers over a year to reach the $5000 threshold \n ● Enhancing punishment for violations involving \nany (not just $5000 in) damage to a government \ncomputer involved in criminal justice or the \nmilitary \n ● Including damage to foreign computers involved in \nU.S. interstate commerce \n ● Including state law offenses as priors for sentencing; \n ● Expanding the definition of loss to expressly include \ntime spent investigating \n ● Responding (this is why it is important for damage \nassessment and restoration) \n These details are summarized in Figure 16.5 . \n The PATRIOT Act of 2001 came under criticism \nfor a number of reasons. There are fears that the Act is \nan invasion of privacy and infringement on freedom of \nspeech. Critics also feel that the Act unfairly expands \nthe powers of the executive branch and strips away many \ncrucial checks and balances. \n The original act has a sunset clause that would have \ncaused many of the law’s provisions to expire in 2005. \nThe Act was reauthorized in early 2006 with some new \nsafeguards and with expiration dates for its two most \ncontroversial powers, which authorize roving wiretaps \nand secret searches of records. \n 3. CONCLUSION \n Information technology security management consists \nof processes to enable organizational structure and \n 14 “ Computer Fraud and Abuse Act, ” Wikipedia, http://en.wikipedia.\norg/wiki/Computer_Fraud_and_Abuse_Act , \n2008 \n(downloaded \n10/24/2008). \nMaximum Penalty\nExtent of Damage\nAggregation of Damage\nEnhancement of Punishment\nDamage to Foreign Computers\nState Law Offenses\nExpanding the Definition of Loss\nResponse\n FIGURE 16.5 U.S.A. PATRIOT Act increase in scope and penalties. \n" }, { "page_number": 301, "text": "PART | II Managing Information Security\n268\n technology to protect an organization’s IT operations and \nassets against internal and external threats, intentional \nor otherwise. These processes are developed to ensure \nconfidentiality, integrity, and availability of IT systems. \nThere are various aspects to the IT security in an organi-\nzation that need to be considered. These include security \npolicies and procedures, security organization structure, \nIT security processes, and rules and regulations. \n Security policies and procedures are essential for \nimplementing IT security management: authorizing secu-\nrity roles and responsibilities to various security personnel; \nsetting rules for expected behavior from users and security \nrole players; setting rules for business continuity plans; and \nmore. The security policy should be generally agreed to \nby most personnel in the organization and have support \nfrom the highest-level management. This helps in priori-\ntization at the overall organization level. The IT security \nprocesses are essentially part of an organization’s risk \nmanagement processes and business continuity strate-\ngies. In a business environment marked by globaliza-\ntion, organizations have to be aware of both national and \ninternational rules and regulations. 
Their information \nsecurity and privacy policies must conform to these rules \nand regulations. \n" }, { "page_number": 302, "text": "269\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Identity Management \n Dr. Jean-Marc Seigneur \n University of Geneva \n Dr. Tewfiq El Maliki \n University of Applied Sciences of Geneva \n Chapter 17 \n Digital identity lays the groundwork necessary to guaran-\ntee that the Internet infrastructure is strong enough to meet \nbasic expectations for security and privacy. “ Anywhere, \nanytime ” mobile computing is becoming real; in this \nambient intelligent world, the choice of identity manage-\nment mechanisms will have a large impact on social, cul-\ntural, business, and political aspects of our lives. Privacy \nis a human need, and all of society would suffer from its \ndemise; people have hectic lives and cannot spend all their \ntime administering their digital identities. The choice of \nidentity mechanisms will change the social, cultural, busi-\nness, and political environment. Furthermore, identity \nmanagement is also a promising topic for modern society. \n Recent technological advances in user identity man-\nagement have highlighted the paradigm of federated iden-\ntity management and user-centric identity management as \nimproved alternatives. The first empowers the management \nof identity; the second allows users to actively manage their \nidentity information and profiles. It also allows providers \nto easily deal with privacy aspects regarding user expecta-\ntions. This problem has been tackled with some trends and \nemerging solutions, as described in this chapter. First, we \nprovide an overview of identity management from Identity \n1.0 to 2.0, with emphasis on user-centric approaches. We \nsurvey how the requirements for user-centric identity man-\nagement and their associated technologies have evolved, \nwith emphasis on federated approaches and user-centricity. \nSecond, we focus on related standards XRI and LID, \nissued from the Yadis project, as well as platforms, mainly \nID-WSF, OpenID, CardSpace, Sxip, and Higgins. Finally, \nwe treat identity management in the field of mobility and \nfocus on the future of mobile identity management. \n 1. INTRODUCTION \n Anytime, anywhere mobile computing is becoming easier, \nmore attractive, and even cost-effective: Mobile devices \ncarried by roaming users offer more and more computing \npower and functionalities, including sensing and provid-\ning location awareness. 1 Many computing devices are also \ndeployed in the environments where the users evolve — for \nexample, intelligent home appliances or RFID-enabled \nfabrics. In this ambient intelligent world, the choices of \nidentity mechanisms will have a large impact on social, cul-\ntural, business and political aspects. Moreover, the Internet \nwill generate more complicated privacy problems. 2 \n Identity has become a burden in the online world. \nWhen it is stolen, it engenders a massive fraud, principally \nin online services, which generates a lack of confidence in \ndoing business with providers and frustration for users. \n Therefore, the whole of society would suffer from \nthe demise of privacy, which is a real human need. 
\nBecause people have hectic lives and cannot spend their \ntime administering their digital identities, we need con-\nsistent identity management platforms and technologies \nenabling usability and scalability, among other things. 3 \nIn this chapter, we survey how the requirements have \nevolved for mobile user-centric identity management \nand its associated technologies. \n 2. EVOLUTION OF IDENTITY \nMANAGEMENT REQUIREMENTS \n First, we define what we mean by a digital identity. \nLater, we summarize all the various requirements and \ndetail the most important ones: namely, privacy, usabil-\nity, and mobility. \n 1 G. Roussos and U. Patel, “ Mobile identity management: An enacted \nview, ” Birkbeck College, University of London, 2003. \n 2 A. Westin, Privacy and Freedom , Athenaeum, 1967. \n 3 J. Madelin et al., BT report, “ Comprehensive identity management \nbalancing cost, risk and convenience in identity management, ” 2007. \n" }, { "page_number": 303, "text": "PART | II Managing Information Security\n270\n Digital Identity Definition \n A digital identity is a representation of an entity in a spe-\ncific context. 4 For a long time, a digital identity was con-\nsidered the equivalent of a user’s real-life identity that \nindicates some of our attributes: \n ● Who we are: Name, citizenship, birthday \n ● What we like: Our favorite reading, food, clothes \n ● What our reputation is: Whether we are honest, with-\nout any problems \n A digital identity was seen as an extended identity \ncard or passport containing almost the same information. \n However, recent work 5 has argued that the link \nbetween the real-world identity and a digital identity is \nnot always mandatory. For example, on eBay, what mat-\nters is to know whether the seller’s digital reputation has \nbeen remarkable and that the seller can prove that she \ncontrols that digital identity. It is less important to know \nthat her real-world national identity is the Bermuda \nIslands, where suing anybody is rather unlikely to suc-\nceed. It should be underlined that in a major identity \nmanagement initiative, 6 a digital identity is defined as \n “ the distinguishing character or personality of an indi-\nvidual. An identity consists of traits, attributes, and \npreferences upon which one may receive personalized \nservices. Such services could exist online, on mobile \ndevices at work, or in many other places ” — that is, with-\nout mentioning a mandatory link to the real-world iden-\ntity behind the digital identity. \n The combination of virtual world with ubiquitous \nconnectivity has changed the physical constraints to an \nentirely new set of requirements as associated security \nissues such as phishing, spam, and identity theft have \nemerged. They are aggravated by the mobility of the user \nand the temporary anonymity of cyberrelationships. We \nare going toward a new, truly virtual world, always with \nthe implication of humanity. Therefore, we are facing the \nproblem of determining the identity of our interlocutors \nand the accuracy of their claims. Simply using strong \nauthentication will not resolve all these security issues. \n Digital identity management is a key issue that will \nensure not only service and functionality expectations \nbut also security and privacy. 
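To make the notion of a context-specific digital identity concrete, the following short Python sketch models an entity that holds several independent identities, each consisting of attributes and a reputation built up in its own context. The class names, attribute keys, and example values are illustrative assumptions only; they are not part of any standard or of the platforms discussed later in this chapter.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class DigitalIdentity:
    """One context-specific representation of an entity (for example, an auction seller)."""
    context: str                                                # where this identity is meaningful
    attributes: Dict[str, str] = field(default_factory=dict)    # who we are, what we like
    reputation: float = 0.0                                     # reputation built up in that context

@dataclass
class Entity:
    """A real-world entity that may hold several digital identities."""
    identities: Dict[str, DigitalIdentity] = field(default_factory=dict)

    def identity_for(self, context: str) -> DigitalIdentity:
        # Create an empty identity on first use; no link to a real-world identity is required.
        return self.identities.setdefault(context, DigitalIdentity(context=context))

# The same entity holds an auction identity and a gaming identity.
alice = Entity()
alice.identity_for("auction-site").attributes["nickname"] = "honest_seller_42"
alice.identity_for("auction-site").reputation = 4.9
alice.identity_for("game-platform").attributes["nickname"] = "dragonrider"

Nothing in the sketch ties either identity to Alice's real-world identity; as the auction example above suggests, what matters is proof of control over the identity and the reputation attached to it.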
\n \n Identity Management Overview \n A model of identity 7 can be described as follows: \n ● A user, who wants to access a service \n ● Identity Provider (IdP), the issuer of user identity \n ● Service Provider (SP), the relying party imposing an identity check \n ● Identity (Id), a set of user attributes \n ● Personal Authentication Device (PAD), which holds various identifiers and credentials and could be used for mobility \n Figure 17.1 lists the main components of identity management. \n The relationship among entities, identities, and identifiers is shown in Figure 17.2 , which illustrates that an entity, such as a user, may have multiple identities, and each identity may consist of multiple attributes that can be unique or non-unique identifiers. \n Identity management refers to “ the process of representing, using, maintaining, deprovisioning and authenticating entities as digital identities in computer networks. ” 8 \n Authentication is the process of verifying claims about holding specific identities. A failure at this stage will threaten the validity of the entire system. The technology is constantly finding stronger authentication using claims based on: \n ● Something you know (password, PIN) \n ● Something you have (one-time password) \n FIGURE 17.1 Identity management main components. \n FIGURE 17.2 Relationship among identities, identifiers, and entity. \n 4 T. Miyata et al., “ A survey on identity management protocols and standards, ” IEICE TRANS. INF & SYST, 2006. \n 5 J.-M. Seigneur, “ Trust, security and privacy in global computing, ” Ph.D. thesis, Trinity College Dublin, 2005. \n 6 Introduction to the Liberty Alliance Identity Architecture, Rev. 1.0, March 2003. \n 7 A. B. Spantzel et al., “ User centricity: A taxonomy and open issues, ” IBM Zurich Research Laboratory, 2006. \n 8 Phillip J. Windley, “ Unmasking identity management architecture: digital identity, ” O’Reilly, August 2005. \n" }, { "page_number": 304, "text": "Chapter | 17 Identity Management\n271\n ● Something you are (your voice, face, fingerprint [biometrics]) \n ● Your position \n ● Some combination of the four \n The BT report 9 has highlighted some interesting points to meet the challenges of identity theft and fraud: \n ● Developing risk calculation and assessment methods \n ● Monitoring user behavior to calculate risk \n ● Building trust and value with the user or consumer \n ● Engaging the cooperation of the user or consumer with transparency and without complexity or shifting the liability to the consumer \n ● Taking a staged approach to authentication deployment and process challenges using more advanced technologies \n Digital identity should manage three connected vertices: usability, cost, and risk, as illustrated in Figure 17.3 . \n The user should be aware of the risk she is facing if her device’s or software’s security is compromised. Usability is the second aspect that should be guaranteed to the user; otherwise the user will find the system difficult to use, which could itself be the source of a security problem. Indeed, many users, when flooded with passwords to remember, write them down and hide them in a “ secret ” place under their keyboard. Furthermore, the difficulty of deploying and managing a large number of identities discourages the use of an identity management system.
The cost of a system should \nbe well studied and balanced related to risk and usability. \nMany systems such as one-time password tokens are not \nwidely used because they are too costly for widespread \ndeployment in large institutions. Traditionally, identity \nmanagement was seen as being service provider-centric \nbecause it was designed to fulfill the requirements of serv-\nice providers, such as cost effectiveness and scalability. \nUsers were neglected in many aspects because they were \nforced to memorize difficult or too many passwords. \n Identity management systems are elaborated to deal \nwith the following core facets 10 : \n ● Reducing identity theft. The problem of identity theft \nis becoming a major one, mainly in the online envi-\nronment. Providers need more efficient systems to \ntackle this issue. \n ● Management. The number of digital identities per \nperson will increase, so users need convenient \nsupport to manage these identities and the \ncorresponding authentication. \n ● Reachability. The management of reachability allows \na user to handle their contacts to prevent misuse of \ntheir email address (spam) or unsolicited phone calls. \n ● Authenticity. Ensuring authenticity with \nauthentication, integrity, and nonrepudiation \nmechanisms can prevent identity theft. \n ● Anonymity and pseudonymity. Providing anonymity \nprevents tracking or identifying the users of a \nservice. \n ● Organization personal data management. A quick \nmethod to create, modify, or delete work accounts is \nneeded, especially in big organizations. \n Without improved usability of identity manage-\nment 11 — for example, weak passwords used on many Web \nsites — the number of successful attacks will remain high. \nTo facilitate interacting with unknown entities, simple rec-\nognition rather than authentication of a real-world identity \nhas been proposed, which usually involves manual enroll-\nment steps in the real world. 12 Usability is indeed enhanced \nif no manual task is needed. There might be a weaker level \nof security, but that level might be sufficient for some \nactions, such as logging to a mobile game platform. Single \nSign-On (SSO) is the name given to the requirements of \neliminating multiple-password issues and dangerous pass-\nwords. When we use multiple user IDs and passwords just \nto use email systems and file servers at work, we feel the \ninconvenience that comes from having multiple identi-\nties. The second problem is the scattering of identity data, \nwhich causes problems for the integration of IT systems. \nSSO simplifies the end-user experience and enhances secu-\nrity via identity-based access technology. \nRisk\nCost\nUsability\nDigital\nIdentity\n FIGURE 17.3 Digital identity environment to be managed. \n 9 J. Madelin et al., BT report, “ Comprehensive identity management \nBalancing cost, risk and convenience in identity management, ” 2007. \n 10 Independent Center for Privacy Protection (ICPP) and Studio \nNotarile Genghini (SNG), “ Identity management systems (IMS): \nIdentifi cation and comparison study, ” 2003. \n 11 Independent Center for Privacy Protection (ICPP) and Studio \nNotarile Genghini (SNG), “ Identity management systems (IMS): \nIdentifi cation and comparison study, ” 2003. \n 12 J.-M. Seigneur, “ Trust, security and privacy in global computing, ” \nPh. D. thesis, Trinity College Dublin, 2005. 
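As a worked illustration of combining the claim types listed earlier in this section, the sketch below (Python, standard library only) checks "something you know," a salted password hash, together with "something you have," a time-based one-time code of the kind produced by a hardware token or phone application. It is a simplified composition under those assumptions, not a hardened implementation and not a mechanism prescribed by this chapter.

import hashlib, hmac, struct, time

def verify_password(stored_hash: bytes, salt: bytes, attempt: str) -> bool:
    # "Something you know": compare a salted hash of the submitted password.
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)

def one_time_code(secret: bytes, timestep: int = 30, digits: int = 6) -> str:
    # "Something you have": a time-based one-time code derived from a shared
    # secret held on the user's token or phone (TOTP-style, illustrative only).
    counter = int(time.time()) // timestep
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return f"{code:0{digits}d}"

def authenticate(stored_hash, salt, secret, password_attempt, code_attempt) -> bool:
    # Both factors must succeed; either one alone is insufficient.
    return (verify_password(stored_hash, salt, password_attempt)
            and hmac.compare_digest(one_time_code(secret), code_attempt))

In a deployed system, secret provisioning, clock drift, and retry limits dominate the security of such a scheme; the point here is only that the two factors are verified independently and both must pass.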
\n" }, { "page_number": 305, "text": "PART | II Managing Information Security\n272\n Microsoft’s first large identity management system \nwas the Passport Network. It was a very large and wide-\nspread Microsoft Internet service, an identity provider for \nthe MSN and Microsoft properties and for the Internet. \nHowever, with Passport, Microsoft was suspected by \nmany of intending to have absolute control over the iden-\ntity information of Internet users and thus exploiting them \nfor its own interests. Passport failed to become “ the ” \nInternet identity management tool. \n Since then, Microsoft has clearly come to under-\nstand that an identity management solution cannot suc-\nceed unless some basic rules are respected. 13 That’s why \nMicrosoft’s Identity Architect, Kim Cameron, has stated \nthe seven laws of identity. His motivation was purely \npractical in determining the prerequisites of creating a \nsuccessful identity management system. He formulated \nthese essential principles to maintain privacy and security; \n ● User control and consent over the handling of their \ndata \n ● Minimal disclosure of data, and for a specified \npurpose \n ● Information should only be disclosed to people who \nhave a justifiable need for it \n ● The system must provide identifiers for both bilateral \nrelationships between parties and for incoming \nunsolicited communications \n ● It must support diverse operators and technologies \n ● It must be perceived as highly reliable and \npredictable \n ● There must be a consistent user experience across \nmultiple identity systems and using multiple \ntechnologies. \n Most systems do not fulfill several of these tests; they \nare particularly deficient in fine-tuning the access con-\ntrol over identity to minimize disclosure of data. \n Cameron’s principles are very clear but they are not \nexplicit enough to compare identity management sys-\ntems. That’s why we will explicitly define the identity \nrequirements. \n Privacy Requirement \n Privacy is a central issue due to the fact that the official \nauthorities of almost all countries have strict legal poli-\ncies related to identity. It is often treated in the case of \nidentity management because the management deals with \npersonal information and data. Therefore, it is important \nto give a definition. Alan F. Westin defines privacy as “ the \nclaim of individuals, groups and institutions to determine \nfor themselves, when, how and to what extent informa-\ntion about them is communicated to others. ” 14 However, \nwe will use Cooley’s broader definition of privacy 15 : “ the \nright to be let alone, ” because it also emphasizes the prob-\nlems related to disturbing the user’s attention, for exam-\nple, with email spam. \n User-Centricity \n The evolution of identity management systems is toward \nthe simplification of the user experience and reinforc-\ning authentication. It is well known that poor usability \nimplies a weakness in authentication. Mainly federated \nmanagement has responded to some of these require-\nments by facilitating the use and the managing of identifi-\ners and credentials in the boundary of a federated domain. \nNevertheless, it is improbable that only one federated \ndomain will subsist. Moreover, different levels of sensitiv-\nity and risks of various services will need different kinds of \ncredentials. It is obvious that we should give support and \natomization of identity management on the user’s side. 
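The first three of Cameron's laws listed above, user control and consent, minimal disclosure, and release only to parties with a justifiable need, can be pictured as a small attribute-release filter, as in the Python sketch below. The attribute names, service names, and consent table are hypothetical; real user-centric systems express the same idea through claims, information cards, or similar mechanisms.

# A toy attribute-release policy: the provider releases only attributes that the
# relying service requested AND that the user consented to share with that service.
# All names and values below are illustrative assumptions, not a standard API.

USER_ATTRIBUTES = {
    "name": "Alice Example",
    "email": "alice@example.org",
    "birthday": "1985-04-12",
    "favorite_reading": "cryptography",
}

USER_CONSENT = {
    # service -> attributes the user agreed to disclose to that service
    "bookshop.example": {"email", "favorite_reading"},
    "payroll.example": {"name", "email", "birthday"},
}

def release_attributes(service: str, requested: set) -> dict:
    consented = USER_CONSENT.get(service, set())
    allowed = requested & consented              # minimal disclosure: intersection only
    return {k: USER_ATTRIBUTES[k] for k in allowed if k in USER_ATTRIBUTES}

# The bookshop asks for more than it needs; only consented attributes are returned.
print(release_attributes("bookshop.example", {"name", "email", "favorite_reading"}))
# -> {'email': 'alice@example.org', 'favorite_reading': 'cryptography'}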
\n A new paradigm must be introduced to solve the \nproblems of usability, scalability, and universal SSO. \nA user-oriented paradigm, called user-centric identity \nmanagement , has emerged. The expression user-con-\ntrolled management 16 was the first used to explain the \nuser-centric management model. Recent federated iden-\ntity management systems keep strong end-user controls \nover how identity information is disseminated among \nmembers of the federation. This new paradigm gives the \nuser full control over his identity by notifying him of the \ninformation collected and by guaranteeing his consent \nfor any type of manipulation of collected information. \nUser control and consent is also defined as the first law \nin Cameron’s Laws of Identity. 17 A user-centric identity \nmanagement system supports the user’s control and con-\nsiders user-centric architecture and usability aspects. \n There is no uniform definition, but one is “ User-\ncentric identity management is understood to mean digital \nidentity infrastructure where an individual end-user has \nsubstantially independent control over the dissemination \nand use of their identifier(s) and personally identifiable \ninformation (PII). ” 18 See Figure 17.4 . \n 13 K. Cameron, “ Laws of identity, ” May, 2005. \n 14 A. Westin, Privacy and Freedom , Athenaeum, 1967. \n 15 T. M. Cooley, “ A treatise on the law of torts, ” Callaghan, Chicago, \n1888. \n 16 Independent Center for Privacy Protection (ICPP) and Studio \nNotarile Genghini (SNG), “Identity management systems (IMS): \nidentifi cation and comparison study,” 2003. \n 17 K. Cameron, “ Laws of identity, ” May, 2005. \n 18 David Recordon VeriSign Inc, Drummond Reed, “ OpenID 2.0: A \nplatform for user-centric identity management, ” 2006. \n" }, { "page_number": 306, "text": "Chapter | 17 Identity Management\n273\n We can also give this definition of user centricity: “ In \nuser-centric identity management, the user has full con-\ntrol over her identity and consistent user experience dur-\ning all transactions when accessing her services. ” \n In other words, the user is allowed to keep at least \nsome or total control over his personal data. \n One of the principles of a user-centric identity is the \nidea that the user of a Web service should have full con-\ntrol over his identity information. A lot of technology dis-\ncussion and solutions have focused on service provider’s \nand rarely on user’s perspectives. The user-centric identity \nparadigm is a real evolution because it moves IT architec-\nture forward for users, with the following advantages: \n ● Empowering users to have total control over their \nprivacy \n ● Usability, as users are using the same identity for \neach identity transaction \n ● Consistent user experience, thanks to uniformity \nof the identity interface \n ● Limiting identity attacks, such as phishing \n ● Limiting reachability/disturbances, such as spam \n ● Reviewing policies on both sides, both identity \nproviders and service providers (Web sites), when \nnecessary \n ● Huge scalability advantages, since the identity \nprovider does not need any prior knowledge about \nthe service provider \n ● Assuring secure conditions when exchanging data \n ● Decoupling digital identity from applications \n ● Pluralism of operators and technologies \n The user-centric approach allows users to gain access \nanonymously because they retain full control over their \nidentities. 
Of course, full anonymity 19 and unlinkabil-\nity may lead to increased misuse by anonymous users. \nThen, pseudonymity is an alternative that is more suit-\nable to the ecommerce environment. In this regard, \nanonymity must be guaranteed at the application and \nnetwork levels. Some frameworks have been proposed \nto ensure user-centric anonymity using the concepts of \none-task authorization keys and binding signatures. 20 \n Usability Requirement \n Security is compromised with the proliferation of user \npasswords and even by their weakness. Indeed, some \nusers note their passwords on scratchpads because their \nmemorization poses a problem. The recent FFIEC guide-\nlines on authentication in online banking reports state \nthat “ Account fraud and identity theft are frequently the \nresult of single-factor (e.g., ID/password) authentica-\ntion exploitation. ” 21 From then on, security must be user \noriented because the user is the effective person con-\ncerned with it, and many recent attacks take advantage \nof users ’ lack of awareness of attacks (such as spoof-\ning, pharming, and phishing). 22 Without strong control \nand improved usability 23 of identity management, some \nattacks will always be possible. To facilitate interact-\ning with unknown entities, simple recognition rather \nthan authentication of a real-world identity, which usu-\nally involves manual enrollment steps in the real world, \nhas been proposed. 24 Usability is indeed enhanced if no \nmanual task is needed. A weaker level of security might \nbe reached, but that level might be sufficient for some \nactions, such as logging to a mobile game platform. \n As we’ve seen, SSO is the name given to the require-\nments of eliminating multiple password issues and dan-\ngerous passwords. When we use multiple user IDs and \npasswords to use the email systems and file servers at \nIdentity Provider\nIdentity Provider\nIdP\nIdP\nIdP\n FIGURE 17.4 IdP-centric and user-centric models. \n 19 R. Au, H. Vasanta, K. Kwang, R. Choo, M. Looi, “A user-centric \nanonymous authorisation framework in ecommerce, environment \ninformation security research centre,” Queensland University of \nTechnology, Brisbane, Australia. \n 20 R. Au, H. Vasanta, K. Kwang, R. Choo, M. Looi, “A user-centric \nanonymous authorisation framework in ecommerce, environment \ninformation security research centre,” Queensland University of \nTechnology, Brisbane, Australia. \n 21 Federal Financial Institutions Examination Council, authentication \nin an internet banking environment, October 2005, www.ffi ec.gov/\npress/pr101205.htm . \n 22 A. Erzberg and A. Gbara, TrustBar: protecting (even Na ï ve) web \nusers from spoofi ng and phishing attacks, www.cs.biu.ac.il/~erzbea/\npapaers/ecommerce/spoofi ng.htm , 2004. \n 23 Introduction to usability, www.usabilityfi rst.com/intro/index.tx1 , \n2005. \n 24 J.-M. Seigneur, “ Trust, security and privacy in global computing, ” \nPh. D. thesis, Trinity College Dublin, 2005. \n" }, { "page_number": 307, "text": "PART | II Managing Information Security\n274\nwork, we feel the pain that comes from having multiple \nidentities. The second problem is the scattering of identity \ndata, which causes problems for the integration of IT sys-\ntems. Moreover, it simplifies the end-user experience and \nenhances security via identity-based access technology. \n Therefore, we offer these features: \n ● Flexible authentication \n ● Directory independence \n ● Session and password management \n ● Seamlessness \n 3. 
THE REQUIREMENTS FULFILLED BY \nCURRENT IDENTITY MANAGEMENT \nTECHNOLOGIES \n This part of the chapter provides an overview of identity \nmanagement solutions from Identity 1.0 to Identity 2.0 \nand how they address the requirements introduced ear-\nlier. We focus on related standards XRI and LID, issued \nfrom the Yadis project, and platforms, mainly ID-WSF, \nOpenID, Higgins, CardSpace, and Sxip. Then we dis-\ncuss identity management in the field of mobility. \n Evolution of Identity Management \n This part of the chapter provides an overview of almost \nall identity management 1.0 (see Figure 17.5 ). We begin \nby describing the silo model, then different kinds of cen-\ntralized model and federated identity management. \n Identity Management 1.0 \n In the real world I use my identity card to prove who I \nam. How about in the online world? \n The first digital identity appeared when a user was \nassociated with the pair (username, password) or any \nother shared secret. This method is used for authenti-\ncation when connecting to an account or a directory. It \nproves your identity if you follow the guidelines strictly; \notherwise there is no proof. \n In fact, it is a single authority using opaque trust \ndecisions without any credentials (cryptographic proofs), \nchoice, or portability. \n In the context of Web access, the user must enroll \nfor every unrelated service, generally with different user \ninterfaces, and follow diverse policies and protocols. \nThus, the user has an inconsistent experience and deals \nwith different identity copies. In addition, some problems \nrelated to privacy have also emerged. Indeed, our privacy \nwas potentially invaded by Web sites. It is clear that sites \nhave a privacy policy, but there is no user control over \nher own identity. What are the conditions for using these \ndata? How can we improve our privacy? And to what \ngranularity do we allow Web sites to use our data? \n The same problem is revealed when we have access \nto resources. The more resources, the more management \nwe have to perform. It is an asymmetric trust. And the \npolicy decision may be opaque. \n It allows access with an opaque trust decision and a \nsingle centralized authority without a credentials choice. \nIt is a silo model 25 because it is neither portable nor scal-\nable. This is Identity 1.0. \n Identity management appeared with these problems \nin the 1980s. The first identity management system was \nthe Rec. X.500, developed by the ITU 26 and covering \ndirectory services such as Directory Access Protocol \n(DAP). The ISO was also associated with development \nof the standard. Like a lot of ITU standards, this one was \nvery heavy and complex. A light version appeared in the \n1990s for DAP. It was LDAP, which was standardized \nby the IETF and became widespread and adopted by \nNetscape. Microsoft has invented an equivalent Active \nDirectory, and for users it introduced Passport. It is also \nthe ITU that standardized X.509 for identities related to \ncertificates and it is the format currently recognized. It \nis a small file, generated by an authority of certification. \nIf there is a loss or a usurpation of the certificate, it can \nalways be revoked by the authority of certification. \n This is for single users; what about business corpo-\nrations that have automated their procedures and have a \nproliferation of applications with deprovisioning but still \nDirectory\nC. Answer\nB. Credentials\nA. Credentials\nResource\nUser\nD. 
Connection\n FIGURE 17.5 Identity 1.0 principle. \n 25 A. J ø sang and S. Pope, “ User-centric identity management, ” \nAusCERT Conference 2005. \n 26 International telecommunication union (ITU), Geneva, www.itu.\norg . \n" }, { "page_number": 308, "text": "Chapter | 17 Identity Management\n275\nin a domain-centric model? What about resources shared \nbetween domains? \n Silo Model \n The main identity management system deployed cur-\nrently on the Internet is called the silo model , shown in \n Figure 17.6 . Indeed, the identity provider and service \nprovider are mixed up and they share the same space. \nThe identity management environment is put in place and \noperated by a single entity for a fixed user community. \n Users of different services must have different \naccounts and therefore reenter the same information \nabout their identity which increases the difficulty of \nmanagement. Moreover, the users are overloaded with \nidentities and passwords to memorize, which produce a \nsignificant barrier to usage. \n A real problem is the “ forgetability ” of passwords \ndue to the infrequent use of some of these data. This can \nobviously lead to a higher cost of service provisions. \n This is for single users. What about enterprises that \nhave automated their procedures and have a prolifera-\ntion of applications with deprovisioning but are still in \na domain-centric model? What about resources shared \nbetween domains? \n The silo model is not interoperable and is deficient in \nmany aspects. That’s why the federated identity manage-\nment model is now emerging, and it is very appreciated \nby enterprises. A federated identity management system \nconsists of software components and protocols that han-\ndle in a decentralized manner the identity of individuals \nthroughout their identity life-cycle. 27 \n Solution by Aggregation \n Aggregating identity information and finding the rela-\ntionship between identity records is important to aggre-\ngating identity. There are some alternatives: \n ● The first approach consolidates authentication and \nattributes in only one site and is called a centralized \nmanagement solution, such as Microsoft Passport. \nThis solution avoids the redundancies and inconsist-\nencies in the silo model and gives the user a seamless \nexperience. 28 The evolution 29 , 30 was as follows: \n – Building a single central identity data store, \nwhich is feasible only for small organizations \n – Creating a metadirectory that synchronizes data \nfrom other identities, data stored elsewhere \n – Creating a virtual directory that provides a single \nintegrated view of the identity data stored \n – An SSO identity model that allows users to be \nauthenticated by one service provider \n ● The second approach decentralizes the responsibility \nof IdP to multiple such IdPs, which can be selected \nby end users. This is a federate system whereby \nsome attributes of identity are stored in distributed \nIdPs. A federated directories model, by linking iden-\ntity data stored together, has emerged. Protocols are \ndefined in several standards such as Shibboleth, 31 \nWeb services federation language 2003. \n Centralized versus Federation Identity \nManagement \n Microsoft Passport is a centralized system entirely con-\ntrolled by Microsoft and closely tied to other Microsoft \nproducts. Individuals and companies have proven reluc-\ntant adopters of a system so tightly controlled by one \ndominant company. 
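To make the contrast concrete, the short Python sketch below places per-service identity silos next to the centralized alternative in which a single identity provider holds one record that every service consults. The service names, usernames, and fields are hypothetical and purely illustrative.

# Silo model: each service keeps its own copy of the user's data under its own
# local username, so copies multiply and drift apart.
silo_stores = {
    "webmail.example": {"alice":   {"password": "pw1", "email": "alice@old-isp.example"}},
    "shop.example":    {"alice88": {"password": "pw2", "email": "alice@new-isp.example"}},
    "forum.example":   {"a.smith": {"password": "pw3", "email": "alice@old-isp.example"}},
}

# Centralized model: one identity provider (IdP) holds a single record.
central_idp = {
    "alice": {"password": "single-credential", "email": "alice@new-isp.example"},
}

def silo_lookup(service: str, username: str) -> dict:
    # Three registrations, three passwords, three copies to keep consistent by hand.
    return silo_stores[service][username]

def central_lookup(username: str) -> dict:
    # One registration and one credential, but also one operator to trust
    # and a single point of failure.
    return central_idp[username]

print(silo_lookup("shop.example", "alice88")["email"])    # possibly stale copy
print(central_lookup("alice")["email"])                   # single authoritative copy

The centralized store removes the duplication and the multiple passwords, but it also concentrates trust in a single operator, which is exactly the drawback discussed next.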
\n Centrally managed repositories in centralized iden-\ntity infrastructures can’t solve the problem of cross-\norganizational authentication and authorization. This \napproach has several drawbacks because the IdP not \nonly becomes a single point of failure, it may also not be \ntrusted. That’s why Microsoft Passport was not successful. \nIn contrast, the federation identity will leave the identity \nresources in their various distributed locations but produce \nUser\n1\n2\n3\nSP\nIdP\nIdP\nIdP\nSP\nSP\n FIGURE 17.6 Identity silo model. \n 27 A. J ø sang et al., “ Usability and privacy in identity management \narchitectures, ” (AISW2007), Ballarat, Australia, 2007. \n 28 A.B. Spantzel et al., “ User centricity: A taxonomy and open \nissues, ” IBM Zurich Research Laboratory, 2006. \n 29 A. J ø sang and S. Pope, “ User-centric identity management, ” \nausCERT conference 2005. \n 30 A. J ø sang et al., “ Usability and privacy in identity management \narchitectures, ” (AISW2007), Ballarat, Australia, 2007. \n 31 Internet2, Shibboleth project, http://shibboleth.Internet2.edu . \n" }, { "page_number": 309, "text": "PART | II Managing Information Security\n276\na federation that links them to solve identity duplication, \nprovision, and management. \n A Simple Centralized Model \n A relatively simple centralized identity management model \nis to build a platform that centralizes identities. A separate \nentity acts as an exclusive user credentials provider for \nall service providers. This approach merges both authen-\ntication and attributes in only one site. This architecture, \nwhich could be called the common user identity manage-\nment model, is illustrated in Figure 17.7 . All identities for \neach SP are gathered to a unique identity management site \n(IdP). SPs have to provide each identity to the IdP. \n In this environment, users can have access to all serv-\nice providers using the same set of identifiers and creden-\ntials. A centralized certificate (CA) could be implemented \nwith a Public Key Infrastructure (PKI) or Simple Public \nKey Infrastructure (SKPI). 32 This architecture is very effi-\ncient in a close domain where users could be identified \nby a controlled email address. Although such architecture \nseems to be scalable, the concentration of privacy-related \ninformation has a great deal of difficulty in terms of social \nacceptance. 33 \n Metadirectories \n SPs can share certain identity-related data on a meta level. \nThis can be implemented by consolidating all service \nproviders ’ specific identities to a meta-identifier linked to \ncredentials. \n There are collections of directory information from \nvarious directory sources. We aggregated them to \nprovide a single view of data. Therefore, we can show \nthese advantages: \n ● A single point of reference provides an abstraction \nboundary between application and actual implemen-\ntation. A single point of administration avoids multi-\nple directories, too. \n ● Redundant directory information can be eliminated, \nreducing administration tasks. \n This approach can be seen from the user’s point of \nview as password synchronization across multiple service \nproviders. Thus, the password is automatically changed \nwith all the others. \n This architecture can be used in large enterprises \nwhere all services are linked to a metadirectory, as \nshown in Figure 17.8 . In this case, the ease of use is clear \nbecause the administration is done by a single authority. 
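The password-synchronization view of a metadirectory described above can be sketched in a few lines; the meta-identifier, store layout, and service names below are illustrative assumptions, and a real metadirectory product would synchronize many more attributes through agents or scheduled batch jobs.

# A toy metadirectory: each service keeps its own directory, but a meta-identifier
# links the per-service accounts, and a change made once (here, a password) is
# pushed to every linked directory.

service_directories = {
    "mail": {"alice":   {"password": "old"}},
    "crm":  {"asmith":  {"password": "old"}},
    "wiki": {"alice.s": {"password": "old"}},
}

metadirectory = {
    # meta-identifier -> (service, local account name) pairs
    "emp-0042": [("mail", "alice"), ("crm", "asmith"), ("wiki", "alice.s")],
}

def synchronize_password(meta_id: str, new_password: str) -> None:
    # Single point of administration: one change fans out to all linked directories.
    for service, local_name in metadirectory[meta_id]:
        service_directories[service][local_name]["password"] = new_password

synchronize_password("emp-0042", "n3w-s3cret")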
\n Virtual Directories \n Virtual directories (VDs) are directories that are not located in the same physical structure as the Web home directory but look as though they are to Web clients. The actual directories may be at a completely different location in the physical directory structure, for example, on another hard disk or on a remote computer. They are similar in concept to metadirectories in that they provide a single directory view from multiple independent directories. They differ in the means used to accomplish this goal. Metadirectory (MD) software agents replicate and synchronize data from various directories in what might be batch processes. In contrast, virtual directories (VDs) provide a single view of multiple directories using real-time queries based on mapping from fields in the \n FIGURE 17.7 Simple centralized identity management. \n FIGURE 17.8 Metadirectory model. \n 32 C. Ellison et al., “ SPKI Certificate Theory, ” RFC 2693, IETF, Sep. 1999, www.ietf.org/rfc/rfc2693.txt . \n 33 T. Miyata et al., “ A survey on identity management protocols and standards, ” IEICE TRANS. INF & SYST, 2006. \n" }, { "page_number": 310, "text": "Chapter | 17 Identity Management\n277\nvirtual scheme to fields in the physical schemes of the real directories. \n Single Sign-On (SSO) \n Single sign-on, or SSO (see Figure 17.9 ), is a solution proposed to eliminate multiple password issues and dangerous passwords. Moreover, it simplifies the end-user experience and enhances security via identity-based access technology. \n Therefore, it offers these features: \n ● Flexible authentication \n ● Seamlessness \n ● Directory independence \n ● Session and password management \n Federated Identity Management \n We have seen different approaches to managing user identity; they are not clearly interoperable and lack a unifying, standards-based framework. On one hand, maintenance of privacy and identity control are fundamental when offering identity to users; on the other hand, the same users ask for easier and more rapid access. The balance of the two sides leads to federated network identity. That’s why these environments are now emerging. A federated identity management system (see Figure 17.10 ) consists of software components and protocols that handle the identity of individuals throughout their identity life cycle. \n This architecture gives the user the illusion that there is a single identifier authority. Even though the user has many identifiers, he doesn’t need to know all of them. Only one identifier is enough to have access to all services in the federated domain. \n Each SP is responsible for the namespace of its users, and all SPs are federated by linking the identity domains. Thus, the federated identity model is based on a set of SPs, called a circle of trust by the Liberty Alliance. This set of SPs follows an agreement on mutual security and authentication to allow SSO. Indeed, federated identity management combines SSO and authorization tools using a number of the SPs’ mutual technologies and standards. This practice makes the recognition and entitlement of user identities by other SPs easy.
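The circle-of-trust arrangement can be illustrated with a deliberately simplified sketch: the identity provider issues a short-lived assertion signed with a key shared across the federation, and any service provider in the circle can verify it without ever handling the user's password. The token format and the shared-key arrangement below are assumptions made for illustration; production federations rely on standards such as SAML or WS-Federation rather than a hand-rolled token.

import base64, hashlib, hmac, json, time
from typing import Optional

FEDERATION_KEY = b"shared-secret-known-to-the-circle-of-trust"   # illustrative only

def issue_assertion(user_id: str, idp: str, lifetime: int = 300) -> str:
    # The IdP authenticates the user once, then vouches for her to the federation.
    claims = {"sub": user_id, "idp": idp, "exp": int(time.time()) + lifetime}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(FEDERATION_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_assertion(token: str) -> Optional[dict]:
    # Any SP in the circle of trust verifies the signature and the lifetime;
    # it never sees the user's password or local identifier at the IdP.
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(FEDERATION_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(base64.urlsafe_b64decode(payload.encode()))
    return claims if claims["exp"] > time.time() else None

token = issue_assertion("alice", "idp.example")   # single sign-on at the IdP
print(verify_assertion(token))                    # accepted by any federated SP

Sharing one verification key keeps the sketch short; real federations use public-key signatures so that each service provider needs only the identity provider's certificate.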
\n Figure 17.10 shows the set of federated domains and the \npossibility for other SPs to have access to the same user \nwith different identifiers. \n The essential difference between federated identity \nsystems and centralized identity management is that \nthere is no single entity that operates the identity man-\nagement system. Federated systems support multiple \nidentity providers and a distributed and partitioned store \nfor identity information. Therefore, a federated identity \nnetwork allows a simplified sign-on to users by giving \nrapid access to resources, but it doesn’t require the user’s \npersonal information to be stored centrally. With this \nidentity network approach, users authenticate themselves \nonce and can control how their personal information and \npreferences are used by the service providers. \n Federated identity standards, like those produced by \nthe Liberty Alliance, provide SSO over all offered services \nand enable users to manage the sharing of their personal \ninformation through identity and service providers as well \nas the use of personalized services to give them access to \nconvergent services. The interoperability between dispa-\nrate security systems is assumed by an encapsulation layer \nthrough a trust domain, which links a set of trusted service \nproviders. \nSP3\nSP2\nIdP1\n1\n2\n3\n3\nUser\nSSO to\nOther\nDomains\nFederation\nIdentity\nDomain 1\nIdP2\nIdP3\nSP1\n FIGURE 17.9 Single sign-on model. \nSP2\n3\nUser\nSP1\nIdP 3\nCentralized\nProvider 3\nIdentifier and\nCredential\n FIGURE 17.10 Federated identity management model. \n" }, { "page_number": 311, "text": "PART | II Managing Information Security\n278\n However, there are some disadvantages with federated \nidentity management. The first is the user’s lack of privacy, \nbecause his personnel attributes and information can be \nmapped using correlation between identifiers. Anonymity \ncould be violated. The second is the scalability of users \nbecause they have access to the network from different \ndomains by authentication to their relative IdPs. Therefore, \nthe problem of passwords will continue across multiple \nfederated domains. \n A major challenge is to integrate all these compo-\nnents into a distributed network and to deal with these \ndrawbacks. This challenge cannot be taken up without \nnew paradigms and supported standards. \n The evolution of identity management systems is \ntoward simplification of user experience and reinforcing \nauthentication. It is very known that poor usability implies \nthe weakness of authentication. A new paradigm should be \nintroduced to solve those problems while still being com-\npatible at least with federated identity management. \n That is why user-centric identity management has \nemerged. 34 , 35 This paradigm is embraced by multi-\nple industry products and initiatives such as Microsoft \nCardSpace, 36 Sxip, 37 and Higgins Trust Framework. 38 \nThis is Identity 2.0. \n Identity 2.0 \n The user of Internet services is overcome with identities. \nShe is seldom able to transfer her identity from one site \nto another. The reputation that she gains in one network \nis useful to transfer to other networks. Nevertheless, she \ncannot profit from her constructed reputation, and she \nmust rebuild her identity and reputation time and again. \nThe actual systems don’t allow users to decide about the \nsharing of their attributes related to their identity with \nother users. This causes a lack of privacy control. 
Some \nsolutions propose an advanced social system that would \nmodel the social interaction after the real world. \n The solutions must be easy to use and enable users \nto share credentials among many services and must be \ntransparent from the end-user perspective. \n The principle of modern identity is to separate the \nacquisition process from the presentation process. It is the \nsame for the identification process and the authorization \nprocess. Moreover, it provides scalability and privacy. \nDoing so, we can have more control over our identities. \n The scale, security, and usability advantages of user-\ncentric identity make it the underpinning for Identity 2.0. \nThe main objective of the Identity 2.0 protocol is to pro-\nvide users with full control over their virtual identities. \nAn important aspect of Identity 2.0 is protection against \nincreasing Web attacks such as phishing as well as the \ninadvertent disclosure of confidential information while \nenabling convenient management. \n Identity 2.0 would allow users to use one identity respect-\ning transparency and flexibility. It is focused around the user \nand not around directory or identity providers. It requires \nidentified transactions between users and relaying party \nusing credentials, thus providing more traceable transactions. \nTo maximize the privacy of users, some credentials could be \ngiven to the users in advance. Doing so, the IdP could not \neasily know when the user is utilizing the credentials. \n The Identity 2.0 (see Figure 17.11 ) completely endorses \nthe paradigms of user-centric identity management, ena-\nbling users ’ full control over their identities. Service pro-\nviders will therefore be required to change their approaches \nby including requests for and authentication of users ’ iden-\ntity. Identity 2.0 systems are interested in using the concept \nof a user’s identity as credentials for the user, based on \nattributes such as user name and address to less traditional \nthings such as their desires, customer service history, and \nother attributes that are usually not so associated with a \nuser identity. \n Identity 2.0 initiatives \n When a Web site collects data from users, it cannot con-\nfirm whether or not the collected data is pertinent and \nUser\nURL\nIf Needed\nConnection\nResponse\nGet from URL\nService Provider\nIdentity Provider\n FIGURE 17.11 URL-based Id 2.0. \n 34 A.B. Spantzel et al., “ User centricity: A taxonomy and open \nissues, ” IBM Zurich Research Laboratory, 2006. \n 35 A. J ø sang and S. Pope, “ User-centric identity management, ” \nAusCERT Conference 2005. \n 36 Microsoft, A technical ref. for InfoCard in Windows, http://msdn.\nmicrosoft.com/winfx/reference/infocard/ 2005. \n 37 G. Roussos and U. Patel, “ Mobile identity management: An \nenacted view, ” Birkbeck College, University of London, 2003. \n 38 Higgins Trust Framework project, www.eclipse.org/higgins/ , 2006. \n" }, { "page_number": 312, "text": "Chapter | 17 Identity Management\n279\nreliable, since users often enter nonsense information \ninto online forms. This is due to the lack of Web sites to \ncontrol and verify users ’ data. Furthermore, due to the \nlegal limitation on requested data, a Web site cannot pro-\nvide true customized services, even though users require \nthem. On the other side, users have no direct control over \nwhat the Web site will do with their data. In addition, \nusers enter the same data many times when accessing \nWeb sites for the first time. 
Doing so, they have a huge \ndifficulty in managing their large numbers of identities. \n To mitigate these problems, various models of \nidentity management have been considered. One such \nmodel, Identity 2.0, proposes an Internet-scalable and \nuser-centric identity architecture that mimics real-world \ninteractions. \n Many research labs have collaborated to develop \nthe Identity 2.0 Internet-based Identity Management \nservices. They are based on the concept of user-centric \nidentity management, supporting enhanced identity veri-\nfication and privacy and user consent and control over \nany access to personal information for Internet-based \ntransactions. \n There are various Identity 2.0 initiatives: \n ● LID \n ● XRI \n ● SAML \n ● Shibboleth \n ● ID-WSF \n ● OpenID \n ● Microsoft’s CardSpace (formerly InfoCard) \n ● SXIP \n ● Higgins \n LID \n Like LDAP, Light-Weight Identity (LID) is based on \nthe principle of simplicity, since many existing identity \nschemes are too complicated to be largely adoptable. It \nsimplifies more complex protocols; but instead of being \nless capable due to fewer features, it has had success that \nmore complex predecessors lacked. This was because \ntheir simplification reduced the required complexity to \nthe point where many people could easily support them, \nand that was one of the goals of LID. \n LID is a set of protocols capable of representing \nand using digital identities on the Internet in a simple \nmanner, without relying on any central authority. LID \nis the original URL-based identity protocol, part of the \nOpenID movement. \n LID supports digital identities for humans, human \norganizations, and nonhumans (such as software agents, \nand Web sites). It implements Yadis 39 , a metadata dis-\ncovery service, and is pluggable on all levels. \n XRI/XDI \n The XRI EXtensible Resource Identifier (see Figure 17.12 ) \nand XDI 40 represent the fractional solution without the \nintegration of Web services. They are open standards and \nroyalty-free. XRI is about Addressing, whereas XDI is a \nData Sharing protocol that uses XRI. Both XRI and XDI \nare being developed under the support of OASIS. I-name \nand I-number registry services for privacy-protected digital \naddressing use XRI. It can be used as an identifier for per-\nsons, machines and agents . \n XRI offers a human-friendly form of persistent iden-\ntifier. That’s why it is a convenient identifier for SSO \nsystems. It supports both persistent and reassignable \nidentifiers in the same syntax and establishes global con-\ntext symbols. Moreover, it enables identification of the \nsame logical resource across multiple contexts and mul-\ntiple versions of the same logical resource. \n XDI is a Secure Distributed Data Sharing Protocol. \nIt is also an architecture and specification for privacy-\ncontrolled data exchange in which all data is identified \nusing XRIs. The XDI platform includes explicit specifica-\ntion for caching with both push and pull synchronization. \nXDI universal schema can represent any complex data and \nhave the ability to do cross-context addressing and linking. \n SAML \n The Security Assertion Markup Language (SAML) is an \nOASIS specification 41 that provides a set of rules for the \nstructure of identity assertions, protocols to move asser-\ntions, bindings of protocols for typical message transport \nmechanisms, and profiles. 
Indeed, SAML (see Figure \n17.13 ) is a set of XML and SOAP-based services and \nformats for the exchange of authentication and authori-\nzation information between security systems . \n The initial versions of SAML v1.0 and v1.1 define \nprotocols for SSO, delegated administration, and policy \nmanagement. The most recent version is SAML 2.0. It is \nExtends\nExtends\nChar\nBase\nAbstract\nInternational\nUniform\nURI\nIRI\nXRI\n FIGURE 17.12 XRI layers. \n 39 Yadis, “Yadis specifi cation 1.0,” released March 2006, http://yadis.org . \n 40 OASIS Working Draft Version 04, “ An Introduction to XRIs, ” \nMarch 14, 2005. \n 41 OASIS, “Conformance requirements for the OASIS Security \nAssertion Markup Language (SAML),” Vol. 20, 2005. \n" }, { "page_number": 313, "text": "PART | II Managing Information Security\n280\nnow the most common language to the majority of plat-\nforms that need to change the unified secure assertion. It \nis very useful and simple because it is based on XML. \n An assertion is a datum produced by a SAML author-\nity referring to authentication, attribute information, \nor authorizations applying to the user with respect to a \nspecified resource. \n This protocol (see Figure 17.14 ) enables interoper-\nability between security systems (browser SSO, Web \nservices security, and so on). Other aspects of federated \nidentity management as permission-based attribute shar-\ning are also supported. \n SAML is sometimes criticized for its complexity of \nthe specifications and the relative constraint of its secu-\nrity rules. Recently, the SAML community has shown \nsignificant interest in extending SAML to reach less \nstringent requirements for low-sensitivity use cases. The \nadvantages of SAML are robustness of its security and \nprivacy model and the guarantee of its interoperability \nbetween multiple-vendor implementations through the \nLiberty Alliance’s Conformance Program. \n Shibboleth \n Shibboleth 42 is a project for which the goal is to allow \nuniversities to share Web resources subject to control \naccess. Thereafter, it allows interoperation between insti-\ntutions using it. It develops architectures, policy structure, \npractical technologies, and an open-source implementa-\ntion. It is building components for both the identity pro-\nviders and the reliant parties. The key concept includes \n “ federated ” management identity, the meaning of which \nis almost the same as the Liberty term’s. 43 Access control \nis fundamentally based on user attributes, validated by \nSAML assertions. In Figure 17.15 , we can see the evolu-\ntion of SAML, Shibboleth, and XACML. 44 \n ID-WSF \n In 2001, a business alliance was formed to serve as an \nopen standards organization for federated identity man-\nagement; it was named the Liberty Alliance. 45 Its goals \nare to guarantee interoperability, support privacy, and \npromote adoption of its specifications, guidelines, and \nbest practices. The key objectives of the Liberty Alliance \n(see Figure 17.16 ) are to: \n ● Enable users to protect their privacy and identity \n ● Enable SPs to manage their clients \n ● Provide an open, federated SSO \n ● Provide a network identity infrastructure that sup-\nports all current emerging network access devices \nUser\nSAML Token\nSAML Token\nSP\nIdP\nAsking Token\nConnection\nTrust\nSAML Response\nSAML Request\n FIGURE 17.13 SAML token exchange. 
\nAssertion Id\nIssuer\nIssue Instant (time stamp)\nValidity Time Limit\nAudience Restriction\nAuthentication Statement\nAuthentication Method\nAuthentication Instant\nUser Account (IdP pseudonym)\nUser Account (SP pseudonym)\nDigital Signature Assertion\nAuthentication\nAssertion\n FIGURE 17.14 SAML assertion. \nSAML 1.0 \n2002/05\nSAML 1.1 \n2003/05\nSAML 2.0 \n2005/03\nXACML 1.0 \n2003/10\nShibboleth 1.0 \n2003/1Q\nAccommodation\nwith XACML\n FIGURE 17.15 Convergence between SAML and Shibboleth. \n 42 Internet2, Shibboleth project, http://shibboleth.Internet2.edu . \n 43 Liberty Developer Tutorial, www.projectliberty.org/resources/ \nLAP_ DIDW_Oct-15_2003_jp.pdf . \n 44 SACML, www.oasis-open.org/committees/tc_home.php?wg_ \nabbrev \u0003 xacml . \n 45 Liberty Alliance, “Liberty ID-FF architecture overview,” Liberty \nAlliance Project, 2005. \n" }, { "page_number": 314, "text": "Chapter | 17 Identity Management\n281\n The Liberty Alliance’s work in the first phase is to \nenable federated network identity management 46 . It \noffers, among other things, SSO and linking accounts \nin the set of SPs in the boundary of the circle of trust. \nThe work of this phase is referred to as the Identity \nFederation Framework (ID-FF). \n In the second phase, the specifications offer enhanc-\ning identity federation and interoperable identity-based \nWeb services. This body is referred to as the Identity Web \nServices Framework (ID-WSF). This framework involves \nsupport of the new open standard such as WS-Security \ndeveloped in OASIS. ID-WSF is a platform for the dis-\ncovery and invocation of identity services — Web services \nassociated with a given identity. In the typical ID-WSF \nuse case, after a user authenticates to an IdP, this fact is \nasserted to an SP through SAML-based SSO. Embedded \nwithin the assertion is information that the SP can option-\nally use to discover and invoke potentially numerous and \ndistributed identity services for that user. Some scenarios \npresent an unacceptable privacy risk because they suggest \nthe possibility of a user’s identity being exchanged with-\nout that user’s consent or even knowledge. ID-WSF has a \nnumber of policy mechanisms to guard against this risk, \nbut ultimately, it is worth noting that many identity trans-\nactions (such as automated bill payments) already occur \nwithout the user’s active real-time consent — and users \nappreciate this efficiency and convenience. \n To build additional interoperable identity services \nsuch as registration services, contacts, calendar, geolo-\ncation services, and alert services, it’s envisaged to use \nID-WSF. This specification is referred to as the Identity \nServices Interface Specification (ID-SIS). \n The Liberty Alliance specifications define the pro-\ntocol messages, profiles, and processing rules for \nidentity federation and management. They rely heav-\nily on other standards such as SAML and WS-Security, \nwhich is another OASIS specification that defines mech-\nanisms implemented in SOAP headers. \n These mechanisms are designed to enhance SOAP \nmessaging by providing a quality of protection through \nmessage integrity, message confidentiality, and sin-\ngle message authentication. Additionally, Liberty has \ncontributed portions of its specification back into the \ntechnical committee working on SAML. 
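The assertion fields listed in Figure 17.14 can be made concrete with a short sketch. The fragment below (Python; it is only an illustration of the structure, not a conformant SAML implementation, and an HMAC stands in for the assertion's digital signature) issues and checks an authentication assertion carrying an issuer, an issue instant, a validity time limit, an audience restriction, and the IdP and SP pseudonyms for the user account.

    # Illustrative authentication assertion modeled on the fields of Figure 17.14;
    # this is a sketch of the idea, not a conformant SAML implementation.

    import hashlib
    import hmac
    import json
    import time
    import uuid

    IDP_SIGNING_KEY = b"idp-signing-key"  # stands in for the IdP's private key

    def issue_authentication_assertion(idp_pseudonym, sp_pseudonym, audience):
        assertion = {
            "assertion_id": str(uuid.uuid4()),
            "issuer": "https://idp.example.org",
            "issue_instant": int(time.time()),
            "validity_time_limit": int(time.time()) + 600,
            "audience_restriction": audience,
            "authentication_statement": {
                "authentication_method": "password",
                "authentication_instant": int(time.time()),
                "user_account_idp_pseudonym": idp_pseudonym,
                "user_account_sp_pseudonym": sp_pseudonym,
            },
        }
        digest = json.dumps(assertion, sort_keys=True).encode()
        assertion["digital_signature"] = hmac.new(
            IDP_SIGNING_KEY, digest, hashlib.sha256).hexdigest()
        return assertion

    def verify_assertion(assertion, expected_audience):
        signature = assertion.pop("digital_signature")
        digest = json.dumps(assertion, sort_keys=True).encode()
        ok = hmac.compare_digest(
            signature, hmac.new(IDP_SIGNING_KEY, digest, hashlib.sha256).hexdigest())
        ok = ok and assertion["audience_restriction"] == expected_audience
        ok = ok and assertion["validity_time_limit"] >= time.time()
        assertion["digital_signature"] = signature  # restore field after checking
        return ok

    if __name__ == "__main__":
        a = issue_authentication_assertion("idp-alias-7", "sp-alias-3",
                                           "https://sp.example.com")
        print(verify_assertion(a, "https://sp.example.com"))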
Other identity \nmanagement enabling standards include: \n ● Service Provisioning Markup Language (SPML) \n ● XML Access Control Markup Language (XACML) \n ● XML Key Management Specification (XKMS) \n ● XML Signature \n ● XML Encryption \n The WS-* (the Web Services protocol specifica-\ntions) are a set of specifications currently under devel-\nopment by Microsoft and IBM. It is part of a larger \neffort to define a security framework for Web services; \nthe results of proposals are often referred to as WS-*. It \nincludes such specifications as WS-Policy, WS-Security \nConversation, WS-Trust, and WS-Federation. This last \none has functionality for enabling pseudonyms and \nattribute-based interactions. Therefore, WS-Trust has the \nability to ensure security tokens as a means of brokering \nidentity and trust across domain boundaries. 47 \n The Liberty Alliance is developing and delivering a \nspecification that enables federate network identity man-\nagement. Figure 17.16 shows an overview of the Liberty \nAlliance architecture as described in the introduction to \nthe Liberty Alliance identity architecture. \n OpenID 2.0 \n Brad Fitzpatrick is at the origin of the development of \nOpenID 1.0. The intent of the OpenID framework is to \nspecify layers that are independent and small enough to \nbe acceptable and adopted by the market. 48 OpenID is \nbasically providing simple attribute sharing for low-value \ntransactions. It does not depend on any preconfigured \ntrust model. Version 1.0 has a deal with an HTTP-based \nURL authentication protocol. OpenID authentication 2.0 \nis becoming an open platform that supports both URL \nand XRI user identifiers. In addition, it would like to be \nmodular, lightweight, and user oriented. Indeed, OpenID \nauth. 2.0 allows users to choose, control and manage \nLiberty\nIdentity\nFederation\nFramework\n(ID-FF)\nLiberty Identity\nServices Interface\nSpecifications\n(ID-SIS)\nLiberty Identity Web\nServices \nFramework (ID-WSF)\nSAML\nWAP\nXML\nSSL\nSOAP\nXML Esig\nHTTP\nWSS\nWSDL\nXML Enc\n FIGURE 17.16 High-level overview of the Liberty Alliance architecture. \n 46 Liberty Alliance, “ Liberty developer tutorial, ” http://www.project-\nliberty.orgwww.projectliberty.org . \n 47 T. Miyata et al., “A survey on identity management protocols and \nstandards,” IEICE TRANS. INF & SYST, 2006. \n 48 David Recordon VeriSign Inc, Drummond Reed, “ OpenID 2.0: A \nplatform for user-centric identity management, ” 2006. \n" }, { "page_number": 315, "text": "PART | II Managing Information Security\n282\ntheir identity addresses. Moreover, the user chooses his \nidentity provider and has a large interoperability of his \nidentity and can dynamically use new services that stand \nout, such as attribute verification and reputation, without \nany loss of features. No software is required on the user’s \nside because the user interacts directly with the identity \nprovider’s site. This approach jeopardizes the user iden-\ntity because it could be hacked or stolen. Also, the user \nhas no ability to examine tokens before they are sent. \n At the beginning of identity management, each tech-\nnology came with its own futures, without any interest \nin others. Later, the OpenID 1.0 community realized the \nimportance of integrating other technologies as OASIS \nExtensible Resource Description Sequence (XRDS), \nwhich is useful for its simplicity and extensibility. \n OpenID Stack \n The first layer is for supporting users ’ identification. 
\nUsing URL or XRI forms, we can identify a user. URLs \nuse IP or DNS resolution and are unique and ubiquitously \nsupported. They can be a personal digital address as used \nby bloggers, even though these are not yet widely used. \n XRI is being developed under the support of OASIS \nand is about addressing. \n I-names are a generic term for XRI authority names \nthat provide abstract identifiers for the entity to which \nthey are assigned. They can be used as the entry point \nto access data under the control of that authority. Like a \ndomain name, the physical location of the information is \ntransparent to the requester. \n OpenID 2.0 provides a private digital address to \nallow a user to be identified only in specific conditions. \nThis guarantees the user privacy in a public domain. \n Discovery \n Yadis is used for identity service discovery for URLs \nand XRI resolution protocol for XRIs. They both use \nthe OASIS format XRDS. The protocol is simple and \ndescribes any type of service. \n Authentication \n This service lets a user prove his URL or I-name using \ncredentials (cryptographic proof). This protocol is \nexplained in Figure 17.17 . The OpenID doesn’t need a \ncentralized authority for enrollment and it is therefore a \nfederated identity management. With OpenID 2.0 the IdP \noffers the user the option of selecting a digital address to \nsend to the SP. To ensure anonymity, IdP can randomly \ngenerate a digital address used specially for this SP. \n Data Transport \n This layer ensures data exchange between the IdP and \nSP. It supports push and pull methods and it is inde-\npendent from authentication procedures. Therefore, the \nsynchronization of data and secure messaging and other \nservices will be enabled. The data formats are those \ndefined by SAML, SDI (XRI Data interchange), or any \nother data formats. This approach will enable evolution \nof the OpenID platform. \n The four layers construct the foundation of the \nOpenID ensuring user centricity (see Figure 17.18 ). \nThere are three points to guarantee this paradigm: \n ● User chooses his digital identity \n ● User chooses IdP \n ● User chooses SP \n OpenID is decentralized and well founded and at the \nsame time simple and easy to use and to deploy. It pro-\nvides an open development process and SSO for the Web \nand ease of integration into scripted Web platforms (such \nas Drupal and WordPress). Thus it has a great future. \nYou can learn about OpenID at www.openidenabled.com \nor join the OpenID community www.openid.net. \nAggregation\nPublic Records\nSecure Messaging\nProfile Exchange\nXRI\nYadis\nOpenID Authentication 2.0\nOpenID Data Transport Protocol\nURL\nOpinity\n FIGURE 17.17 OpenID protocol stack. \nClaimed Id\nIdP\nSP\nUser\nd. SP redirects browser \nto ldP\ng. Connection\na. User URL\ne. User is redirected\nto his IdP to\ncomplete trust\nrequest\n1. IdP redirects user to SP with\ncredential to prove his URL and\nthe data the user has released\nb. SP gets Yadis Doc\nc. SP initiates a trust \nchannel via discovery\n FIGURE 17.18 OpenID 1.1 protocol flow. \n" }, { "page_number": 316, "text": "Chapter | 17 Identity Management\n283\n CardSpace \n Rather than invent another technology for creating and \nrepresenting digital identities, Microsoft has adopted \nthe federated user-centric identity meta-system. This is \na serious solution that provides a consistent way to work \nwith multiple digital identities. 
Using standard protocols \nthat anyone can implement on any platform, the identity \nmeta-system allows the acquisition and use of any kind \nof security tokens to convey identity. \n CardSpace is Microsoft’s code name for this new \ntechnology that tackles the problem of managing and \ndisclosing identity information. CardSpace implements \nthe core of the identity meta-system, using open stand-\nard protocols to negotiate, request, and broker identity \ninformation between trusted IdPs and SPs. CardSpace is \na technology that helps developers integrate a consistent \nidentity infrastructure into applications, Web sites, and \nWeb services. \n By providing a way for users to select identities and \nmore, Windows CardSpace 49 plays an important part in \nthe identity meta-system. It provides the consistent user \nexperience required by the identity meta-system. It is \nspecifically hardened against tampering and spoofing, \nto protect the end user’s digital identities and maintain \nend-user control. Windows CardSpace enables users to \nprovide their digital identities in a familiar, secure, and \neasy way. \n In the terminology of Microsoft, the relying party is \nin the model service provider (SP). \n To prove an identity over a network, the user gives \ncredentials, which are some proofs about her identity. \nFor example, in the simplest digital identity the user-\nname is the identity whereas the password is said to \nbe the authentication credential. In the terminology of \nMicrosoft and others, these are called security tokens \nand contain one or more claims. Each claim contains \ninformation about the user, such as username or home \naddress. In addition, the security token proves that the \nclaims are correctly emitted by the real user and are \nbelonging to him. This could be done cryptographi-\ncally using various forms such as X.509 certificates and \nKerberos tickets, but unfortunately these are not practical \nto convey some kinds of claims. The standard SAML as \nseen before is indicated for this purpose because it can \nbe used to define security tokens. Indeed, SAML tokens \ncould enclose any desired information and thus become \nas largely useful in the network to show and control dig-\nital identity. \n CardSpace runs on Windows Vista, XP, Server 2003, \nand Server 2008, based on .NET3, and uses Web service \nprotocols: \n ● WS-Trust \n ● WS-Policy \n ● WS-SecurityPolicy \n ● WS-MetaDataExchange \n CardSpace runs in a virtual desktop on the PC, \nthereby locking out other processes and reducing the \npossibility of spyware intercepting information. \n Figure 17.19 shows that the architecture exactly fits \nthe principle of Identity 2.0. The user accesses one of \nany of his relying parties (SPs) using an application that \nsupports CardSpace. \n When the choice is made, the application asks for the \nrequirement of a security token of this specific SP that \nwill answer with SP policy. It really contains informa-\ntion about the claims and the accepted token formats. \n Once this is done, the application passes these \nrequirements to CardSpace, which asks for the security \ntoken from an appropriate identity provider. \n Once this security token has been received, CardSpace \ntransmits via application to the relying party. The relying \nparty can then use this token to authenticate the user. \n Note that each identity is emitted by an identity pro-\nvider and is stored on the user side. 
It contains the emit-\nter, the kind of security token the user can issue and the \ndetails about the identity claims. All difficulties are hid-\nden from the user; he has only to choose one CardSpace \nwhen the process of authentication is launched. Indeed, \nonce the required information is returned and passed to \nCardSpace, the system displays the card selection match-\ning the requirements on screen. In this regard, the user \nE. Security Token\nD. Security Token\nA. Ask security Token\nRequirements\nF.Connection\nB. Security Policy\nRelying Party\n(SP)\nIdP\nApplication\nCardSpace\nC. Asking Security Token\nUser\n FIGURE 17.19 Interactions among the users, identity providers, and \nrelying party. \n 49 Microsoft, “A technical ref. for InfoCard in Windows,” http://msdn.\nmicrosoft.com/winfx/reference/infocard/ , 2005. \n" }, { "page_number": 317, "text": "PART | II Managing Information Security\n284\nhas a consistent experience, since all applications based \non CardSpace will have the same interface, and the \nuser does not have to worry about the protocol used to \nexpress his identity’s security token. The PIN is entered \nby the user and the choice of his card is done in a private \nWindows desktop to prevent locally running processes. \n SXIP 2.0 \n In 2004, SXIP 1.0 grew from efforts to build a balanced \nonline identity solution that met the requirements of the \nentire online community. Indeed, SXIP 2.0 is the new \ngeneration of the SXIP 1.0 protocol, a platform that gives \nusers control over their online identities and enables \nonline communities to have richer relationships with their \nmembers. SXIP 2.0 defines entities ’ terminology as: \n ● Home site: URL-based identity given by IdP \n ● Membersite: SP that uses SXIP 2.0 \n ● User: Equivalent to the user in our model \n The Simple eXtensible Identity Protocol (SXIP) 50 \nwas designed to address the principles defined by the \nIdentity 2.0 model (see Figure 17.20 ), which proposes \nan Internet-scalable and user-centric identity architecture \nthat mimics real-world interactions. \n If an SP has integrated an SXIP to its Web site, which \nis easily done using SDKs, it is a Membersite. When a \nsubscriber of SXIP would like to access this Membersite: \n 1. He types his URL address and clicks Sxip in . \n 2. He types his URL identity issued by IdP (called the \nHomesite). \n 3. The browser is redirected to the Homesite. \n 4. He enters his username and password and is \ninformed that the Membersite has requested data, \nselects the related data, verifies it and can select to \nautomatically release data for other visits to this \nMembersite, and confirms. \n 5. The browser is redirected to the Membersite. \n 6. The user gains access to the content of the site. \n SXIP 2.0 is a platform based on a fully decentralized \narchitecture providing an open and simple set of pro cesses \nfor exchanging identity information. SXIP 2.0 has sig-\nnificantly reduced the problems resulting from moving \nidentity data from one site to another. It is a URL-based \nprotocol that allows a seamless user experience and fits \nexactly the user-centric paradigm. In that sense, the user \nhas full control of her identity and has an active role in the \nexchange of her identity data. Therefore, she can use porta-\nble authentication to connect to many Web sites. Doing so, \nthe user has more choice and convenience when exchang-\ning her identity data; this method also indirectly enables \nWeb sites to offer enhanced services to their subscribers. 
\n SXIP 2.0 provides the following features: \n ● Decentralized architecture. SXIP 2.0 is completely \ndecentralized and is a federated identity manage-\nment system. The online identity is URL-based and \nthe user identity is separated from the authority that \nissues the identifiers for this identity. In this regard, \nwe can easily move the location of the identity data \nwithout losing the associated identifier. \n ● Dynamic discovery. A simple and dynamic discovery \nmechanism ensures that users are always informed \nonline about the Homesite that is exporting their \nidentity data. \n ● Simple implementation. SXIP 2.0 is open source \nusing various high-level development languages \nsuch as Perl, Python, PHP, and Java. Therefore, the \nintegration of SXIP 2.0 into a Web site is effortless. \nIt does not require PKI, because it uses a URL-based \nprotocol. \n ● Support for existing technologies. SXIP 2.0 uses \nsimple Web browsers, the primary client and means \nof data exchange, providing users with choices in the \nrelease of their identity data. \n ● Interoperability. SXIP 2.0 can coexist with other \nURL-based protocols. \n ● Richer data on an Internet scale. SXIP 2.0 messages \nconsist of lists of simple name value pairs. It can \nexchange simple text, claims using SAML, and third-\nparty claims in one exchange and present them in many \nseparate exchanges. In addition, the identity provider is \nnot bothersome every time an identity is requested. \nHomesite\nURL-Based Identity (IdP)\nC. Redirection to Homesite\nD. Login/Pwd\nSelection/Confirmation\nUser\nE. Redirection to Membersite\nMembersite\n(SP)\nA, B. Connection to\nMembersite, ID-URL\nC. Redirection to Homesite\nF. Connection \nBrowser\nE. Redirection to Membersite\n FIGURE 17.20 SXIP entity interactions. \n 50 J. Merrels, “SXIP identity. DIX: Digital Identity Exchange \nprotocol.” Internet draft, March 2006. \n" }, { "page_number": 318, "text": "Chapter | 17 Identity Management\n285\n Finally, using SXIP 2.0, Web sites can also be author-\nitative about users regarding data, such as third-party \nclaims. Those are keys to build an online reputation, fur-\nther enriching the online exchange of identity data. \n Higgins \n Higgins 51 is a project supported principally by IBM and \nit is a part of IBM’s Eclipse open-source foundation. It \nwill also offer libraries for Java, C, and C \u0002 \u0002 as well \nas plug-ins for popular browsers. It is really an open-\nsource trust framework, the goals of which are to sup-\nport existing and new applications that give users more \nconvenience, privacy, and control over their identity \ninformation. The objective is to develop an extensible, \nplatform-independent, \nidentity \nprotocol-independent \nsoftware framework that provides a foundation for user-\ncentric identity management. Indeed, it enables appli-\ncations to integrate identity, profiles, and relationships \nacross heterogeneous systems. \n The main goals of Higgins as an identity manage-\nment system are interoperability, security, and privacy in \na decoupled architecture. This system is a true user-cen-\ntric one based on federated identity management. The \nuser has the ability to use a pseudonym or simply reply \nanonymously. \n We use the term context to cover a range of under-\nlying implementations. A context can be thought of as \na distributed container-like object that contains digital \nidentities of multiple people or processes. 
\n The platform intends to address four challenges: \n ● The need to manage multiple contexts \n ● The need for interoperability \n ● The need to respond to regulatory, public, or \ncustomer pressure to implement solutions based on \ntrusted infrastructure that offers security and privacy \n ● The lack of common interfaces to identity/network-\ning systems \n Higgins matches exactly the user-centric paradigms \nbecause it offers a consistent user experience based on \ncard icons for management and release of identity data. \nTherefore there is less vulnerability to phishing and other \nattacks. Moreover, user privacy is enabled by sharing only \nwhat is needed. Thus, the user has full control over his per-\nsonal data. The Identity Attribute Service enables aggrega-\ntion and federation of identity systems and even silos. \n For enterprises, Higgins integrates all data related to \nidentity, profile, reputation, and relationship information \nacross and among complex systems. \n Higgins is a trust framework that enables users and \nenterprises to adopt, share across multiple systems, and \nintegrate to new or existing applications digital iden-\ntity, profiles, and cross-relationship information. In fact, \nit facilitates as well the integration of different identity \nmanagement systems in the management of identity, \nprofile, reputation and relationship data across repositor-\nies. Using context providers, directories and communica-\ntions technologies (such as Microsoft/IBM WS-*, LDAP, \nemail, etc.) can be plugged into the Higgins framework. \nHiggins has become an Eclipse plug-in and is a project \nof the Eclipse Foundation. Any application developed \nwith Higgins will enable users to share identities with \nother users, under strict control. \n Higgins is beneficial for developers, users, and enter-\nprise. It relieves developers of knowing all the details of \nmultiple identity systems, thanks to one API that supports \nmany protocols and technologies: CardSpace, OpenID, XRI, \nLDAP, and so on. An application written to the Higgins API \ncan integrate identity, profile, and relationship information \nacross these heterogeneous systems. The goal of the frame-\nwork is to be useful in the development of applications \naccessed through browsers, rich clients, and Web services. \nThus, the Higgins Project is supported by IBM and Novell \nand attempts to thwart CardSpace, Microsoft’s project. \n The Higgins framework intends to define, in terms of \nservice descriptions, messages and port types consistent \nwith an SOA model and to develop a Java binding and \nimplementation as an initial reference. \n Applications can use Higgins to create a unified, vir-\ntual view of identity, profile, and relationship informa-\ntion. A key focus of Higgins is providing a foundation \nfor a new “ user-centric identity ” and personal informa-\ntion management applications. \n Finally, Higgins provides virtual integration, a user-cen-\ntric federated management model and trust brokering that \nare applied to identity, profile, and relationship informa-\ntion. Furthermore, Higgins provides common interfaces to \nidentity and, thanks to data context, it includes an enhanced \nautomation process. Those features are also offered across \nmultiple contexts, disparate systems, and implementations. \nIn this regard, Higgins is a fully interoperable framework. 
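The idea of one API fronting many identity back ends can be sketched in a few lines. The interfaces below are hypothetical, not the actual Higgins classes; they only illustrate how adapters (the context providers mentioned above) let an application build a single aggregated view over, say, an LDAP directory and a CardSpace-style card store.

    # Hypothetical sketch of the "one API over many identity systems" idea,
    # not the real Higgins interfaces.

    from abc import ABC, abstractmethod

    class ContextProvider(ABC):
        """A context provider adapts one back-end identity source to a common API."""

        @abstractmethod
        def get_claims(self, subject_id: str) -> dict:
            """Return the claims (properties and values) known for a subject."""

    class LdapContextProvider(ContextProvider):
        def __init__(self, entries: dict):
            self.entries = entries          # stands in for an LDAP directory

        def get_claims(self, subject_id: str) -> dict:
            return self.entries.get(subject_id, {})

    class CardStoreContextProvider(ContextProvider):
        def __init__(self, cards: dict):
            self.cards = cards              # stands in for a CardSpace-style card store

        def get_claims(self, subject_id: str) -> dict:
            card = self.cards.get(subject_id, {})
            return card.get("claims", {})

    def unified_view(subject_id: str, providers: list) -> dict:
        """The application sees one virtual, aggregated identity across contexts."""
        merged = {}
        for provider in providers:
            for key, value in provider.get_claims(subject_id).items():
                merged.setdefault(key, value)
        return merged

    if __name__ == "__main__":
        ldap = LdapContextProvider({"alice": {"cn": "Alice", "mail": "alice@corp.example"}})
        cards = CardStoreContextProvider({"alice": {"claims": {"reputation": "gold"}}})
        print(unified_view("alice", [ldap, cards]))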
\n The Higgins service acts together with a set of so-\ncalled context providers that can represent a department, \nassociation, informal network, and so on. A context is the \nHiggins environment and digital identities, the policies \nand protocols that govern their interactions. Context pro-\nviders adjust existing legacy systems to the framework or \nimplement new ones; context providers may also contain \nthe identities of a machine or a human. A context encloses \na group of digital identities and their related claims and \n 51 U. Jendricke et al., Mobile Identity Management, UBICOMP \n2002. \n" }, { "page_number": 319, "text": "PART | II Managing Information Security\n286\nlinks. A context maintains a set of claims about proper-\nties and values (name, address, etc.). It is like a security \ntoken for CardSpace. The set of profile properties, the set \nof roles, and the access rights for each role are defined by \nand controlled by the context provider. \n Context providers act as adapters to existing systems. \nAdapter providers can connect, for example, to LDAP \nservers, identity management systems like CardSpace, \nor mailing list and social networking systems. A Higgins \ncontext provider (see Figure 17.21 ) has the ability to \nimplement the context interface and thus empower the \napplications layered on top of Higgins. \n The 10 requirements in the first column of Table 17.1 \nare those discussed earlier in the chapter. In the table, \nwhite means that the requirement is not covered, and \ngray that it’s fully fulfilled. \n At the moment, service providers have to choose \nbetween many authentications and identity management \nsystems and users are left to face the inconvenience of car-\nrying a variety of digital identities. The main initiatives \nhave different priorities and some unique advantages while \noverlapping in many areas. The most pressing require-\nments for users are interoperability, usability, and centricity. \nThanks to Higgins, the majority of identity requirements \nare guaranteed. Therefore, using Higgins the user is free to \nvisit all Web sites without worrying about the identity man-\nagement system used by the provider. \n 4. IDENTITY 2.0 FOR MOBILE USERS \n Now let’s talk about mobility, its evolution and its \nfuture. \n The number of devices such as mobile phones, smart \ncards, and RFIDs 53 is increasing daily and becoming huge. \nMobile phones have attracted particular interest because of \ntheir large penetration and pervasiveness that exceeds that \nof personal computers. Furthermore, the emergence of both \nIP-TV and wireless technology has facilitated the prolif-\neration of intelligent devices, mobile phones, RFIDs, and \nother forms of information technology that are developing \nat a rapid speed. These devices include a fixed identifier that \ncan be linked to a user’s identity. This identifier provides a \nmobile identity that takes into account information about the \nlocation and the mobile user’s personal data. 54 \n Mobile Web 2.0 \n Mobile Web 2.0 as a content-based service is an \nup-to-date offering of services within the mobile network. \nAs the number of people with access to mobile devices \nexceeds those using a desktop computer, mobile Web will \nbe a key factor for the next-generation network. At the \nmoment, mobile Web suffers from lack of interoperability \nand usability due to the small screen size and lower com-\nputational capability. 
Fortunately, these limitations are only \ntemporary, and within five years they will be easily over-\ncome. The next-generation public networks will converge \ntoward the mobile network, which will bring mobility to \nthe forefront. Thus, mobile identity management will play \na central role in addressing issues such as usability, privacy, \nand security, which are Wkey challenges for researchers in \n• eCommerce (e.g. Amazon, eBay)\n• Social Networking (e.g. LinkedIn)\n• Alumni Web Sites\n• Book Club \n• Family\n• Professional Networks\n• Dating Networks\nSocial\nNetworks\n• Healthcare Provider \n• Sales Force Automation\n• Corporate Directories\nContext\nProviders\nHiggins Trust Framework\nYou\nEmail\nor IM\nCommunities\nof Interest\nBuddy Lists\nWeb Sites\nEnterprise\nApps\nVirtual\nSpaces\n FIGURE 17.21 The Higgins Trust Framework and context. 52 \n 53 S. Garfi nkel and B. Rosenberg, RFID, Applications, Security and \nPrivacy , Addison-Wesley, 2006. \n 54 S. A. Weis et al., “ Security and privacy aspects of low-cost radio \nfrequency identifi cation systems, ” Proc. Of First International \nConference on Security in Pervasive Computing, March 2003 \n 52 Higgins Trust Framework project, www.eclipse.org/higgins/ , 2006. \n" }, { "page_number": 320, "text": "Chapter | 17 Identity Management\n287\n TABLE 17.1 Evaluating identity 2.0 technologies \n Requirement \n XRI/XDI \n ID/WSF Shibboleth \n CardSpace \n OpenID \n SXIP \n Higgins \n Empowering total control of users over their \nprivacy \n \n \n \n \n \n \n \n Usability; users are using the same identity for \neach identity transaction \n \n \n \n \n \n \n \n Giving a consistent user experience due to \nuniformity of identity interface \n \n \n \n \n \n \n \n Limiting identity attacks such as phishing \n \n \n \n \n \n \n \n Limiting reachability/disturbances such as \nspam \n \n \n \n \n \n \n \n Reviewing policies on both sides when \nnecessary, identity providers and service \nproviders \n \n \n \n \n \n \n \n Huge scalability advantages because the \nidentity provider does not have to get any prior \nknowledge about the service provider \n \n \n \n \n \n \n \n Assuring secure conditions when exchanging \ndata \n \n \n \n \n \n \n \n Decoupling digital identity from applications \n \n \n \n \n \n \n \n Pluralism of operators and technologies \n \n \n \n \n \n \n \nthe mobile network. Since the initial launch of mobile Web \nservices, customers have increasingly turned to their wire-\nless phones to connect with family and friends and to obtain \nthe latest news and information or even to produce content \nwith their mobiles and then publish it. Mobile Web 2.0 55 \nis the enforcement of evolution and will enhance the user \nexperience by providing connections in an easier and more \nefficient way. For this reason, it will be welcomed by the \nkey actors as a well-established core service identity man-\nagement tool for the next-generation mobile network. This \nmobile identity management will be used not only to iden-\ntify, acquire, access, and pay for services but also to offer \ncontext-aware services as well as location-based services. \n Mobility \n The mobile identity may not be stored in one location but \ncould be distributed among many locations, authorities, \nand devices. 
\n Indeed, identity is mobile in many respects 56 : \n ● There is device mobility, where a person is using the \nsame identity while using different devices \n ● There is location mobility, where a person is using \nthe same devices while changing location. \n ● There is context mobility, where a person is receiv-\ning services based on different societal roles: as a \nparent, as a professional, and so on. \n The three kinds of mobility are not isolated; they \ninteract often and became concurrently modified, creat-\ning much more complex situations than that implied from \nsingle mode. Mobile identity management addresses three \nmain challenges: usability via context awareness, trust \nbased on the perception of secure operation, and privacy \nprotection. 57 \n Evolution of Mobile Identity \n Mobile identity management is in its infancy. GSM \nnetworks, for example, provide management of \nSubscriber Identity Module (SIM) identities as a kind of \nmobile identity management, but they do not meet all the \nrequirements for complete mobile identity management. \n 55 A. Jaokar and T. Fish, Mobile Web 2.0, A book, 2007. \n 56 G. Roussos and U. Patel, “ Mobile identity management: An \nenacted view, ” Birkbeck College, University of London, 2003. \n 57 G. Roussos and U. Patel, “ Mobile identity management: An \nenacted view, ” Birkbeck College, University of London, 2003. \n" }, { "page_number": 321, "text": "PART | II Managing Information Security\n288\n Unlike static identity, already implemented in the \nWeb 2.0 identity, dynamic aspects, such as the user’s \nposition or the temporal context, gain increasing impor-\ntance for new kinds of mobile applications. 58 \n Mobile identity (MId) infrastructure solutions have \nevolved over time and can be classified into three solutions. \nThe first is just an extension of wired identity management \nto the mobile Internet. This is a widespread solution that \nis limited to the users of mobile devices running the same \noperating system as wired solutions. This limitation is \nexpected to evolve over time, mainly with the large deploy-\nment of Web services. Some specifications, such as Liberty \nAlliance specifications, have been developed for identity \nmanagement, including mobility. However, several limita-\ntions are observed when the MId system is derived from \na fixed context. These limitations are principally due to \nthe assumptions during their design and they do not match \nwell with extra requirements of mobility. 59 \n Many improvements such as interoperability, privacy, \nand security are to come, and also older centralized PKI \nmust be replaced by modern trust management systems, \nor at least a decentralized PKI. \n The second solution is capable of providing an alter-\nnative to the prevalent Internet-derived MId infrastructure \nconsisting of either connected (cellular phones) or uncon-\nnected (smart cards) mobile devices. The third consists of \nusing implantable radiofrequency identifier (RFID) devices. \nThis approach is expected to increase rapidly, even if the \nmarket penetration is smaller than that of cellular phones. \n In addition, the sensitivity risk of data related to differ-\nent applications and services is seldom at the same level, \nand the number of identifiers used by a person is constantly \nincreasing. Thus, there is a real need of different kinds of \ncredentials associated with different kinds of applications. 
\nIndeed, a tool on the user side that’s capable of manag-\ning user credentials and identities is inevitable. With the \nincreasing capacity of CPU power and the spreading \nnumber of mobile phones with SIM cards, a mobile phone \ncan be considered a personal authentication device (PDA). \nCell phones can securely hold user credentials, passwords, \nand even identities. Therefore we introduce a new, efficient \nidentity management device on the user side that’s able to \non one hand facilite memorization and on the other hand, \nstrengthen security by limiting the number of passwords \nand their weaknesses. All wired identity management \ncan be deployed using PADs. In addition, many different \nauthentication architectures, such as dual-channel authenti-\ncation, become possible and easy to implement. \n PADs as a Solution to Strong Authentication \n A PAD 60 is a tamper-resistant hardware device that could \ninclude smart cards and sensors or not. This term was \nused early in the context of security by Wong et al. 61 The \napproach is the same; the only thing changed so far is \nthat the performance of the mobile device has radically \nchanged. This is the opportunity to emphasize the user \ncentricity, since the PAD can strengthen the user experi-\nence and facilitate the automation and system support of \nidentity management on the user side. Figure 17.22 illus-\ntrates the combination of the PAD and silo model. The \nuser stores his identity in the PAD; whenever he would \nlike to connect to a service provider: \n 1. He authenticates himself with a PIN code to use the \nPAD. \n 2. He chooses the password to be used for his connec-\ntion to the specific service provider. \n 3. He launches and logs in to the specific service pro-\nvider by entering his username and password. \n The PAD is a good device to tackle the weakness \nand inconvenience of password authentication. It pro-\nvides a user-friendly and user-centric application and \neven introduces stronger authentication. The fundamental \nSP\nUser Platform\nUser\nPIN\n2\n1\n3\nSP\nSP\nIdP\nIdP\n1\n2\n3\nIdP\n FIGURE 17.22 Integration of a PAD in the silo model. \n 58 M. Hoffmann, “User-centric identity management in open mobile \nenvironments,” Fraunhofer-Institute for Secure Telecooperation (SIT). \n 59 G. Roussos and U. Patel, “ Mobile Identity Management: An \nenacted view, ” Birkbeck College, University of London, 2003. \n 60 A. J ø sang et al., “Trust requirements in identity management,” \nAISW 2005. \n 61 Wong et al. “ Polonius: an identity authentication system, ” \n Proceedings of the 1985 IEEE Symposium on Security and Privacy . \n" }, { "page_number": 322, "text": "Chapter | 17 Identity Management\n289\nadvantage of PADs compared with common PCs using \ncommon operating systems such as Windows or Linux is \nthat PADs have a robust isolation of processes. Therefore, \ncompromising one application does not compromise all \nthe applications. This advantage is becoming less impor-\ntant for mobile phones; as manufacturers introduce flex-\nibility, many vulnerabilities are also introduced. We have \nseen many viruses for mobile phones and even nowadays \nwe have viruses for RFID. This vulnerability can compro-\nmise authentication and even biometrics authentication. \nThat’s why we should be very vigilant in implement-\ning security in PAD devices. An ideal device is the USB \nstick running a standalone OS and integrating a biometric \nreader and mobile network access. 
You can find some of \nthese with fingerprint readers for a reasonable price. \n Two main categories can group many authentication \narchitectures that could be implemented in a PAD. They are \nsingle- and dual-channel authentications. Thereby, the cost, \nrisk, and inconvenience can be tackled at the same time. \n Figure 17.23 illustrates the principle of single-\nchannel authentication, which is the first application of \nthe PAD. Figure 17.24 illustrates the second principle of \ndual-channel authentication, which is more secure. \n Types of Strong Authentication Through \nMobile PADs \n The mobile network, mainly GSM, can help overcome \nmany security vulnerabilities, such as phishing or man-\nin-the-middle (MITM) attacks. It attracts all businesses \nthat would like to deploy dual-channel authentication but \nworry about cost and usability. The near-ubiquity of the \nmobile network has made feasible the utilization of this \napproach, which is even being adopted by some banks. \n SMS-Based One-Time Password (OTP) \n The main advantages of mobile networks are the facil-\nity and usability to send and receive SMSs. Moreover, \nthey could be used to set up and easily download Java \nprograms to a mobile device. In addition, mobile devices \nare using smart cards that can securely calculate and \nstore claims. \n The cost is minimized by adopting a mobile device \nusing SMS to receive OTP instead of special hardware \nthat can generate OTP. \n The scenario implemented by some banks is illus-\ntrated in Figure 17.25 and described as follows. First, the \nuser switches on her mobile phone and enters her PIN \ncode. Then: \n 1. The user logs in to her online account by entering \nher username and password (U/P). \n 2. The Web site receives the U/P. \n 3. The server verifies the U/P. \n 4. The server sends an SMS message with OTP. \n 5. The user reads the message. \n 6. The user enters the OTP into her online account. \n 7. The server verifies the OTP and gives access. \n The problem with this approach is the fact that the \ncost is assumed by the service provider. In addition, some \nUser\nPIN\nUser Platform\nSP\nIdP\n1\n1\n FIGURE 17.23 Single-channel authentication. \nUser\nPIN\n1\n1\nUser Platform\nSP\nMobile Network\nIdP\n FIGURE 17.24 Dual-channel authentication. \nUser\nPIN\n1\n5\n1, 6\n2\n4\nUser Platform\nSP\n3, 7\nIdP\n1\nMobile Network\n FIGURE 17.25 A scenario of SMS double-channel authentication. \n" }, { "page_number": 323, "text": "PART | II Managing Information Security\n290\ndrawbacks are very common, mainly in some developing \ncountries, such as lack of coverage and SMS latency. Of \ncourse, the MITM attack is not overcome by this approach. \n Soft-Token Application \n In this case, the PAD is used as a token emitter. The appli-\ncation is previously downloaded. SMS could be sent to the \nuser to set up the application that will play the role of soft \ntoken. \n The scenario is identical to the SMS, only the user \ngenerates her OTP using the soft token instead of waiting \nfor an SMS message. The cost is less than the SMS-based \nOTP. This approach is a single-channel authentication that \nis not dependent on mobile network coverage nor latency. \nFurthermore, the MITM attack is not tackled. \n Full-Option Mobile Solution \n We have seen in the two previously scenarios that the \nMITM attack is not addressed. 
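The SMS-based and soft-token scenarios just described both reduce, on the server side, to generating a short-lived one-time password, delivering it (or the secret that generates it) to the phone, and checking the value the user types back. A minimal sketch of that server-side step follows (Python; the function names and SMS gateway are hypothetical, and a real deployment would add rate limiting and lockout). As noted, neither delivery channel by itself defeats a man-in-the-middle attacker.

    # Minimal sketch of server-side one-time password handling for the
    # SMS scenario above (steps 3-7). Names and the SMS gateway are invented.

    import hmac
    import secrets
    import time

    otp_store = {}  # username -> (otp, expiry); a real system would persist this

    def send_sms(phone_number: str, text: str):
        """Placeholder for a real SMS gateway call."""
        print(f"SMS to {phone_number}: {text}")

    def start_second_factor(username: str, phone_number: str):
        """After the username/password check succeeds, send a fresh OTP (step 4)."""
        otp = f"{secrets.randbelow(10**6):06d}"          # six-digit one-time password
        otp_store[username] = (otp, time.time() + 180)   # valid for three minutes
        send_sms(phone_number, f"Your login code is {otp}")

    def verify_second_factor(username: str, submitted: str) -> bool:
        """Check the code the user typed back (step 7); each code is single-use."""
        otp, expiry = otp_store.pop(username, (None, 0))
        if otp is None or time.time() > expiry:
            return False
        return hmac.compare_digest(otp, submitted)

    if __name__ == "__main__":
        start_second_factor("alice", "+15550100")
        code = otp_store["alice"][0]      # in reality the user reads it from the SMS
        print(verify_second_factor("alice", code))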
There exists a counterattack \nto this security issue consisting of using the second channel \nto completely control all the transactions over the online \nconnection. Of course, the security of this approach is based \non the assumption that it is difficult for an attacker to steal \na user’s personal mobile phone or attack the mobile net-\nwork. Anyway, we have developed an application to crypt \nthe SMS message, which minimizes the risk of attacks. The \nscenario is illustrated in Figure 17.26 and is as follows: \n 1. The user logs in to his online account using a token. \n 2. The server receives the token. \n 3. The server verifies the token. \n 4. Access is given to the service. \n 5. The user requests a transaction. \n 6. An SMS message is sent with the requested transac-\ntion and a confirmation code. \n 7. The user verifies the transaction. \n 8. He enters the confirmation code. \n 9. The server verifies and executes the transaction. \n 10. The server sends a transaction confirmation. \n The Future of Mobile User-Centric Identity \nManagement in an Ambient Intelligence \nWorld \n Ambient intelligence (AmI) manifests itself through a \ncollection of everyday devices incorporating computing \nand networking capabilities that enable them to interact \nwith each other, make intelligent decisions, and interact \nwith users through user-friendly multimodal interfaces. \nAmI is driven by users ’ needs, and the design of its capa-\nbilities should be driven by users ’ requirements. \n AmI technologies are expected to combine concepts \nof ubiquitous computing and intelligent systems, putting \nhumans in the center of technological developments. \nIndeed, with the Internet extension to home and mobile \nnetworks, the multiplication of modes of connection \nwill make the individual the central point. Therefore, \nuser identity is a challenge in this environment and will \nguarantee infatuation with AmI. Moreover, AmI will be \nthe future environment where we will be surrounded by \nmobile devices that will be increasingly used for mobile \ninteractions with things, places, and people. \n The low cost and the shrinking size of sensors as well \nas the ease of deployment will aid AmI research efforts \nfor rapid prototyping. Evidently, a sensor combined with \nunique biometric identifiers is becoming more frequently \nutilized in accessing systems and supposedly provides \nproof of a person’s identity and thus accountability for \nsubsequent actions. \n To explore these new AmI technologies, it is easiest \nto investigate a scenario related to ubiquitous computing \nin an AmI environment. \n AmI Scenario \n A person with a mobile device, GPS (or equivalent), and \nan ad hoc communication network connected to sensors \nvisits an intelligent environment supermarket and would \nlike to acquire some merchandise. Here we illustrate \nhow this person can benefit from a mobile identity. \n When she enters the supermarket, she is identified \nby means of her mobile device or implemented RFID \ntag, and a special menu is displayed to her. Her profile, \nrelated to her context identity, announces a discount on \ngoods, if there is one. \n The members of her social network could propose \na connection to her, if they are present, and could even \nguide her to their location. Merchandise on display could \nUser\nPIN\n1\n7\n1, 5, 8\n2\n6\n4, 10\nUser Platform\nSP\n3, 9\nIdP\n1\nMobile Network\n FIGURE 17.26 Secure transaction via SMS. 
\n" }, { "page_number": 324, "text": "Chapter | 17 Identity Management\n291\ncommunicate with her device to show prices and details. \nLocation-based services could be offered to quickly find \nher specific articles. \n Her device could help her find diabetic foods or \nany restrictions associated with specific articles. A \nsecure Web connection could be initiated to give more \ninformation about purchases and the user account. The \nsupermarket could use an adaptive screen to show her \ninformation that is too extensive for her device screen. \n Payment could be carried out using the payment \nidentity stored in the user’s device, and even a biomet-\nric identity to prevent identity theft. Identity information \nand profiling should be portable and seamless for inter-\noperability. The identity must be managed to ensure user \ncontrol. Power and performance management in this \nenvironment is a must. The concept of authentication \nbetween electronic devices is also highlighted. \n To use identity management, the user needs an \nappropriate tool to facilitate managing disclosure of per-\nsonal data. A usable and secure tool should be proposed \nto help even inexperienced users manage their general \nsecurity needs when using the network. \n We need mobile identity management, which is a \nconcept that allows the user to keep her privacy, depend-\ning on the situation. By using identity management, the \nuser’s device acts in a similar way to the user. In differ-\nent contexts, the user presents a different appearance. \nDevices controlled by identity management change their \nbehavior, similar to the way a user would. \n Requirements for Mobile User-Centric \nIdentity Management in an AmI World \n As the network evolution is toward mobility with the \nproliferation of ubiquitous and pervasive computing sys-\ntems, the importance of identity management to build \ntrust relationships in the context of electronic and mobile \n(e/m) government and business is evident. 62 , 63 All these \nsystems require advanced, automated identity manage-\nment processes to be cost effective and easy to use. \n Several mobile devices such as mobile phones, smart \ncards, and RFID are used for mobility. Because mobile \ndevices have fixed identifiers, they are essentially provid-\ning a mobile identity that can be likened to a user. Mobile \nidentity takes into account location data of mobile users \nin addition to their personal data. A court decision in the \nUnited Kingdom established, as proof of location of the \naccused, the location trace of his mobile phone, which \nimplies a de facto recognition of the identity of a citizen \nas the identity of his mobile telephone. 64 \n That is why mobile identity management (MIdm) \nis necessary to empower mobile users to manage their \nmobile identities, to enforce their security and privacy \ninterests. Mobile identity management is a special kind of \nidentity management. For this purpose, mobile users must \nbe able to control the disclosure of their mobile identities, \ndependent on the respective service provider, and their \nlocation via mobile identity management systems. \n Ambient intelligence emphasizes the principles of \nsecure communication anywhere, anytime, with anything. \nThe evolution of AmI will directly influence identity man-\nagement with this requirement to ensure mutual interaction \nbetween users and things. Being anywhere will imply more \nand more mobility, interoperability, and profiling. 
At any \ntime will imply that online as well as offline connection \nbecause the network does not have 100% coverage and \nwill imply power as well as performance management to \noptimize battery use. With anything will imply sensor use, \nbiometrics, and RFID interaction; and securely implies \nmore and more integration of privacy, authentication, ano-\nnymity, and prevention of identity theft. \n From multilateral security, 65 , 66 Jendricke 67 has \nderived privacy principles for MIdm; we have completed \nthem below with a few other important principles. \n Management systems: \n 1. Context-detection \n ● Sensors \n ● Biometrics \n ● RFID \n 2. Anonymity \n 3. Security \n ● Confidentiality \n ● Integrity \n ● Nonrepudiation \n ● Availability \n 4. Privacy \n ● Protection of location information \n 5. Trustworthiness \n ● Segregation of power, separating knowledge, \nintegrating independent parties \n ● Using open source \n 62 MyGrocer Consortium, “MyGrocer white paper,” 2002. \n 63 M. Wieser, “ The computer for the twenty-fi rst century, ” Scientifi c \nAmerican , 1991. \n 64 G. Roussos and U. Patel, “ Mobile identity management: An \nenacted view, ” Birkbeck College, University of London, 2003. \n 65 K. Rannenberg, “ Multilateral security? A concept and examples for \nbalanced security, ” Proc. 9th ACM New Security Paradigms Workshop, \n2000. \n 66 K. Reichenbach et al. “ Individual management of personal reach-\nability in mobile communications, ” Proc. IFIP TC11 (September ’ 97). \n 67 U. Jendricke et al., “Mobile identity management,” UBICOMP 2002. \n" }, { "page_number": 325, "text": "PART | II Managing Information Security\n292\n ● Trusted seals of approval seal \n 6. Law enforcement/liability \n ● Digital evidence \n ● Digital signatures \n ● Data retention \n 7. Usability \n ● Comfortable and informative user interfaces \n ● Training and education \n ● Reduction of system complexity \n ● Raising awareness \n 8. Affordability \n ● Power of market: Produce Mobile Identity \nManagement Systems (MIMS) that are competi-\ntive and are able to reach a remarkable penetra-\ntion of market \n ● Using open-source building blocks \n ● Subsidies for development, use, operation, etc. \n 9. Power management: the energy provided by the \nbatteries of mobile devices is limited and that \nenergy must be used with care on energy-friendly \napplications and services \n 10. Online and offline identity proof \n 11. Small screen size and lower computational capability \n 12. Interoperability \n ● Identity needs to be portable to be understood \nby any device \n Research Directions \n Future identity management solutions will play a more \ncentral role in the IT industry due to the pervasiveness and \nincreased presence of identity information in all compo-\nnents of the IT stack. The Liberty Alliance specifications \nprovide a standardized solution to the problem of lack of \ncapability for mobile identity. The specified architecture \nwill have an impact on the architecture of mobile services. \nHowever, many open issues remain to be considered. \n There will be many issues raised concerning identity \nand mobility. All strong platforms combining identity \nmanagement and mobility play a central role and will \nbe key elements for the progress of the ambient intelli-\ngence network; they will also represent the main vehicle \nfor the information society. \n Here we list a few of the most important research \nquestions in the field of mobile user-centric identity \nmanagement: \n ● Requirements. 
How can we satisfy the requirements \nof mobile identity management in mobile systems \nand devices? \n ● Mobility . How can we manage identity when the \ndevice is offline? How can we manage biometric \ninformation? How can mobility and biometrics help \nin authentication? \n ● Privacy . What are the needed identity management \ncontrols to preserve individual privacy? How can we \nguarantee anonymity? How can we protect the user \nfrom location/activity tracking? \n ● Forensic science. What is the reliability of the \nidentity management system, and how can evidence \nextracted from the system be used in court? What \nprotections are in place for the identity holder or for \nthe relaying party? \n ● Costs of infrastructure. How can we limit the cost of \nmobile identity management? \n ● Interoperability . How can we evolve toward higher \nlevels of interoperability? \n ● Efficiencies . How can we efficiently integrate \nsensors, RFIDs, and biometrics into mobile identity \nmanagement systems? How can we manage \nperformance of mobile devices? How can we \nintegrate usability into mobile identity usage? \n ● Identity theft. Does large-scale deployment of \nidentity management systems make it easier or \nharder to perpetrate identity theft or identity fraud? \nHow can we deal with theft of devices or identities? \n ● Longevity of information. Do mobile identity \nmanagement systems provide adequate care in \ntracking changes in identity information over time? \n ● Authenticity of identity. What are the trust services \nthat must be in place to generate confidence in the \nidentity management service? \n 5. CONCLUSION \n The Internet is increasingly used, but the fact that the \nInternet has not been developed with an adequate iden-\ntity layer is a major security risk. Password fatigue and \nonline fraud are a growing problem and are damaging \nuser confidence. \n Currently, major initiatives are under way to try to \nprovide a more adequate identity layer for the Internet, \nbut their convergence has not yet been achieved. Higgins \nand Liberty Alliance seem to be the most promising ones. \n In any case, future identity management solutions \nwill have to work in mobile computing settings, any-\nwhere and anytime. \n This chapter has underlined the necessity of mobility \nand the importance of identity in future ambient intelli-\ngent environments. Mobile identity management will have \nto support a wide range of information technologies and \ndevices with critical requirements such as usability on the \nmove, privacy, scalability, and energy-friendliness. \n" }, { "page_number": 326, "text": "293\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Intrusion Prevention and Detection \nSystems \n Christopher Day \n Terremark Worldwide, Inc. \n Chapter 18 \n With the increasing importance of information systems \nin today’s complex and global economy, it has become \nmission and business critical to defend those information \nsystems from attack and compromise by any number of \nadversaries. Intrusion prevention and detection systems \nare critical components in the defender’s arsenal and \ntake on a number of different forms. Formally, intrusion \ndetection systems (IDSs) can be defined as “ software or \nhardware systems that automate the process of monitor-\ning the events occurring in a computer system or net-\nwork, analyzing them for signs of security problems. 
” 1 \nIntrusion prevention systems (IPSs) are systems that \nattempt to actually stop an active attack or security prob-\nlem. Though there are many IDS and IPS products on \nthe market today, often sold as self-contained, network-\nattached computer appliances, truly effective intrusion \ndetection and prevention are achieved when viewed as a \nprocess coupled with layers of appropriate technologies \nand products. In this chapter, we will discuss the nature \nof computer system intrusions, those who commit these \nattacks, and the various technologies that can be utilized \nto detect and prevent them. \n 1. WHAT IS AN “ INTRUSION, ” \nANYWAY? \n Information security concerns itself with the confiden-\ntiality, integrity, and availability of information systems \nand the information or data they contain and process. An \nintrusion, then, is any action taken by an adversary that \nhas a negative impact on the confidentiality, integrity, or \navailability of that information. \n Given such a broad definition of “ intrusion, ” it is \ninstructive to examine a number of commonly occurring \nclasses of information system (IS) intrusions. \n Physical Theft \n Having physical access to a computer system allows an \nadversary to bypass most security protections put in place \nto prevent unauthorized access. By stealing a compu-\nter system, the adversary has all the physical access he \ncould want, and unless the sensitive data on the system is \nencrypted, the data is very likely to be compromised. This \nissue is most prevalent with laptop loss and theft. Given \nthe processing and storage capacity of even low-cost lap-\ntops today, a great deal of sensitive information can be put \nat risk if a laptop containing this data is stolen. In May \n2006, for example, it was revealed that over 26 million \nmilitary veterans ’ personal information, including names, \nSocial Security numbers, addresses, and some disability \ndata, was on a Veteran Affairs staffer’s laptop that was \nstolen from his home. 2 The stolen data was of the type \nthat is often used to commit identity theft, and due to the \nlarge number of impacted veterans, there was a great deal \nof concern about this theft and the lack of security around \nsuch a sensitive collection of data. \n Abuse of Privileges (The Insider Threat) \n An insider is an individual who, due to her role in the \norganization, has some level of authorized access to the \nIS environment and systems. The level of access can \nrange from that of a regular user to a systems adminis-\ntrator with nearly unlimited privileges. When an insider \n 1 “ NIST special publication on intrusion detection systems, ” NIST, \nWashington, D.C., 2006. \n 2 M. Bosworth, “ VA loses data on 26 million veterans, ” Consumeraffairs.\ncom, www.consumeraffairs.com/news04/2006/05/va_laptop.html , 2006. \n" }, { "page_number": 327, "text": "PART | II Managing Information Security\n294\n A Definition of Personally Identifiable Information \n Personally identifiable information (PII) is a set of infor-\nmation such as name, address, Social Security number, \nfinancial account number, credit-card number, and \ndriver’s license number. This class of information is con-\nsidered particularly sensitive due to its value to identity \nthieves and others who commit financial crimes such \nas credit-card fraud. Most U.S. 
states have some form \nof data breach disclosure law that imposes a burden of \nnotification on any organization that suffers unauthor-\nized access, loss, or theft of unencrypted PII. It is worth \nnoting that all the current laws provide a level of “ safe \nharbor ” for organizations that suffer a PII loss if the PII \nwas encrypted. California’s SB1386 was the first and \narguably most well known of the disclosure laws. \nabuses her privileges, the impact can be devastating. \nEven a relatively limited-privilege user is already start-\ning with an advantage over an outsider due to that user’s \nknowledge of the IS environment, critical business pro-\ncesses, and potential knowledge of security weaknesses \nor soft spots. An insider may use her access to steal \nsensitive data such as customer databases, trade secrets, \nnational security secrets, or personally identifiable infor-\nmation (PII), as discussed in the sidebar, “ A Definition \nof Personally Identifiable Information. ” Because she is \na trusted user, and given that many IDSs are designed \nto monitor for attacks from outsiders, an insider’s privi-\nleged abuse can go on for a long time unnoticed, thus \ncompounding the damage. An appropriately privileged \nuser may also use her access to make unauthorized \nmodifications to systems, which can undermine the \nsecurity of the environment. These changes can range \nfrom creating “ backdoor ” accounts to preserving access \nin the event of termination to installing so-called logic \nbombs, which are programs designed to cause damage to \nsystems or data at some predetermined point in time, \noften as a form of retribution for some real or perceived \nslight. \n 2. UNAUTHORIZED ACCESS BY AN \nOUTSIDER \n An outsider is considered anyone who does not have \nauthorized access privileges to an information system or \nenvironment. To gain access, the outsider may try to gain \npossession of valid system credentials via social engineer-\ning or even by guessing username and password pairs in a \nbrute-force attack. Alternatively, the outsider may attempt \nto exploit a vulnerability in the target system to gain \naccess. Often the result of successfully exploiting a system \nvulnerability leads to some form of high-privileged access \nto the target, such as an Administrator or Administrator-\nequivalent account on a Microsoft Windows system or a \nroot or root-equivalent account on a Unix- or Linux-based \nsystem. Once an outsider has this level of access on a \nsystem, he effectively “ owns ” that system and can steal \ndata or use the system as a launching point to attack other \nsystems. \n 3. MALWARE INFECTION \n Malware (see sidebar, “ Classifying Malware ” ) can be \ngenerally defined as “ a set of instructions that run on \nyour computer and make your system do something that \nallows an attacker to make it do what he wants it to do. ” 3 \nHistorically, malware in the form of viruses and worms \nwas more a disruptive nuisance than a real threat, but \nit has been evolving as the weapon of choice for many \nattackers due to the increased sophistication, stealthi-\nness, and scalability of intrusion-focused malware. Today \nwe see malware being used by intruders to gain access \nto systems, search for valuable data such as PII and \npasswords, monitor real-time communications, provide \nremote access/control, and automatically attack other \nsystems, just to name a few capabilities. 
Using malware \nas an attack method also provides the attacker with a \n “ stand-off ” capability that reduces the risk of identifica-\ntion, pursuit, and prosecution. By “ stand-off ” we mean \nthe ability to launch the malware via a number of anon-\nymous methods such as an insecure, open public wire-\nless access point. Once the malware has gained access \nto the intended target or targets, the attacker can man-\nage the malware via a distributed command and control \nsystem such as Internet Relay Chat (IRC). Not only does \nthe command and control network help mask the loca-\ntion and identity of the attacker, it also provides a scal-\nable way to manage many compromised systems at once, \nmaximizing the results for the attacker. In some cases the \nnumber of controlled machines can be astronomical, such \nas with the Storm worm infection, which, depending on \nthe estimate, ranged somewhere between 1 million and 10 \n million compromised systems. 4 These large collections of \ncompromised systems are often referred to as botnets . \n 3 E. Skoudis, Malware: Fighting Malicious Code , Prentice Hall, 2003. \n 4 P. Gutman, “ World’s most powerful supercomputer goes online, ” Full \nDisclosure , http://seclists.org/fulldisclosure/2007/Aug/0520.html , 2007. \n" }, { "page_number": 328, "text": "Chapter | 18 Intrusion Prevention and Detection Systems\n295\n 4. THE ROLE OF THE “ 0-DAY ” \n The Holy Grail for vulnerability researchers and exploit \nwriters is to discover a previously unknown and exploita-\nble vulnerability, often referred to as a 0-day exploit (pro-\nnounced zero day or oh day ). Given that the vulnerability \nhas not been discovered by others, all systems running the \nvulnerable code will be unpatched and possible targets for \nattack and compromise. The danger of a given 0-day is \na function of how widespread the vulnerable software is \nand what level of access it gives the attacker. For example, \na reliable 0-day for something as widespread as the ubiq-\nuitous Apache Web server that somehow yields root- or \nAdministrator-level access to the attacker is far more dan-\ngerous and valuable than an exploit that works against an \nobscure point-of-sale system used by only a few hundred \nusers (unless the attacker’s target is that very set of users). \n In addition to potentially having a large, vulnerable \ntarget set to exploit, the owner of a 0-day has the advan-\ntage that most intrusion detection and prevention systems \nwill not trigger on the exploit for the very fact that it has \nnever been seen before and the various IDS/IPS technol-\nogies will not have signature patterns for the exploit yet. \nWe will discuss this issue in more detail later. \n It is this combination of many unpatched targets and \nthe ability to potentially evade many forms of intrusion \ndetection and prevention systems that make 0-days such \na powerful weapon in the hands of attackers. Many legiti-\nmate security and vulnerability researchers explore soft-\nware systems to uncover 0-days and report them to the \nappropriate software vendor in the hopes of preventing \nmalicious individuals from finding and using them first. \nThose who intend to use 0-days for illicit purposes guard \nthe knowledge of a 0-day very carefully lest it become \nwidely and publically known and effective countermeas-\nures, including vendor software patches, can be deployed. \n 5 P. 
Gutman, “ World’s most powerful supercomputer goes online, ” Full \nDisclosure , http://seclists.org/fulldisclosure/2007/Aug/0520.html , 2007. \n Malware takes many forms but can be roughly classified by \nfunction and replication method: \n ● Virus. Self-replicating code that attaches itself to another \nprogram. It typically relies on human interaction to start \nthe host program and activate the virus. A virus usually \nhas a limited function set and its creator has no further \ninteraction with it once released. Examples are Melissa, \nMichelangelo, and Sobig. \n ● Worm. Self-replicating code that propagates over a net-\nwork, usually without human interaction. Most worms \ntake advantage of a known vulnerability in systems and \ncompromise those that aren’t properly patched. Worm \ncreators have begun experimenting with updateable \ncode and payloads, such as seen with the Storm worm. 5 \nExamples are Code Red, SQL Slammer, and Blaster. \n ● Backdoor. A program that bypasses standard security \ncontrols to provide an attacker access, often in a stealthy \nway. Backdoors rarely have self-replicating capability \nand are installed either manually by an attacker after \ncompromising a system to facilitate future access or by \nother self-propagating malware as payload. Examples \nare Back Orifice, Tini, and netcat (netcat has legitimate \nuses as well). \n ● Trojan horse. A program that masquerades as a legiti-\nmate, useful program while performing malicious func-\ntions in the background. Trojans are often used to steal \ndata or monitor user actions and can provide a back-\ndoor function as well. Examples of two well-known pro-\ngrams that have had Trojan versions circulated on the \nInternet are tcpdump and Kazaa. \n ● User-level root kit. Trojan/backdoor code that modi-\nfies operating system software so that the attacker can \nmaintain privileged access on a machine but remain \nhidden. For example, the root kit will remove malicious \npro cesses from user-requested process lists. This form of \nroot kit is called user-level because it manipulates oper-\nating system components utilized by users. This form \nof root kit can often be uncovered by the use of trusted \ntools and software since the core of the operating sys-\ntem is still unaffected. Examples of user-level root kits \nare the Linux Rootkit (LRK) family and FakeGINA. \n ● Kernel-level root kit. Trojan/backdoor code that modifies \nthe core or kernel of the operating system to provide the \nintruder the highest level of access and stealth. A ker-\nnel-level root kit inserts itself into the core of the operat-\ning system, the kernel, and intercepts system calls and \nthus can remain hidden even from trusted tools brought \nonto the system from the outside by an investigator. \nEffectively nothing the compromised system tells a user \ncan be trusted, and detecting and removing kernel-level \nroot kits is very difficult and often requires advanced \ntechnologies and techniques. Examples are Adore and \nHacker Defender. \n ● Blended malware. More recent forms of malware com-\nbine features and capabilities discussed here into one \nprogram. For example, one might see a Trojan horse \nthat, once activated by the user, inserts a backdoor uti-\nlizing user-level root-kit capabilities to stay hidden and \nprovide a remote handler with access. Examples of \nblended malware are Lion and Bugbear. 
\n Classifying Malware \n" }, { "page_number": 329, "text": "PART | II Managing Information Security\n296\n One of the more disturbing issues regarding 0-days is \ntheir lifetimes. The lifetime of a 0-day is the amount of \ntime between the discovery of the vulnerability and pub-\nlic disclosure through vendor or researcher announce-\nment, mailing lists, and so on. By the very nature of \n0-day discovery and disclosure it is difficult to get reli-\nable statistics on lifetimes, but one vulnerability research \norganization claims their studies indicate an average \n0-day lifetime of 348 days. 6 Hence, if malicious attackers \nhave a high-value 0-day in hand, they may have almost a \nyear to put it to most effective use. If used in a stealthy \nmanner so as not to tip off system defenders, vendors, \nand researchers, this sort of 0-day can yield many high-\nvalue compromised systems for the attackers. Though \nthere has been no official substantiation, there has been \na great deal of speculation that the Titan Rain series of \nattacks against sensitive U.S. government networks \nbetween 2003 and 2005 utilized a set of 0-days against \nMicrosoft software. 7 , 8 \n 5. THE ROGUE’S GALLERY: ATTACKERS \nAND MOTIVES \n Now that we have examined some of the more common \nforms computer system intrusions take, it is worthwhile \nto discuss the people who are behind these attacks and \nattempt to understand their motivations. The appropriate \nselection of intrusion detection and prevention technolo-\ngies is dependent on the threat being defended against, \nthe class of adversary, and the value of the asset being \nprotected. \n Though it is always risky to generalize, those who \nattack computer systems for illicit purposes can be \nplaced into a number of broad categories. At minimum \nthis gives us a “ capability spectrum ” of attackers to \nbegin to understand motivations and, therefore, threats: \n ● Script kiddy. The pejorative term script kiddy is used \nto describe those who have little or no skill at writing \nor understanding how vulnerabilities are discovered \nand exploits are written, but download and utilize \nothers ’ exploits, available on the Internet, to attack \nvulnerable systems. Typically, script kiddies are not a \nthreat to a well-managed, patched environment since \nthey are usually relegated to using publicly known \nand available exploits for which patches and detec-\ntion signatures already exist. \n ● Joy rider. This type of attacker is often represented \nby people with potentially significant skills in \ndiscovering vulnerabilities and writing exploits but \nwho rarely have any real malicious intent when they \naccess systems they are not authorized to access. \nIn a sense they are “ exploring ” for the pleasure of \nit. However, though their intentions are not directly \nmalicious, their actions can represent a major source \nof distraction and cost to system administrators, who \nmust respond to the intrusion anyway, especially \nif the compromised system contained sensitive \ndata such as PII where a public disclosure may be \nrequired. \n ● Mercenary. Since the late 1990s there has been a \ngrowing market for those who possess the skills to \ncompromise computer systems and are willing to \nsell them from organizations willing to purchase \nthese skills. 9 Organized crime is a large consumer \nof these services. 
Computer crime has seen a \nsignificant increase in both frequency and severity \nover the last decade, primarily driven by direct, \nillicit financial gain and identity theft. 10 In fact, \nso successful have these groups become that a \nfull-blown market has emerged, including support \norganizations offering technical support for rented \nbotnets and online trading environments for the \nexchange of stolen credit card data and PII. Stolen \ndata has a tangible financial value, as shown in \n Table 18.1 , which indicates the dollar-value ranges \nfor various types of PII. \n ● Nation-state backed. Nations performing espionage \nagainst other nations do not ignore the potential for \nintelligence gathering via information technology \nsystems. Sometimes this espionage takes the form \nof malware injection and system compromises such \nas the previously mentioned Titan Rain attack; \nother times it can take the form of electronic \ndata interception of unencrypted email and other \nmessaging protocols. A number of nations have \ndeveloped or are developing an information warfare \ncapability designed to impair or incapacitate an \nenemy’s Internet-connected systems, command-\nand-control systems, and other information \n 6 J.Aitel, “ The IPO of the 0-day, ” www.immunityinc.com/downloads/\n0day_IPO.pdf , 2007. \n 7 M. H. Sachs, “ Cyber-threat analytics, ” www.cyber-ta.org/down-\nloads/fi les/Sachs_Cyber-TA_ThreatOps.ppt , 2006. \n 8 J. Leyden, “ Chinese crackers attack US.gov, ” The Register , www.\ntheregister.co.uk/2006/10/09/chinese_crackers_attack_us/ , 2006. \n 9 P. Williams, “ Organized crime and cyber-crime: Implications for \nbusiness, ” www.cert.org/archive/pdf/cybercrime-business.pdf , 2002. \n 10 C. Wilson, “ Botnets, cybercrime, and cyberterrorism: Vulnerabilities and \npolicy issues for Congress, ” http://fas.org/sgp/crs/terror/RL32114.pdf , 2008. \n" }, { "page_number": 330, "text": "Chapter | 18 Intrusion Prevention and Detection Systems\n297\n TABLE 18.1 PII Values \n Goods and Services \n Percentage \n Range of Prices \n Financial Accounts \n 22% \n $10 – $1,000 \n Credit Card Information \n 13% \n $.40 – $20 \n Identity Information \n 9% \n $1 – $15 \n eBay Accounts \n 7% \n $1 – $8 \n Scams \n 7% \n $2.5 – $50/week for hosting, $25 for design \n Mailers \n 6% \n $1 – $10 \n Email Addresses \n 5% \n $.83 – $10/MB \n Email Passwords \n 5% \n $4 – $30 \n Drop (request or offer) \n 5% \n 10% – 50% of drop amount \n Proxies \n 5% \n $1.50 – $30 \n (Compiled from Miami Electronic Crimes Task Force and Symantec Global Internet Security Threat Report (2008)) \ntechnology capability. 11 These sorts of capabilities \nwere demonstrated in 2007 against Estonia, allegedly \nby Russian sympathizers utilizing a sustained series \nof denial-of-service attacks designed to make certain \nWeb sites unreachable as well as to interfere with \nonline activities such as email and mission-critical \nsystems such as telephone exchanges. 12 \n 6. A BRIEF INTRODUCTION TO TCP/IP \n Throughout the history of computing, there have been \nnumerous networking protocols, the structured rules \ncomputers use to communicate with each other, but \nnone have been as successful and become as ubiquitous \nas the Transmission Control Protocol/Internet Protocol \n(TCP/IP) suite of protocols. TCP/IP is the protocol suite \nused on the Internet, and the vast majority of enter-\nprise and government networks have now implemented \nTCP/IP on their networks. 
Due to this ubiquity almost \nall attacks against computer systems today are designed \nto be launched over a TCPI/IP network, and thus the \nmajority of intrusion detection and prevention systems \nare designed to operate with and monitor TCP/IP-based \nnetworks. Therefore, to better understand the nature \nof these technologies it is important to have a working \nknowledge of TCP/IP. Though a complete description \nof TCP/IP is beyond the scope of this chapter, there are \nnumerous excellent references and tutorials for those \ninterested in learning more. 13 , 14 \n Three features that have made TCP/IP so popular and \nwidespread are 15 : \n ● Open protocol standards that are freely available. \nThis and independence from any particular operating \nsystem or computing hardware means that TCP/IP \ncan be deployed on nearly any computing device \nimaginable. \n ● Hardware, transmission media, and device \nindependence . TCP/IP can operate over numerous \nphysical devices and network types such as Ethernet, \nToken Ring, optical, radio, and satellite. \n ● A consistent and globally scalable addressing \nscheme. This ensures that any two uniquely \naddressed network nodes can communicate with \neach other (notwithstanding any traffic restrictions \nimplemented for security or policy reasons), even if \nthose nodes are on different sides of the planet. \n 11 M. Graham, “ Welcome to cyberwar country, USA, ” WIRED , \n www.wired.com/politics/security/news/2008/02/cyber_command , \n2008. \n 12 M. Landler and J. Markoff, “ Digital fears emerge after data siege in \nEstonia, ” New York Times , www.nytimes.com/2007/05/29/technology/\n29estonia.html , 2007. \n 13 R. Stevens, TCP/IP Illustrated, Volume 1: The Protocols, Addison-\nWesley Professional, 1994. \n 14 D. E. Comer, Internetworking with TCP/IP Vol. 1: Principles, \nProtocols, and Architecture , 4 th ed., Prentice Hall, 2000. \n 15 C. Hunt, TCP/IP Network Administration , 3 rd ed., O’Reilly Media, \nInc., 2002. \n" }, { "page_number": 331, "text": "PART | II Managing Information Security\n298\n 7. THE TCP/IP DATA ARCHITECTURE \nAND DATA ENCAPSULATION \n The best way to describe and visualize the TCP/IP proto-\ncol suite is to think of it as a layered stack of functions, \nas in Figure 18.1 . \n Each layer is responsible for a set of services and \ncapabilities provided to the layers above and below it. \nThis layered model allows developers and engineers to \nmodularize the functionality in a given layer and mini-\nmize the impacts of changes on other layers. Each layer \nperforms a series of functions on data as it is prepared \nfor network transport or received from the network. The \nway those functions are performed internal to a given \nlayer is hidden from the other layers, and as long as the \nagreed rules and standards are adhered to with regard to \nhow data is passed from layer to layer, the inner work-\nings of a given layer are isolated from any other layer. \n The Application Layer is concerned with applications \nand processes, including those with which users interact, \nsuch as browsers, email, instant messaging, and other \nnetwork-aware programs. There can also be numerous \napplications in the Application Layer running on a com-\nputer system that interact with the network, but users \nhave little interaction with such as routing protocols. \n The Transport Layer is responsible for handling data \nflow between applications on different hosts on the net-\nwork. 
There are two Transport protocols in the TCP/IP \nsuite: the Transport Control Protocol (TCP) and the \nUser Datagram Protocol (UDP). TCP is a connection- \nor session-oriented protocol that provides a number of \nservices to the application, such as reliable delivery via \nPositive Acknowledgment with Retransmission (PAR), \npacket sequencing to account for out-of-sequence receipt \nof packets, receive buffer management, and error detec-\ntion. In contrast, UDP is a low-overhead, connectionless \nprotocol that provides no delivery acknowledgment or \nother session services. Any necessary application reli-\nability must be built into the application, whereas with \nTCP, the application need not worry about the details of \npacket delivery. Each protocol serves a specific purpose \nand allows maximum flexibility to application devel-\nopers and engineers. There may be numerous network \nservices running on a computer system, each built on \neither TCP or UDP (or, in some cases, both), so both \nprotocols utilize the concept of ports to identify a spe-\ncific network service and direct data appropriately. For \nexample, a computer may be running a Web server, and \nstandard Web services are offered on TCP port 80. That \nsame computer could also be running an email system \nutilizing the Simple Mail Transport Protocol (SMTP), \nwhich is by standard offered on TCP port 25. Finally, \nthis server may also be running a Domain Name Service \n(DNS) server on both TCP and UDP port 53. As can be \nseen, the concept of ports allows multiple TCP and UDP \nservices to be run on the same computer system without \ninterfering with each other. \n The Network Layer is primarily responsible for \npacket addressing and routing through the network. The \nInternet Protocol (IP) manages this process within the \nTCP/IP protocol suite. One very important construct \nfound in IP is the concept of an IP address. Each sys-\ntem running on a TCP/IP network must have at least \none unique address for other computer systems to direct \ntraffic to it. An IP address is represented by a 32-bit \nnumber, which is usually represented as four integers \nranging from 0 to 255 separated by decimals, such as \n192.168.1.254. This representation is often referred to \nas a dotted quad . The IP address actually contains two \npieces of information: the network address and the node \naddress. To know where the network address ends and \nthe node address begins, a subnet mask is used to indi-\ncate the number of bits in the IP address assigned to the \nnetwork address and is usually designated as a slash \nand a number, such as /24. If the example address of \n192.168.1.254 has a subnet mask of /24, we know that \nthe network address is 24 bits, or 192.168.1, and the \nnode address is 254. If we were presented with a subnet \nmask of /16, we would know that the network address is \n192.168 while the node address is 1.254. Subnet mask-\ning allows network designers to construct subnets of \nvarious sizes, ranging from two nodes (a subnet mask of \n/30) to literally millions of nodes (a subnet of /8) or any-\nthing in between. The topic of subnetting and its impact \nApplication Layer\nTransport Layer\nNetwork Layer\nPhysical Layer\nData Flow\n FIGURE 18.1 TCP/IP data architecture stack. \n" }, { "page_number": 332, "text": "Chapter | 18 Intrusion Prevention and Detection Systems\n299\non addressing and routing is a complex one, and the \ninterested reader is referred to 16 for more detail. 
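To make the dotted-quad and subnet-mask arithmetic above concrete, the short sketch below uses Python's standard ipaddress module to split the example address 192.168.1.254 under the two prefix lengths discussed in the text; the printed values are illustrative.

```python
import ipaddress

# The example address and prefix lengths used in the text.
for prefix in (24, 16):
    net = ipaddress.ip_network(f"192.168.1.254/{prefix}", strict=False)
    # Network portion vs. host (node) portion of the address.
    print(f"/{prefix}: network = {net.network_address}, "
          f"netmask = {net.netmask}, usable hosts = {net.num_addresses - 2}")

# Expected output (illustrative):
#   /24: network = 192.168.1.0, netmask = 255.255.255.0, usable hosts = 254
#   /16: network = 192.168.0.0, netmask = 255.255.0.0, usable hosts = 65534
```

The node portion is simply whatever bits remain after the network mask is applied, which is why a /24 mask leaves a single octet (254) as the node address while a /16 mask leaves two (1.254).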
\n The Physical Layer is responsible for interaction \nwith the physical network medium. Depending on the \nspecifics of the medium, this can include functions such \nas collision avoidance, the transmission and reception of \npackets or datagrams, basic error checking, and so on. \nThe Physical Layer handles all the details of interfacing \nwith the network medium and isolates the upper layers \nfrom the physical details. \n Another important concept in TCP/IP is that of data \nencapsulation. Data is passed up and down the stack as it \ntravels from a network-aware program or application in \nthe Application Layer, is packaged for transport across \nthe network by the Transport and Network Layer, and \neventually is placed on the transmission medium (cop-\nper or fiber-optic cable, radio, satellite, and so on) by \nthe Physical Layer. As data is handed down the stack, \neach layer adds its own header (a structured collection of \nfields of data) to the data passed to it by the above layer. \n Figure 18.2 illustrates three important headers: the IP \nheader, the TCP header, and the UDP header. Note that \nthe various headers are where layer-specific constructs \nsuch as IP address and TCP or UDP port numbers are \nplaced, so the appropriate layer can access this informa-\ntion and act onit accordingly. \n The receiving layer is not concerned with the con-\ntent of the data passed to it, only that the data is given \nto it in a way compliant with the protocol rules. The \nPhysical Layer places the completed packet (the full col-\nlection of headers and application data) onto the trans-\nmission medium for handling by the physical network. \nWhen a packet is received, the reverse process occurs. \nAs the packet travels up the stack, each layer removes \nits respective header, inspects the header content for \ninstructions on which upper layer in the protocol stack \nto hand the remaining data to, and passes the data to the \n4-bit version\n8-bit type of service\n16-bit IP/fragment identification\n3-bit flags\n13-bit fragment offset\n16-bit header checksum\n8-bit protocol ID\n8-bit time to live (TTL)\n32-bit source IP address\n32-bit destination IP address\noptions (if present)\ndata (including upper layer headers)\n16-bit source port number\n16-bit source port number\n16-bit UDP length (header plus data)\n16-bit UDP checksum\ndata (if any)\n32-bit sequence number\n32-bit acknowledgement number\n6-bit flags\n16-bit window size\n16-bit urgent pointer\noptions (if present)\ndata (if any)\n6-bit\nreserved\n16-bit TCP checksum\n4-bit TCP\nheader\nlength\n16-bit destination port number\n16-bit destination port number\n16-bit total packet length (value in bytes)\n4-bit header\nlength\nIP, Version 4 Header\nTCP Header\nUDP Header\n(sourced from Request for Comment (RFC) 791, 793, 768)\n FIGURE 18.2 IP, TCP, and UDP headers. \n 16 R. Stevens, TCP/IP Illustrated, Volume 1: The Protocols, Addison-\nWesley Professional, 1994. \n" }, { "page_number": 333, "text": "PART | II Managing Information Security\n300\nappropriate layer. This process is repeated until all TCP/\nIP headers have been removed and the appropriate appli-\ncation is handed the data. The encapsulation process is \nillustrated in Figure 18.3 . \n To best illustrate these concepts, let’s explore a some-\nwhat simplified example. Figure 18.4 illustrates the various \nsteps in this example. Assume that a user, Alice, want to \nsend an email to her colleague Bob at CoolCompany.com. \n 1. 
Alice launches her email program and types in \nBob’s email address, bob@coolcompany.com , as \nwell as her message to Bob. Alice’s email program \nconstructs a properly formatted SMTP-compliant \nmessage, resolves Cool Company’s email server \naddress utilizing a DNS query, and passes the mes-\nsage to the TCP component of the Transport Layer \nfor processing. \n 2. The TCP process adds a TCP header in front of the \nSMTP message fields including such pertinent infor-\nmation as the source TCP port (randomly chosen as \na port number greater than 1024, in this case 1354), \nthe destination port (port 25 for SMTP email), and \nother TCP-specific information such as sequence \nnumbers and receive buffer sizes. \n 3. This new data package (SMTP message plus TCP \nheader) is then handed to the Network Layer and an \nIP header is added with such important information \nas the source IP address of Alice’s computer, the des-\ntination IP address of Cool Company’s email server, \nand other IP-specific information such as packet \nlengths, error-detection checksums, and so on. \n 4. This complete IP packet is then handed to the \nPhysical Layer for transmission onto the physical \nnetwork medium, which will add network layer \nheaders as appropriate. Numerous packets may be \nneeded to fully transmit the entire email message \ndepending on the various network media and pro-\ntocols that must be traversed by the packets as they \nleave Alice’s network and travel the Internet to Cool \nCompany’s email server. The details will be han-\ndled by the intermediate systems and any required \nupdates or changes to the packet headers will be \nmade by those systems. \n 5. When Cool Company’s email server receives the \npackets from its local network medium via the \nPhysical Layer, it removes the network frame and \nhands the remaining data to the Network Layer. \n 6. The Network Layer strips off the IP header and \nhands the remaining data to the TCP component of \nthe Transport Layer. \n 7. The TCP process removes and examines the TCP \nheader to, among other tasks, examine the destina-\ntion port (again, 25 for email) and finally hand the \nSMTP message to the SMTP server process. \n 8. The SMTP application performs further application \nspecific processing as well delivery to Bob’s email \napplication by starting the encapsulation process all \nover again to transit the internal network between \nBob’s PC and the server. \n It is important to understand that network-based compu-\nter system attacks can occur at every layer of the TCP/IP \nstack and thus an effective intrusion detection and preven-\ntion program must be able to inspect at each layer and act \naccordingly. Intruders may manipulate any number of fields \nwithin a TCP/IP packet to attempt to bypass security pro-\ncesses or systems including the application-specific data, all \nin an attempt to gain access and control of the target system. \n 8. SURVEY OF INTRUSION DETECTION \nAND PREVENTION TECHNOLOGIES \n Now that we have discussed the threats to information \nsystems and those who pose them as well as examined \nthe underlying protocol suite in use on the Internet and \nenterprise networks today, we are prepared to explore \nthe various technologies available to detect and prevent \nNetwork Frame\nTrailer (if present)\nApplication Data\nTCP or UDP Header\nIP Header\nApplication Data\nTCP or UDP Header\nIP Header\nApplication Data\nTCP or UDP Header\nApplication Data\nNetwork Frame\nHeader\n FIGURE 18.3 TCP/IP encapsulation. 
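The following sketch illustrates the encapsulation idea behind Figures 18.2 through 18.4: each layer prepends its own header to whatever the layer above handed it. It is deliberately simplified; checksums, options, sequence-number management, and the link-layer frame are omitted, and a real TCP/IP stack performs all of this inside the operating system rather than in application code.

```python
import struct
import socket

def tcp_header(src_port: int, dst_port: int, seq: int = 0) -> bytes:
    """Very simplified 20-byte TCP header (checksum and options omitted)."""
    offset_flags = (5 << 12) | 0x002          # data offset = 5 words, SYN flag set
    return struct.pack("!HHIIHHHH",
                       src_port, dst_port, seq, 0, offset_flags,
                       65535,   # window size
                       0,       # checksum (left zero in this sketch)
                       0)       # urgent pointer

def ip_header(src_ip: str, dst_ip: str, payload_len: int) -> bytes:
    """Very simplified 20-byte IPv4 header (checksum left zero)."""
    ver_ihl = (4 << 4) | 5                     # version 4, header length 5 words
    return struct.pack("!BBHHHBBH4s4s",
                       ver_ihl, 0, 20 + payload_len, 0, 0,
                       64,                     # time to live
                       6,                      # protocol 6 = TCP
                       0,                      # checksum (sketch only)
                       socket.inet_aton(src_ip), socket.inet_aton(dst_ip))

# Application Layer data (step 1): a fragment of Alice's SMTP message.
app_data = b"MAIL FROM:<alice@example.com>\r\n"
# Transport Layer (step 2): TCP header, source port 1354, destination port 25.
segment = tcp_header(1354, 25) + app_data
# Network Layer (step 3): IP header with Alice's and the mail server's addresses.
packet = ip_header("192.168.1.245", "10.5.2.34", len(segment)) + segment
print(f"{len(app_data)} bytes of application data became a {len(packet)}-byte packet")
```

A real implementation would also compute the IP and TCP checksums and wrap the result in a Physical Layer frame before transmission, as shown in Figure 18.3.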
\n" }, { "page_number": 334, "text": "Chapter | 18 Intrusion Prevention and Detection Systems\n301\n 9. ANTI-MALWARE SOFTWARE \n We have discussed malware and its various forms pre-\nviously. Anti-malware software (see Figure 18.5 ), in \nthe past typically referred to as antivirus software , is \ndesigned to analyze files and programs for known sig-\nnatures, or patterns, in the data that make up the file or \nprogram and that indicate malicious code is present. This \nsignature scanning is often accomplished in a multitiered \nintrusions. It is important to note that though technolo-\ngies such as firewalls, a robust patching program, and \ndisk and file encryption (see sidebar, “ A Definition of \nEncryption ” ) can be part of a powerful intrusion preven-\ntion program, these are considered static preventative \ndefenses and will not be discussed here. In this part of \nthe chapter, we discuss various dynamic systems and \ntechnologies that can assist in the detection and preven-\ntion of attacks on information systems. \nDear Bob, \nThank you for dinner last night. We \nhad a great time and hope to see\nyou again soon! \nRegards,\nAlice\nTo: bob@coolcompany.com\nDear Bob, \nThank you for dinner last night. We \nhad a great time and hope to see\nyou again soon! \nRegards,\nAlice\nTo: bob@coolcompany.com\nIP Address: 192.168.1.245\n1\nAlice\nInternet\nCool Company SMTP E-Mail Server\nIP Address: 10.5.2.34\nBob\nData Flow Down TCP/IP Stack\nNetwork Frame\nNetwork Frame\nIP Header\nSource Address: 192.168.1.245 \nDestination Address: 10.5.2.34\nIP Header\nSource Address: 192.168.1.245 \nDestination Address: 10.5.2.34\nIP Header\nSource Address: 192.168.1.245 \nDestination Address: 10.5.2.34\nIP Header\nSource Address: 192.168.1.245 \nDestination Address: 10.5.2.34\nTCP Header\nSource Port: 1354 \nDestination Port: 25\nTCP Header\nSource Port: 1354 \nDestination Port: 25\nTCP Header\nSource Port: 1354 \nDestination Port: 25\nTCP Header\nSource Port: 1354 \nDestination Port: 25\nTCP Header\nSource Port: 1354 \nDestination Port: 25\nSMTP Application Data: \"To:\nbob@coolcompany.com.....\"\nSMTP Application Data: \"To:\nbob@coolcompany.com.....\"\nSMTP Application Data: \"To:\nbob@coolcompany.com.....\"\nSMTP Application Data: \"To:\nbob@coolcompany.com.....\"\nSMTP Application Data: \"To:\nbob@coolcompany.com.....\"\nSMTP Application Data: \"To:\nbob@coolcompany.com.....\"\nSMTP Application Data: \"To:\nbob@coolcompany.com.....\"\nSMTP Application Data: \"To:\nbob@coolcompany.com.....\"\nTCP Header\nSource Port: 1354 \nDestination Port: 25\n7\n8\n6\n5\n!\n4\n3\n2\nData Flow Up TCP/IP Stack\n FIGURE 18.4 Application and network interaction example. \n 17 B. Schneier, Applied Cryptography, Wiley, 1996. \n Encryption is the process of protecting the content or mean-\ning of a message or other kinds of data. 17 Modern encryp-\ntion algorithms are based on complex mathematical \nfunctions that scramble the original, cleartext message or \ndata in such a way that makes it difficult or impossible for an \nadversary to read or access the data without the proper key \nto reverse the scrambling. The encryption key is typically a \nlarge number of values, that when fed into the encryption \nalgorithm, scrambles and unscrambles the data being pro-\ntected and without which it is extremely difficult or impos-\nsible to decrypt encrypted data. The science of encryption is \ncalled cryptography and is a broad and technical subject. 
\n A Definition of Encryption \n" }, { "page_number": 335, "text": "PART | II Managing Information Security\n302\napproach where the entire hard drive of the computer \nis scanned sequentially during idle periods and any file \naccessed is scanned immediately to help prevent dormant \ncode in a file that has not been scanned from becoming \nactive. When an infected file or malicious program is \nfound, it is prevented from running and either quarantined \n(moved to a location for further inspection by a systems \nadministrator) or simply deleted from the system. There \nare also appliance-based solutions that can be placed on \nthe network to examine certain classes of traffic such as \nemail before they are delivered to the end systems. \n In any case, the primary weakness of the signature-\nbased scanning method is that if the software does not \nhave a signature for a particular piece of malware, the \nmalware will be effectively invisible to the software and \nwill be able to run without interference. A signature \nmight not exist because a particular instance of the anti-\nmalware software may not have an up-to-date signature \ndatabase or the malware may be new or modified so as \nto avoid detection. To overcome this increasingly com-\nmon issue, more sophisticated anti-malware software \nwill monitor for known-malicious behavioral patterns \ninstead of, or in addition to, signature-based scanning. \nBehavioral pattern monitoring can take many forms such \nas observing the system calls all programs make and \nidentifying patterns of calls that are anomalous or known \nto be malicious. Another common method is to create a \nwhitelist of allowed known-normal activity and prevent \nall other activity, or at least prompt the user when a non-\nwhitelisted activity is attempted. Though these methods \novercome some of the limitations of the signature-based \nmodel and can help detect previously never seen mal-\nware, they come with the price of higher false-positive \nrates and/or additional administrative burdens. \n While anti-malware software can be evaded by new \nor modified malware, it still serves a useful purpose as a \ncomponent in a defense-in-depth strategy. A well main-\ntained anti-malware infrastructure will detect and pre-\nvent known forms, thus freeing up resources to focus \non other threats, but it can also be used to help speed \nand simplify containment and eradication of a malware \ninfection once an identifying signature can be developed \nand deployed. \n 10. NETWORK-BASED INTRUSION \nDETECTION SYSTEMS \n For many years, network-based intrusion detection sys-\ntems (NIDS) have been the workhorse of information \nsecurity technology and in many ways have become syn-\nonymous with intrusion detection. 18 NIDS function in \none of three modes: signature detection, anomaly detec-\ntion, and hybrid. \n A signature-based NIDS operates by passively exam-\nining all the network traffic flowing past its sensor inter-\nface or interfaces and examines the TCP/IP packets for \nsignatures of known attacks, as illustrated in Figure 18.6 . \n TCP/IP packet headers are also often inspected to \nsearch for nonsensical header field values sometimes \nused by attackers in an attempt to circumvent filters \nand monitors. 
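The toy sketch below illustrates the kind of payload and header inspection just described. The byte patterns are invented placeholders for demonstration, not signatures from any real NIDS rule set.

```python
# Toy signature-based inspection of a captured packet.
# The patterns below are invented placeholders, not real attack signatures.
SIGNATURES = {
    b"/etc/passwd": "possible directory-traversal / file-disclosure attempt",
    b"' OR '1'='1": "possible SQL injection probe",
    b"\x90" * 16:   "long NOP sled, possible buffer-overflow exploit",
}

def inspect_payload(payload: bytes) -> list[str]:
    """Return an alert for every known signature found in the payload."""
    return [alert for pattern, alert in SIGNATURES.items() if pattern in payload]

def sanity_check_tcp_ports(src_port: int, dst_port: int) -> list[str]:
    """Flag nonsensical header values of the kind mentioned in the text."""
    alerts = []
    if src_port == 0 or dst_port == 0:
        alerts.append("TCP port 0 in header: nonsensical value, possible evasion attempt")
    return alerts

# Example use against one captured packet:
alerts = (inspect_payload(b"GET /../../etc/passwd HTTP/1.0")
          + sanity_check_tcp_ports(1354, 80))
for a in alerts:
    print("ALERT:", a)
```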
In much the same way that signature-\nbased anti-malware software can be defeated by never-\nbefore-seen malware or malware sufficiently modified \nto no longer possess the signature used for detection, \n signature-based NIDS will be blind to any attack for \nwhich it does not have a signature. Though this can be \na very serious limitation, signature-based NIDS are still \nuseful due to most systems ’ ability for the operator to \nadd custom signatures to sensors. This allows security \nand network engineers to rapidly deploy monitoring \nand alarming capability on their networks in the event \nthey discover an incident or are suspicious about certain \nactivity. Signature-based NIDS are also useful to moni-\ntor for known attacks and ensure that none of those are \nsuccessful at breaching systems, freeing up resources to \ninvestigate or monitor other, more serious threats. \n NIDS designed to detect anomalies in network traf-\nfic build statistical or baseline models for the traffic they \nmonitor and raise an alarm on any traffic that deviates \nsignificantly from those models. There are numerous \nmethods for detecting network traffic anomalies, but one \nof the most common involves checking traffic for compli-\nance with various protocol standards such as TCP/IP for \nthe underlying traffic and application layer protocols such \nas HTTP for Web traffic, SMTP for email, and so on. \nMany attacks against applications or the underlying net-\nwork attempt to cause system malfunctions by violating \nthe protocol standard in ways unanticipated by the sys-\ntem developers and which the targeted protocol-handling \nFile\nFile\nFile\nFile\nAnti-Malware\nScanner\nFile\n110010010\n FIGURE 18.5 Anti-malware file scanning. \n 18 S. Northcutt, Network Intrusion Detection , 3 rd ed., Sams, 2002. \n" }, { "page_number": 336, "text": "Chapter | 18 Intrusion Prevention and Detection Systems\n303\nlayer does not deal with properly. Unfortunately, there \nare entire classes of attacks that do not violate any proto-\ncol standard and thus will not be detected by this model \nof anomaly detection. Another commonly used model \nis to build a model for user behavior and to generate an \nalarm when a user deviates from the “ normal ” patterns. \nFor example, if Alice never logs into the network after \n9:00 p.m. and suddenly a logon attempt is seen from \nAlice’s account at 3:00 a.m., this would constitute a sig-\nnificant deviation from normal usage patterns and gener-\nate an alarm. Some of the main drawbacks of anomaly \ndetection systems are defining the models of what is nor-\nmal and what is malicious, defining what is a significant \nenough deviation from the norm to warrant an alarm, and \ndefining a sufficiently comprehensive model or mod-\nels to cover the immense range of behavioral and traffic \npatterns that are likely to be seen on any given network. \nDue to this complexity and the relative immaturity of \nadaptable, learning anomaly detection technology, there \nare very few production-quality systems available today. \nHowever, due to not relying on static signatures and the \npotential of a successful implementation of an anomaly \ndetection, NIDS for detecting 0-day attacks and new or \ncustom malware is so tantalizing that much research con-\ntinues in this space. \n A hybrid system takes the best qualities of both \nsignature-based and anomaly detection NIDS and inte-\ngrates them into a single system to attempt to overcome \nthe weaknesses of both models. 
Many commercial NIDS \nnow implement a hybrid model by utilizing signature \nmatching due to its speed and flexibility while incorpo-\nrating some level of anomaly detection to, at minimum, \nflag suspicious traffic for closer examination by those \nresponsible for monitoring the NIDS alerts. \n Aside from the primary criticism of signature-based \nNIDS their depending on static signatures, common addi-\ntional criticisms of NIDS are they tend to produce a lot \nof false alerts either due to imprecise signature construc-\ntion or poor tuning of the sensor to better match the envi-\nronment, poor event correlation resulting in many alerts \nfor a related incident, the inability to monitor encrypted \nnetwork traffic, difficulty dealing with very high-speed \nnetworks such as those operating at 10 gigabits per sec-\nond, and no ability to intervene during a detected attack. \nThis last criticism is one of the driving reasons behind \nthe development of intrusion prevention systems. \n 11. NETWORK-BASED INTRUSION \nPREVENTION SYSTEMS \n NIDS are designed to passively monitor traffic and raise \nalarms when suspicious traffic is detected, whereas \nnetwork-based intrusion prevention systems (NIPS) are \ndesigned to go one step further and actually try to prevent \nthe attack from succeeding. This is typically achieved \nby inserting the NIPS device inline with the traffic it is \nmonitoring. Each network packet is inspected and only \npassed if it does not trigger some sort of alert based on a \nsignature match or anomaly threshold. Suspicious pack-\nets are discarded and an alert is generated. \n The ability to intervene and stop known attacks, in \ncontrast to the passive monitoring of NIDS, is the great-\nest benefit of NIPS. However, NIPS suffers from the \nsame drawbacks and limitations as discussed for NIDS, \nsuch as heavy reliance on static signatures, inability to \nexamine encrypted traffic, and difficulties with very high \nnetwork speeds. In addition, false alarms are much more \nsignificant due to the fact that the NIPS may discard that \nNetwork-based Intrusion\nDetection System\nScanning Network\nPackets\nPacket\nPacket\nPacket\nIP Header\nTCP Header\nNetwork\n101001001001\nApplication Data\n FIGURE 18.6 NIDS device-scanning packets flowing past a sensor interface. \n" }, { "page_number": 337, "text": "PART | II Managing Information Security\n304\ntraffic even though it is not really malicious. If the desti-\nnation system is business or mission critical, this action \ncould have significant negative impact on the function-\ning of the system. Thus, great care must be taken to \ntune the NIPS during a training period where there is no \npacket discard before allowing it to begin blocking any \ndetected, malicious traffic. \n 12. HOST-BASED INTRUSION \nPREVENTION SYSTEMS \n A complementary approach to network-based intrusion \nprevention is to place the detection and prevention system \non the system requiring protection as an installed software \npackage. Host-based intrusion prevention systems (HIPS), \nthough often utilizing some of the same signature-based \ntechnology found in NIDS and NIPS, also take advantage \nof being installed on the protected system to protect by \nmonitoring and analyzing what other processes on the \nsystem are doing at a very detailed level. 
12. HOST-BASED INTRUSION PREVENTION SYSTEMS

A complementary approach to network-based intrusion prevention is to place the detection and prevention system on the system requiring protection as an installed software package. Host-based intrusion prevention systems (HIPS), though often utilizing some of the same signature-based technology found in NIDS and NIPS, also take advantage of being installed on the protected system itself, monitoring and analyzing what other processes on the system are doing at a very detailed level. This process monitoring is very similar to that discussed in the anti-malware software section and involves observing system calls, interprocess communication, network traffic, and other behavioral patterns for suspicious activity. Another benefit of HIPS is that encrypted network traffic can be analyzed after the decryption process has occurred on the protected system, thus providing an opportunity to detect an attack that would have been hidden from a NIPS or NIDS device monitoring network traffic.

Again, as with NIPS and NIDS, HIPS is only as effective as its signature database, anomaly detection model, or behavioral analysis routines. Also, the presence of HIPS on a protected system does incur processing and system resource utilization overhead, and on a very busy system this overhead may be unacceptable. However, given the unique advantages of HIPS, such as being able to inspect encrypted network traffic, it is often used as a complement to NIPS and NIDS in a targeted fashion, and this combination can be very effective.

13. SECURITY INFORMATION MANAGEMENT SYSTEMS

Modern network environments generate a tremendous amount of security event and log data via firewalls, network routers and switches, NIDS/NIPS, servers, anti-malware systems, and so on. Envisioned as a solution to help manage and analyze all this information, security information management (SIM) systems have since evolved to provide data reduction, to reduce the sheer quantity of information that must be analyzed, and event correlation capabilities that help a security analyst make sense of it all. 19 A SIM system not only acts as a centralized repository for such data, it helps organize it and provides an analyst the ability to run complex queries across the entire database. One of the primary benefits of a SIM system is that data from disparate systems is normalized into a uniform database structure, thus allowing an analyst to investigate suspicious activity or a known incident across different aspects and elements of the IT environment. Often an intrusion will leave various types of "footprints" in the logs of the different systems involved in the incident; bringing these all together and providing the complete picture for the analyst or investigator is the job of the SIM.

Even with modern and powerful event correlation engines and data reduction routines, however, a SIM system is only as effective as the analyst examining the output. Fundamentally, SIM systems are a reactive technology, like NIDS, and because extracting useful and actionable information from them often requires a strong understanding of the various systems sending data to the SIM, the analysts' skill set and experience become very critical to the effectiveness of the SIM as an intrusion detection system. 20 SIM systems also play a significant role during incident response, because evidence of an intrusion can often be found in the various logs stored on the SIM.

19 http://en.wikipedia.org/wiki/Security_Information_Management.
20 www.schneier.com/blog/archives/2004/10/security_inform.html.
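As a rough illustration of the normalization and correlation idea (the raw field names and the uniform record layout below are invented for the example and do not correspond to any particular SIM product), events from a firewall and a NIDS can be mapped into one record format so that a single query pulls together everything involving a suspect host:

def normalize_firewall(raw):
    # Hypothetical firewall log fields mapped into a uniform record.
    return {"time": raw["ts"], "source": raw["src"], "dest": raw["dst"],
            "device": "firewall", "event": raw["action"]}

def normalize_nids(raw):
    # Hypothetical NIDS alert fields mapped into the same record layout.
    return {"time": raw["timestamp"], "source": raw["attacker"],
            "dest": raw["victim"], "device": "nids", "event": raw["sig_name"]}

events = [
    normalize_firewall({"ts": "2009-01-05T03:02:11Z", "src": "10.1.1.7",
                        "dst": "192.0.2.9", "action": "deny"}),
    normalize_nids({"timestamp": "2009-01-05T03:02:14Z", "attacker": "10.1.1.7",
                    "victim": "192.0.2.9", "sig_name": "SQL injection attempt"}),
]

# One query across both device types: every event touching the suspect host.
suspect = "10.1.1.7"
for event in sorted(events, key=lambda e: e["time"]):
    if suspect in (event["source"], event["dest"]):
        print(event["time"], event["device"], event["event"])

Because both records share the same field names after normalization, the same query works no matter which system originally produced the log entry, which is precisely what lets an analyst reconstruct an incident across the whole environment.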
14. NETWORK SESSION ANALYSIS

Network session data represents a high-level summary of "conversations" occurring between computer systems. 21 No specifics about the content of the conversation, such as packet payloads, are maintained, but various elements about the conversation are kept and can be very useful in investigating an incident or as an indicator of suspicious activity. There are a number of ways to generate and process network session data, ranging from vendor-specific implementations such as Cisco's NetFlow 22 to session data reconstruction from full traffic analysis using tools such as Argus. 23 However the session data is generated, there are a number of common elements constituting the session, such as source IP address, source port, destination IP address, destination port, time-stamp information, and an array of metrics about the session, such as bytes transferred and packet distribution.

Using the collected session information, an analyst can examine traffic patterns on a network to identify which systems are communicating with each other and identify suspicious sessions that warrant further investigation. For example, a server configured for internal use by users, with no legitimate reason to communicate with addresses on the Internet, will cause an alarm to be generated if a session or sessions suddenly appear between the internal server and external addresses. At that point the analyst may suspect a malware infection or other system compromise and investigate further. Numerous other queries can be generated to identify sessions that are abnormal in some way, such as excessive byte counts, excessive session lifetime, or unexpected ports being utilized. When run over a sufficient timeframe, a baseline for traffic sessions can be established, and the analyst can query for sessions that don't fit the baseline. This sort of investigation is a form of anomaly detection based on high-level network data, versus the more granular types discussed for NIDS and NIPS. Figure 18.7 illustrates a visualization of network session data. The pane on the left side indicates one node communicating with many others; the pane on the right displays the physical location of many IP addresses of other flows.

FIGURE 18.7 Network session analysis visualization interface.

Another common use of network session analysis is to combine it with the use of a honeypot or honeynet (see sidebar, "Honeypots and Honeynets"). Any network activity, other than known-good maintenance traffic such as patch downloads, seen on these systems is, by definition, suspicious, since there are no production business functions or users assigned to these systems. Their sole purpose is to act as a lure for an intruder. By monitoring network sessions to and from these systems, an early warning can be raised without even necessarily needing to perform any complex analysis.

Honeypots and Honeynets

A honeypot is a computer system designed to act as a lure or trap for intruders. This is most often achieved by configuring the honeypot to look like a production system that possibly contains valuable or sensitive information and provides legitimate services, but in actuality neither the data nor the services are real. A honeypot is carefully monitored and, since there is no legitimate reason for a user to be interacting with it, any activity seen targeting it is immediately considered suspicious. A honeynet is a collection of honeypots designed to mimic a more complex environment than one system can support. 24

21 R. Bejtlich, The Tao of Network Security Monitoring: Beyond Intrusion Detection, Addison-Wesley Professional, 2004.
22 Cisco NetFlow, www.cisco.com/web/go/netflow.
23 Argus, http://qosient.com/argus/.
24 www.honeynet.org.
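A minimal sketch of the session-level queries described above might look like the following (the session record layout, the internal address range, the list of internal-only servers, and the byte-count threshold are all assumptions for illustration, not a NetFlow or Argus schema):

import ipaddress

INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")
INTERNAL_ONLY_SERVERS = {"10.0.5.20"}      # hosts with no business talking to the Internet
BYTE_THRESHOLD = 500_000_000               # illustrative "excessive transfer" cutoff

def why_suspicious(session):
    """session: dict with src, dst, and bytes keys (an assumed record layout)."""
    external_peer = ipaddress.ip_address(session["dst"]) not in INTERNAL_NET
    if session["src"] in INTERNAL_ONLY_SERVERS and external_peer:
        return "internal-only server contacting an external address"
    if session["bytes"] > BYTE_THRESHOLD:
        return "excessive byte count"
    return None

sessions = [
    {"src": "10.0.5.20", "dst": "198.51.100.7", "bytes": 18_000},
    {"src": "10.0.9.14", "dst": "10.0.1.3", "bytes": 750_000_000},
]
for s in sessions:
    reason = why_suspicious(s)
    if reason:
        print("investigate:", s["src"], "->", s["dst"], "-", reason)

The same sort of query applied to sessions touching a honeypot is simpler still: any session at all, other than whitelisted maintenance traffic, is grounds for an alert.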
15. DIGITAL FORENSICS

Digital forensics is the "application of computer science and investigative procedures for a legal purpose involving the analysis of digital evidence." 25 Less formally, digital forensics is the use of specialized tools and techniques to investigate various forms of computer-oriented crime, including fraud, illicit use such as child pornography, and many forms of computer intrusions.

Digital forensics as a field can be divided into two subfields: network forensics and host-based forensics. Network forensics focuses on the use of captured network traffic and session information to investigate computer crime. Host-based forensics focuses on the collection and analysis of digital evidence from individual computer systems to investigate computer crime. Digital forensics is a vast topic, and a comprehensive discussion is beyond the scope of this chapter; interested readers are referred to 26 for more detail.

In the context of intrusion detection, digital forensic techniques can be utilized to analyze a suspected compromised system in a methodical manner. Forensic investigations are most commonly used when the nature of the intrusion is unclear, such as those perpetrated via a 0-day exploit, but wherein the root cause must be fully understood either to ensure that the exploited vulnerability is properly remediated or to support legal proceedings. Due to the increasing use of sophisticated attack tools and stealthy and customized malware designed to evade detection, forensic investigations are becoming increasingly common, and sometimes only a detailed and methodical investigation will uncover the nature of an intrusion. The specifics of the intrusion may also require a forensic investigation, such as those involving the theft of Personally Identifiable Information (PII) in regions covered by one or more data breach disclosure laws.

16. SYSTEM INTEGRITY VALIDATION

The emergence of powerful and stealthy malware, kernel-level rootkits, and so-called clean-state attack frameworks that leave no trace of an intrusion on a computer's hard drive has given rise to the need for technology that can analyze a running system and its memory and provide a series of metrics regarding the integrity of the system. System integrity validation (SIV) technology is still in its infancy and a very active area of research, but it primarily focuses on live system memory analysis and the notion of deriving trust from known-good system elements. 27 This is achieved by comparing the system's running state, including the processes, threads, data structures, and modules loaded into memory, to the static elements on disk from which the running state was supposedly loaded. Through a number of cross-validation processes, discrepancies between what is running in memory and what should be running can be identified.

25 K. Zatyko, "Commentary: Defining digital forensics," Forensics Magazine, www.forensicmag.com/articles.asp?pid=130, 2007.
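As a greatly simplified illustration of the cross-validation idea (real SIV tools analyze raw memory images; the process walk here, which relies on the third-party psutil module and a placeholder hash baseline, is only an assumed stand-in), one can compare the executable backing each running process against a set of known-good hashes and report anything that does not match:

import hashlib
import psutil   # third-party module, assumed available; used only to list processes

KNOWN_GOOD = {                      # illustrative baseline of trusted executable hashes
    "/usr/sbin/sshd": "9f2c...e1",  # placeholder value, not a real hash
}

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_running_processes():
    findings = []
    for proc in psutil.process_iter(["pid", "exe"]):
        exe = proc.info.get("exe")
        if not exe:
            continue
        expected = KNOWN_GOOD.get(exe)
        if expected and sha256_of(exe) != expected:
            findings.append((proc.info["pid"], exe))
    return findings

for pid, exe in check_running_processes():
    print("discrepancy: pid %d is backed by a binary that fails validation: %s" % (pid, exe))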
When properly \nimplemented, SIV can be a powerful tool for detecting \nintrusions, even those utilizing advanced techniques. \n 17. PUTTING IT ALL TOGETHER \n It should now be clear that intrusion detection and pre-\nvention are not single tools or products but a series of \nlayered technologies coupled with the appropriate meth-\nodologies and skill sets. Each of the technologies sur-\nveyed in this chapter has its own specific strengths and \nweaknesses, and a truly effective intrusion detection and \nprevention program must be designed to play to those \nstrengths and minimize the weaknesses. Combining \nNIDS and NIPS with network session analysis and \na comprehensive SIM, for example, helps offset the \ninherent weakness of each technology as well as pro-\nvide the information security team greater flexibility to \nbring the right tools to bear for an ever-shifting threat \nenvironment. \n An essential element in a properly designed intrusion \ndetection and prevention program is an assessment of \nthe threats faced by the organization and a valuation of \nthe assets to be protected. There must be an alignment \nof the value of the information assets to be protected and \nthe costs of the systems put in place to defend them. The \nprogram for an environment processing military secrets \nand needing to defend against a hostile nation state must \nbe far more exhaustive than that for a single server con-\ntaining no data of any real value that must simply keep \nout assorted script kiddies. \n For many organizations, however, their information \nsystems are business and mission critical enough to war-\nrant considerable thought and planning with regard to \nthe appropriate choices of technologies, how they will \nbe implemented, and how they will be monitored. Only \nthrough flexible, layered, and comprehensive intrusion \ndetection and prevention programs can organizations \nhope to defend their environment against current and \nfuture threats to their information security. \n 26 K. Jones, Real Digital Forensics: Computer Security and Incident \nResponse , Addison-Wesley Professional, 2005. \n 27 www.volatilesystems.com . \n" }, { "page_number": 340, "text": "307\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Computer Forensics \n Scott R. Ellis \n RGL \n Chapter 19 \n This guide is intended to provide in-depth information \non computer forensics as a career, a job, and a science. It \nwill help you avoid mistakes and find your way through \nthe many aspects of this diverse and rewarding field. \n 1. WHAT IS COMPUTER FORENSICS? \n Definition: Computer forensics is the acquisition, preserva-\ntion, and analysis of electronically stored information (ESI) \nin such a way that ensures its admissibility for use as either \nevidence, exhibits, or demonstratives in a court of law . \n Rather than discussing at great length what compu-\nter forensics is (the rest of the chapter will take care of \nthat), let’s, for the sake of clarity, define what computer \nforensics is not . It is not an arcane ability to tap into a \nvast, secret repository of information about every single \nthing that ever happened on, or to, a computer. Often, it \ninvolves handling hardware in unique circumstances and \ndoing things with both hardware and software that are \nnot, typically, things that the makers or manufacturers \never intended (see sidebar: “ Angular Momentum ” ). 
Not every single thing a user ever did on a computer is 100% knowable beyond a shadow of a doubt, or even beyond reasonable doubt. Some things are certainly knowable with varying degrees of certainty, and there is nothing that can happen on a computer through the use of a keyboard and a mouse that cannot be replicated with a software program or macro of some sort. It is fitting, then, that many of the arguments of computer forensics become philosophical, and that degrees of certainty exist and beg definition. Such as: How heavy is the burden of proof? Right away, lest arrogant thoughtlessness prevail, anyone undertaking a study of computer forensics must understand the core principles of what it means to be in a position of authority and to what varying degrees of certainty an examiner may attest without finding he has overstepped his mandate (see sidebar, "Angular Momentum"). The sections "Testifying in Court" and "Beginning to End in Court" in this chapter, and the article "Computer Forensics and Ethics, Green Home Plate Gallery View," address these concerns as they relate to ethics and testimonial work.

Angular Momentum

Hard drive platters spin very fast. The 5¼-inch floppy disk of the 1980s has evolved into heavy, metallic platters that spin at ridiculous speeds. If your car wheels turned at 10,000 RPM, you would zip along at 1000 mph. Some hard drives spin at 15,000 RPM. That's really fast. But though HD platters will not disintegrate (as CD-ROMs have been known to do), anecdotal evidence suggests that in larger, heavier drives, the bearings can become so hot that they liquefy, effectively stopping the drive in its tracks. Basic knowledge of surface-coating magnetic properties tells us that if the surface of a hard drive becomes overly hot, the basic magnetic properties, the "zeroes and ones" stored by the particulate magnetic domains, will become unstable. The stability of these zeroes and ones maintains the consistency of the logical data stored by the medium. The high speeds of large, heavy hard drives generate heat and ultimately result in drive failure.

Forensic technicians are often required to handle hard drives, sometimes while they have power attached to them. For example, a computer may need to be moved while it still has power attached to it. Here is where a note about angular momentum becomes necessary: a spinning object translates pressure applied to its axis of rotation 90 degrees from the direction of the force applied to it. Most hard drives are designed to park the heads at even the slightest amount of detected g-shock. They claim to be able to withstand thousands of "g's," but this simply means that if the hard drive was parked on a neutron star it would still be able to park its heads. This is a relatively meaningless, static attribute. Most people should understand that if you drop your computer, you may damage the internal mechanics of a hard drive and, if the platters are spinning, the drive may be scored. Older drives, those that are more than three years old, are especially susceptible to damage. Precautions should be taken to keep older drives vibration free and cool.

With very slight, gentle pressures, a spinning hard drive can be handled much as you would handle a gyro and with little potential for damage.
It can be handled in a way that \nwon’t cause the heads to park. \n Note: Older drives are the exception, and any slight \nmotion at all can cause them to stop functioning. \n Most hard drives have piezoelectric shock detectors that \nwill cause the heads to park the instant a certain g-shock \nvalue is reached. Piezoelectrics work under the principle \nthat when a force is exerted along a particular axis, certain \ntypes of crystals will emit an electrical charge. Just like their \npocket-lighter counterparts, they don’t always work and \neventually become either desensitized or oversensitized; if \nthe internal mechanism of the hard drive is out of tolerance, \nthe shock detector may either work or it won’t park the \nheads correctly and the heads will shred the surface of the \nplatters. Or, if it does work, it may thrash the surface any-\nway as the heads park and unpark during multiple, repeated \nshocks. A large, heavy drive (such as a 1TB drive) will \nbehave exactly as would any heavy disk spinning about an \naxis — just like a gyro (see Figure 19.1 ). So, when a vector of \nforce is applied to a drive and it is forced to turn in the same \ndirection as the vector, instead of its natural 90 degree offset, \nthe potential for damage to the drive increases dramatically. \n Such is the case of a computer mounted in brackets \ninside a PC in transit. The internal, spinning platters of the \ndrive try to shift in opposition to the direction the exter-\nnal drive turned; it is an internal tug of war. As the platters \nattempt to lean forward, the drive leans to the left instead. \nIf there is any play in the bearings at all, the cushion of air \nthat floats the drive heads can be disrupted and the surface \nof the platters can scratch. If you must move a spinning \ndrive (and sometimes you must), be very careful, go very \nslowly, and let the drive do all the work. If you must trans-\nport a running hard drive, ensure that the internal drive is \nrotating about the x axis, that it is horizontally mounted \ninside the computer, and that there is an external shock \npad. Some PCs have vertically mounted drives. \n Foam padding beneath the computer is essential. If you \nare in the business of transporting running computers that \nhave been seized, a floating platform of some sort wouldn’t \nbe a bad idea. A lot of vibrations, bumps, and jars can \nshred less durably manufactured HDs. This author once \nshredded his own laptop hard drive by leaving it turned on, \ncarrying it 15 city blocks, and then for an hour-long ride \non the train. You can transport and move computers your \nentire life and not have an issue. But the one time you do, \nit could be an unacceptable and unrecoverable loss. \n Never move an older computer. If the computer cannot \nbe shut down, then the acquisition of the computer must \nbe completed on site. Any time a computer is to be moved \nwhile it’s running, everyone involved in the decision-mak-\ning process should sign off on the procedure and be aware \nof the risk. \n Lesson learned: Always make sure that a laptop put into \nstandby mode has actually gone into standby mode before \nclosing the lid. Windows can be tricky like that sometimes. \nResulting \nDirection \nof Tilt\nDirection\nof\nRotation\nExternal Tilt Force\n FIGURE 19.1 A spinning object distributes applied force dif-\nferently than a stationary object. Handle operating hard drives \nwith extreme care. \n 2. ANALYSIS OF DATA \n Never underestimate the power of a maze. 
There are two \ntypes of mazes that maze experts generally agree on: uni-\ncursive and multicursive. In the unicursive maze, the maze \nhas only one answer; you wind along a path and with a little \npatience you end up at the end. Multicursory mazes, accord-\ning to the experts, are full of wrong turns and dead ends. \n The fact of the matter is, however, when one sets foot \ninto a multicursive maze, one need only recognize that it \nis, in fact, two unicursory mazes put together, with the \npath through the maze being the seam. If you take the \nunicursory tack and simply always stay to the right (or \nthe left , as long as you are consistent) , you will traverse \nthe entire maze and have no choice but to traverse and \ncomplete it successfully. \n This is not a section about mazes, but they are analo-\ngous in that there are two approaches to computer foren-\nsics: one that churns through every last bit of data and \none that takes shortcuts. This is called analysis . Quite \n" }, { "page_number": 342, "text": "Chapter | 19 Computer Forensics\n309\noften as a forensic analyst, the sheer amount of data that \nneeds to be sifted will seem enormous and unrelenting. \nBut there is a path that, if you know it, you can follow \nit through to success. Here are a few guidelines on how \nto go about conducting an investigation. For starters, we \ntalk about a feature that is built into most forensic tool-\nsets. It allows the examiner to “ reveal all ” in a fashion \nthat can place hundreds of thousands of files at the fin-\ngertips for examination. It can also create a bad situa-\ntion, legally and ethically. \n Computer Forensics and Ethics, Green \nHome Plate Gallery View 1 \n A simplified version of this article was published on the \nChicago Bar Association blog in late 2007. This is the \noriginal version, unaltered. \n EnCase is a commonly used forensic software pro-\ngram that allows a computer forensic technologist to \nconduct an investigation of a forensic hard disk copy. \nOne of the functions of the software is something known \nas “ green-home-plate-gallery-view. ” This function allows \nthe forensic technologist to create, in plain view , a gal-\nlery of every single image on the computer. \n In EnCase, the entire folder view structure of a com-\nputer is laid out just as in Windows Explorer with the \nexception that to the left of every folder entry are two \nboxes. One is shaped like a square, and the other like a \nlittle “ home plate ” that turns green when you click on it; \nhence the “ green-home-plate. ” The technical name for the \nhome plate box is “ Set Included Folders. ” With a single \nclick, every single file entry in that folder and its subfold-\ners becomes visible in a table view. An additional option \nallows the examiner to switch from a table view of entries \nto a gallery view of thumbnails . Both operations together \ncreate the “ green-home-plate-gallery-view ” action. \n With two clicks of the mouse, the licensed EnCase \nuser can gather up every single image that exists on the \ncomputer and place it into a single, scrollable, thumbnail \ngallery. In a court-ordered investigation where the search \nmay be for text-based documents such as correspondence, \ngreen-home-plate-gallery-view has the potential of being \n misused to visually search the computer for imaged/\nscanned documents. I emphasize misused because, ulti-\nmately, this action puts every single image on the compu-\nter in plain view. 
It is akin to policemen showing up for \na domestic abuse response with an x-ray machine in tow \nand x-raying the contents of the whole house. \n Because this action enables one to view every single \nimage on a computer, including those that may not have \nanything to do with the forensic search at hand, it raises \na question of ethics, and possibly even legality. Can a \nforensic examiner green-home-plate-gallery-view with-\nout reasonable cause? \n In terms of a search where the search or motion \nspecifically authorized searching for text-based “ docu-\nments, ” green-home-plate-gallery-view is not the correct \napproach, nor is it the most efficient. The action exceeds \nthe scope of the search and it may raise questions regard-\ning the violation of rights and/or privacy. To some inex-\nperienced examiners, it may seem to be the quickest and \neasiest route to locating all the documents on a compu-\nter. More experienced examiners may use it as a quick \nlitmus test to “ peek under the hood ” to see if there are \ndocuments on the computer. \n Many documents that are responsive to the search \nmay be in the form of image scans or PDFs. Green-\nhome-plate-gallery-view renders them visible and avail-\nable, just for the scrolling. Therein lies the problem: If \nanything incriminating turns up, it also has the ring of \ntruth in a court of law when the examiner suggests that \nhe was innocently searching for financial documents \nwhen he inadvertently discovered, in plain view , offen-\nsive materials that were outside the scope of his origi-\nnal mandate. But just because something has the ring of \ntruth does not mean that the bell’s been rung. \n For inexperienced investigators, green-home-plate-\ngallery-view (see Figure 19.2 ) may truly seem to be the \nonly recourse and it may also be the one that has yielded \nthe most convictions. However, because there is a more \nefficient method to capture and detect text, one that can \nprotect privacy and follows the constraints of the search \nmandate, it should be used. \n Current forensic technology allows us, through \nelectronic file signature analysis, sizing, and typing of \nimages, to capture and export every image from the sub-\nject file system. Next, through optical character recog-\nnition (OCR), the experienced professional can detect \nevery image that has text and discard those that do not. \nIn this manner, images are protected from being viewed, \nthe subject’s privacy is protected, and all images with \ntext are located efficiently. The resultant set can then \nbe hashed and reintroduced into EnCase as single files, \nhashed, and indexed so that the notable files can be \nbookmarked and a report generated. \n Technically, this is a more difficult process because it \nrequires extensive knowledge of imaging, electronic file \nsignature analysis, automated OCR discovery techniques, \nand hash libraries. But to a trained technician, this is \n 1 “ Computer forensics and ethics, ” Green Home Plate Gallery View, \nChicago Bar Association Blog, September 2007. \n" }, { "page_number": 343, "text": "PART | II Managing Information Security\n310\n actually faster than green-home-plate-gallery-view, which \nmay yield thousands of images and may take hundreds of \nhours to accurately review. \n In practice, the easiest way is seldom the most ethi-\ncal way to solve a problem; neither is it always the most \nefficient method of getting a job done. 
Currently, there \nare many such similar scenarios that exist within the \ndiscipline of computer forensics. As the forensic tech-\nnology industry grows and evolves, professional organi-\nzations may eventually emerge that will provide codes of \nethics and some syncretism in regard to these issues. For \nnow, however, it falls on attorneys to retain experienced \ncomputer forensic technologists who place importance \non developing appropriate ethical protocols. Only when \nthese protocols are in place can we successfully under-\nstand the breadth and scope of searches and prevent pos-\nsible violations of privacy. \n Database Reconstruction \n A disk that hosts an active database is a very busy place. \nAgain and again throughout this chapter, a single recur-\nring theme will emerge: Data that has been overwritten \ncannot, by any conventionally known means, be recov-\nered. If it could be, then Kroll Ontrack and every other \ngiant in the forensics business would be shouting this \nservice from the rooftops and charging a premium price \nfor it. Experimentally, accurate statistics on the amount \nof data that will be overwritten by the seemingly random \naction of a write head may be available, but most likely \nit functions by rules that are different for every system \nbased on the amount of usage, the size of the files, and \nthe size of the unallocated clusters. Anecdotally, the \nformula goes something like this; the rules change under \nany given circumstances, but this story goes a long way \ntoward telling how much data will be available: \n On a server purposed with storing surveillance video, \nthere are three physical hard drives. Drive C serves as \nthe operating system (OS) disk and program files disk; \nDrives E and F, 350 gigabytes (GB) each, serve as stor-\nage disks. When the remote DVR units synchronize each \nevening, every other file writes to every other disk of the \ntwo storage drives. Thirty-day-old files automatically get \ndeleted by the synchronization tool. \n After eight months of use, the entire unallocated \nclusters of each drive, 115 GB on one drive and 123 GB \non the other, are completely filled with MPG data. An \nadditional 45 GB of archived deleted files are available \nto be recovered from each drive. \n In this case, the database data were MPG movie files. \nIn many databases, the data, the records (as the database \nindexes and grows and shrinks and is compacted and \noptimized), will grow to populate the unallocated clus-\nters. Database records found in the unallocated clusters \nare not an indicator of deleted records. Database records \nthat exist in the unallocated clusters that don’t exist in \nthe live database are a sign of deleted records. \n Lesson learned: Don’t believe everything you see. \nCheck it out and be sure. Get second, third, and fourth \nopinions when you’re uncertain. \n 3. COMPUTER FORENSICS IN THE \nCOURT SYSTEM \n Computer forensics is one of the few computer-related \nfields in which the practitioner will be found in the \n FIGURE 19.2 Green-home-plate-gallery-view. \n" }, { "page_number": 344, "text": "Chapter | 19 Computer Forensics\n311\n Preserving Digital Evidence in the Age of eDiscovery 2 \n Society has awoken in these past few years to the reali-\nties of being immersed in the digital world. With that, the \nharsh realities of how we conduct ourselves in this age of \nbinary processing are beginning to take form in terms of \nboth new laws and new ways of doing business. 
In many \nactions, both civil and criminal, digital documents are the \nnew “ smoking gun. ” And with the new Federal laws that \nopen the floodgates of accessibility to your digital media, \nthe sanctions for mishandling such evidence become a fact \nof law and a major concern. \n At some point, most of us (any of us) could become \ninvolved in litigation. Divorce, damage suits, patent infringe-\nment, intellectual property theft, and employee misconduct \nare just some examples of cases we see. When it comes \nto digital evidence, most people simply aren’t sure of their \nresponsibilities. They don’t know how to handle the requests \nfor and subsequent handling of the massive amounts of data \nthat can be crucial to every case. Like the proverbial smok-\ning gun, “ digital evidence ” must be handled properly. \n Recently, a friend forwarded us an article about a case \nruling in which a routine email exhibit was found inadmissi-\nble due to authenticity and hearsay issues. What we should \ntake away from that ruling is that electronically stored \ninformation (ESI), just like any other evidence, must clear \nstandard evidentiary hurdles. Whenever ESI is offered as \nevidence, the following evidence rules must be considered. \n In most courts, there are four types of evidence. Computer \nfiles that are extracted from a subject machine and presented \nin court typically fall into one or more of these types: \n ● Documentary evidence is paper or digital evidence that \ncontains human language. It must meet the authenticity \nrequirements outlined below. It is also unique in that it \nmay be disallowed if it contains hearsay. Emails fall into \nthe category of documentary evidence. \n ● Real evidence must be competent (authenticated), rel-\nevant, and material. For example, a computer that was \ninvolved in a court matter would be considered real evi-\ndence provided that it hasn’t been changed, altered, or \naccessed in a way that destroyed the evidence. The abil-\nity to use these items as evidence may be contingent on \nthis fact, and that’s is why preservation of a computer or \ndigital media must be done. \n ● Witness testimony . With ESI, the technician should be able \nto verify how he retrieved the evidence and that the evi-\ndence is what it purports to be, and he should be able to \nspeak to all aspects of computer use. The witness must both \nremember what he saw and be able to communicate it. \n ● Demonstrative evidence uses things like PowerPoint, \nphotographs, or computer-aided design (CAD) drawings \nof crime scenes to demonstrate or reconstruct an event. \nFor example, a flowchart that details how a person goes \nto a Web site, enters her credit-card number, and makes \na purchase would be considered demonstrative. \n For any of these items to be submitted in court, they each \nmust, to varying degrees, pass the admissibility requirements \nof relevance, materiality, and competence. For evidence to be \n relevant , it must make the event it is trying to prove either more \nor less probable. A forensic analyst may discover a certain Web \npage on the subject hard drive that shows the subject visited a \nWeb site where flowers are sold and that he made a purchase. \nIn addition to perhaps a credit-card statement, this shows that \nit is more probable that the subject of an investigation visited \nthe site on his computer at a certain time and location. 
\n Materiality means that something not only proves the \nfact (it is relevant to the fact that it is trying to prove) but is \nalso material to the issues in the case. The fact that the sub-\nject of the investigation purchased flowers on a Web site \nmay not be material to the matter at hand. \n Finally, competency is the area where the forensic side \nof things becomes most important. Assuming that the pur-\nchase of flowers from a Web site is material (perhaps it is \na stalking case), how the evidence was obtained and what \nhappened to it after that will be put under a microscope \nby both the judge and the party objecting to the evidence. \nThe best evidence collection experts are trained profession-\nals with extensive experience in their field. The best attor-\nneys will understand this and will use experts when and \nwhere needed. Spoliation results from mishandled ESI, and \nspoiled data is generally inadmissible. It rests upon every-\none involved in a case — IT directors, business owners, and \nattorneys — to get it right. Computer forensics experts can-\nnot undo damage that has been done, but if involved in the \n beginning , they can prevent it from happening. \ncourtroom on a given number of days of the year. With \nthat in mind, the following sections are derived from \nthe author’s experiences in the courtroom, the lessons \nlearned there, and the preparation leading up to giv-\ning testimony. To most lawyers and judges, compu-\nter forensics is a mysterious black art. It is as much a \ndiscipline of the art to demystify and explain results in \nplain English as it is to conduct an examination. It was \nwith special consideration of the growing prevalence \nof the use of electronically stored information (ESI) in \nthe courtroom, and the general unfamiliarity with how it \nmust be handled as evidence, that spawned the idea for \nthe sidebar “ Preserving Digital Evidence in the Age of \neDiscovery. ” \n 2 Scott R. Ellis, “ Preserving digital evidence in the age of ediscov-\nery, ” Daily Southtown , 2007. \n" }, { "page_number": 345, "text": "PART | II Managing Information Security\n312\n What Have You Been Clicking On? \n You’ve probably heard the rumor that whenever you click \non something on a Web page, you leave a deeply rooted \ntrail behind you for anyone (with the right technology) to \nsee. In computer forensics, just like in archeology, these \npieces that a user leaves behind are called artifacts . An \nartifact is a thing made by a human. It tells the story of a \nbehavior that happened in the past. \n On your computer, that story is told by metadata stored in \ndatabases as well as by files stored on your local machine. \nThey reside in numerous places, but this article will address \njust three: Internet history, Web cache, and temporary \nInternet files (TIF). Because Internet Explorer (IE) was devel-\noped in a time when bandwidth was precious, storing data \nlocally prevents a browser from having to retrieve the same \ndata every time the same Web page is visited. Also, at the \ntime, no established technology existed that, through a \nWeb browser, could show data on a local machine without \nfirst putting the file on the local machine. A Web browser, \nessentially, was a piece of software that combined FTP and \n document-viewing technology into one application plat-\nform, complete with its own protocol, HTTP. \n In IE, press Ctrl \u0002 H to view your history. 
This will \nshow a list of links to all the sites that were viewed over a \nperiod of four weeks. The history database only stores the \nname and date the site was visited. It holds no information \nabout files that may be cached locally. That’s the job of the \nWeb cache. \n To go back further in history, an Internet history viewer is \nneeded. A Google search for “ history.dat viewer ” will turn \nup a few free tools. \n The Web cache database, index.dat, is located in the \nTIF folder; it tracks the date, the time the Web page down-\nloaded, the original Web page filename, and its local name \nand location in the TIF. Information stays in the index.dat \nfor a long time, much longer than four weeks. You will \nnotice that if you set your computer date back a few weeks \nand then press Ctrl \u0002 H , you can always pull the last four \nweeks of data for as long as you have had your computer \n(and not cleared your cache). Using a third-party viewer to \nview the Web cache shows you with certainty the date and \norigination of a Web page. The Web cache is a detailed \ninventory of everything in the TIF. Some history viewers will \nshow Web cache information, too. \n The TIF is a set of local folders where IE stores Web pages \nyour computer has downloaded. Typically, the size varies \ndepending on user settings. But Web sites are usually small, \nso 500 MB can hold thousands of Web pages and images! \nViewing these files may be necessary. Web mail, financial, \nand browsing interests are all stored in the TIF. However, \nmalicious software activity, such as pop-ups, exploits, \nviruses, and Trojans, can cause many strange files to appear \nin the TIF. For this reason, files that are in the TIF should \nbe compared to their entry in the Web cache. Time stamps \non files in the TIF may or may not accurately show when a \nfile was written to disk. System scans periodically alter Last \nAccessed and Date Modified time stamps! Because of hard-\ndisk caching and delayed writing, the Date Created time \nstamp may not be the actual time the file arrived. Computer \nforensics uses special tools to analyze the TIF, but much is \nstill left to individual interpretations. \n Inspection of all the user’s Internet artifacts, when intact, \ncan reveal what a user was doing and whether or not a \nclick trail exists. Looking just at time stamps or IE history \nisn’t enough. Users can easily delete IE history, and time \nstamps aren’t always accurate. Missing history can disrupt \nthe trail. Missing Web cache entries or time stamp – altering \nsystem scans can destroy the trail. Any conclusions are best \nnot preceded by a suspended leap through the air (you may \nland badly and trip and hurt yourself). Rather, check for \nviruses and bad patching, and get the artifacts straight. If \nthere is a click trail, it will be revealed by the Web cache, \nthe files in the TIF, and the history. Bear in mind that when \npieces are missing, the reliability of the click trail erodes, \nand professional examination may be warranted. \n 4. UNDERSTANDING INTERNET HISTORY \n Of the many aspects of user activity, the Internet history \nis usually of the greatest interest. In most investigations, \npeople such as HR, employers, and law enforcement \nseek to understand the subject’s use of the Internet. What \nWeb sites did he visit? When did he visit them? Did he \nvisit them more than once? 
The article “ What Have You \nBeen Clicking On ” (see sidebar) seeks to demystify the \nconcept of temporary Internet files (TIF).\n 5. TEMPORARY RESTRAINING ORDERS \nAND LABOR DISPUTES \n A temporary restraining order (TRO) will often be issued \nin intellectual property or employment contract disputes. \nThe role of the forensic examiner in a TRO may be \nmultifold, or it may be limited to a simple, one-time \nacquisition of a hard drive. Often when an employee \nleaves an organization under less than amicable terms, \naccusations will be fired in both directions and the \n" }, { "page_number": 346, "text": "Chapter | 19 Computer Forensics\n313\nresulting lawsuit will be a many-headed beast. Attorneys \non both sides may file motions that result in forensic \nanalysis of emails, user activity, and possible contract \nviolations as well as extraction of information from \nfinancial and customer relationship management (CRM) \ndatabases. \n Divorce \n Typically the forensic work done in a divorce case will \ninvolve collecting information about one of the parties \nto be used to show that trust has been violated. Dating \nsites, pornography, financial sites, expatriate sites, and \nemail should be collected and reviewed. \n Patent Infringement \n When one company begins selling a part that is patented \nby another company, a lawsuit will likely be filed in fed-\neral court. Subsequently, the offending company will \nbe required to produce all the invoices relating to sales \nof that product. This is where a forensic examiner may \nbe required. The infringed-on party may find through \ntheir own research that a company has purchased the \npart from the infringer and that the sale has not been \nreported. A thorough examination of the financial sys-\ntem will reveal all the sales. It is wise when doing this \nsort of work to contact the financial system vendor to \nget a data dictionary that defines all the fields and the \npurpose of the tables. \n Invoice data is easy to collect. It will typically reside \nin just two tables: a header and a detail table. These \ntables will contain customer codes that will need to be \njoined to the customer table, so knowing some SQL will \nbe a great help. Using the database server for the specific \ndatabase technology of the software is the gold standard \nfor this sort of work. Getting the collection to launch \ninto VMware is the platinum standard, but sometimes an \nimage won’t want to boot. Software utilities such as Live \nView do a great job of preparing the image for deploy-\nment in a virtualized environment. \n When to Acquire, When to Capture \nAcquisition \n When a forensics practitioner needs to capture the data \non a hard disk, he does so in a way that is forensically \nsound. This means that, through any actions on the part \nof the examiner, no data on the hard drive is altered and \na complete and total copy of the surface of the hard drive \nplatters is captured. Here are some common terms used \nto describe this process: \n ● Collection \n ● Mirror \n ● Ghost \n ● Copy \n ● Acquisition \n Any of these terms is sufficient to describe the proc-\ness. The one that attorneys typically use is mirror because \nthey seem to understand it best. A “ forensic ” acquisi-\ntion simply means that the drive was write-protected \nby either a software or hardware write blocker while the \nacquisition was performed. 
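A routine part of demonstrating that a copy is forensically sound is verifying the acquired image against the hash recorded at acquisition time. A small sketch of that verification step follows (the image filename, the choice of MD5, and the chunk size are illustrative assumptions, not a prescribed procedure):

import hashlib

def hash_image(path, algorithm="md5", chunk_size=4 * 1024 * 1024):
    """Hash a forensic image in chunks so very large files never load into memory at once."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as image:
        for chunk in iter(lambda: image.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the value reported by the write-blocked imaging tool.
print("MD5 of image:", hash_image("evidence_drive.dd"))

If the computed value matches the hash reported by the imaging hardware or software, the examiner can attest that the working copy is bit-for-bit identical to what was acquired.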
Acquisition of an entire hard drive is the standard approach in any case that will require a deep analysis of the behaviors and activities of the user. It is not always the standard procedure. However, most people will agree that a forensic procedure must be used whenever information is copied from a PC. Forensic, enterprise, and ediscovery cases all vary in their requirements for the amount of data that must be captured. In discovery, much of what is located on a computer may be deemed "inaccessible," which is really just fancy lawyer talk for "it costs too much to get it." Undeleting data from hundreds of computers in a single discovery action in a civil case would be a very rare thing to happen and would only take place if massive malfeasance was suspected. In these cases, forensic creation of logical evidence files allows the examiner to capture and copy relevant information without altering the data.

Creating Forensic Images Using Software and Hardware Write Blockers

Both software and hardware write blockers are available. Software write blockers are versatile and come in two flavors. One is a module that "plugs" into the forensic software and can generally be used to write block any port on the computer. The other method of software write blocking is to use a forensic boot disk, which boots the computer from the boot disk rather than from the suspect hard drive, so that the operating system never writes to the evidence drive. Developing checklists that can be repeatable procedures is an ideal way to ensure solid results in any investigation.

Software write blockers are limited by the port speed of the port they are blocking, plus some overhead for the write-blocking process. But then, all write blockers are limited in this manner.

Hardware write blockers are normally optimized for speed. Forensic copying tools such as Logicube and Tableau are two examples of hardware write blockers, though there are many companies now that make them. Logicube will both hash and image a drive at a rate of about 3GB a minute. These devices are small and portable and can replace the need for bulky PCs on a job site. There are also appliances and large enterprise software packages that are designed to automate and alleviate the labor requirements of large discovery/disclosure acquisitions that may span thousands of computers.

Live Capture of Relevant Files

Before conducting any sort of capture, all steps should be documented and reviewed with counsel before proceeding. Preferably, attorneys from both sides on a matter and the judge agree to the procedure before it is enacted. Whenever a new procedure or technique is introduced late on the job site, if there are auditors or observers present, the attorneys will argue, which can delay the work by several hours. Most forensic software can be loaded to a USB drive and launched on a live system with negligible forensic impact to the operating environment. Random Access Memory (RAM) captures are becoming more popular; currently, launching a capture tool on the live system is the only way to capture an image of physical RAM. Certain companies are rumored to be creating physical RAM write blockers. Launching a forensic application on a running system will destroy a substantial amount of physical RAM as well as the paging file. If either RAM or the paging file is needed, the capture must be done with a write blocker.
\n Once the forensic tool is launched, either with a write \nblocker or on a live system, the local drive may be pre-\nviewed. The examiner may only be interested in Word \ndocuments, for example. Signature analysis is a lengthy \nprocess in preview mode, as are most searches. A bet-\nter method, if subterfuge is not expected: Filtering the \ntable pane by extension produces a list of all the docs. \n “ Exporting ” them will damage the forensic informa-\ntion, so instead you need to create a logical evidence file \n(LEF). Using EnCase, a user can create a condition to \nview all the .DOC files and then dump the files into a \nlogical evidence file in about 30 seconds. Once the logi-\ncal evidence file is created, it can later be used to cre-\nate a CD-ROM. There are special modules available that \nwill allow an exact extraction of native files to CD to \nallow further processing for a review tool. \n Redundant Array of Independent (or \nInexpensive) Disks (RAID) \n Acquiring an entire RAID set disk by disk and then reas-\nsembling them in EnCase is probably the easiest way of \ndealing with a RAID and may be the only way to capture \na software RAID. Hardware RAIDs can be most effi-\nciently captured using a boot disk. This allows the cap-\nture of a single volume that contains all the unique data \nin an array. It can be trickier to configure and as with \neverything, practice makes perfect. Be sure you under-\nstand how it works. The worst thing that can happen is \nthat the wrong disk or an unreadable disk gets imaged \nand the job has to be redone at your expense. \n File System Analyses \n FAT12, FAT16, and FAT32 are all types of file systems. \nSpecial circumstances aside, most forensic examiners \nwill find themselves regularly dealing with either FAT \nor NTFS file systems. FAT differs from NTFS primarily \nin the way that it stores information about how it stores \ninformation. Largely, from the average forensic exam-\niner’s standpoint, very little about the internal workings \nof these file systems is relevant. Most modern forensic \nsoftware will do the work of reconstructing and extract-\ning information from these systems, at the system level, \nfor you. Nonetheless, an understanding of these systems \nis critical because, at any given time, an examiner just \nmight need to know it. The following are some examples \nshowing where you might need to know about the file \nsystem: \n ● Rebuilding RAID arrays \n ● Locating lost or moved partitions \n ● Discussions of more advanced information that can \nbe gleaned from entries in the MFT or FAT \n The difference between FAT12, 16, and 32 is in the \nsize of each item in the File Allocation Table (FAT). \nEach has a correspondingly sized entry in the FAT. For \nexample, FAT12 has a 12-bit entry in the FAT. Each \n12-bit sequence represents a cluster. This places a limi-\ntation on the file system regarding the number of file \nextents available to a file. The FAT stores the following \ninformation: \n ● Fragmentation \n ● Used or unused clusters \n ● A list of entries that correspond to each cluster on \nthe partition \n ● Marks a cluster as used, reserved, unused, or bad \n ● The cluster number of the next cluster in the chain \n Sector information is stored in the directory. In the \nFAT file system, directories are actually files that contain \nas many 32-byte slots as there are entries in the folder. 
\nThis is also where deleted entries from a folder can be \n" }, { "page_number": 348, "text": "Chapter | 19 Computer Forensics\n315\n Oops! Did I Delete That? \n The file is gone. It’s not in the recycle bin. You’ve done a \ncomplete search of your files for it, and now the panic sets \nin. It’s vanished. You may have even asked people to look \nthrough their email because maybe you sent it to them (you \ndidn’t). Oops. Now what? \n Hours, days, maybe even years of hard work seem to be \nlost. Wait! Don’t touch that PC! Every action you take on \nyour PC at this point may be destroying what is left of your \nfile. It’s not too late yet. You’ve possibly heard that things \nthat are deleted are never really deleted, but you may also \nthink that it will cost you thousands of dollars to recover \ndeleted files once you’ve emptied the recycle bin. Suffice \nit to say, unless you are embroiled in complicated edis-\ncovery or forensic legal proceedings where preservation is \na requirement, recovering some deleted files for you may \ncost no more than a tune-up for your car. \n Now the part where I told you: “ Don’t touch that PC! ” \nI meant it. Seriously: Don’t touch that PC. The second you \nrealize you’ve lost a file, stop doing anything. Don’t visit \nany Web sites. Don’t install software. Don’t reformat your \nhard drive, do a Windows repair, or install undelete soft-\nware. In fact, if it’s not a server or running a shared data-\nbase application, pull the plug from the back of the PC. \nEvery time you “ write ” to your hard drive, you run the risk \nthat you are destroying your lost file. In the case of one \nauthor we helped, ten years of a book were “ deleted. ” With \nimmediate professional help, her files were recovered. \n Whether you accidentally pulled your USB drive from the \nslot without stopping it first (corrupting your MFT), intention-\nally deleted it, or have discovered an “ OS not found ” mes-\nsage when you booted your PC, you need your files back and \nyou need them now. However, if you made the mistake of \nactually overwriting a file with a file that has the same name \nand thereby replaced it, “ Abandon hope all ye who enter \nhere. ” Your file is trashed, and all that may be left are some \nscraps of the old file, parts that didn’t get overwritten but can \nbe extracted from file slack. If your hard drive is physically \ndamaged, you may also be looking at an expensive recovery. \ndiscovered. Figure 19.3 shows how the sector view is \nrepresented by a common forensic analysis tool. \n NTFS \n NTFS is a significant advancement in terms of data \nstorage. It allows for long filenames, almost unlimited \nstorage, and a more efficient method of accessing infor-\nmation. It also provides for much greater latency in \ndeleted files, that is, deleted files stick around a lot longer \nin NTFS than they do in FAT. The following items are \nunique to NTFS. Instead of keeping the filenames in \nfolder files, the entire file structure of NTFS is retained in \na flat file database called the Master File Table (MFT): \n ● Improved support for metadata \n ● Advanced data structuring improves performance \nand reliability \n ● Improved disk space utilization with a maximum disk \nsize of 7.8TB, or 2 64 sectors; sector sizes can vary in \nNTFS and are most easily controlled using a third-\nparty partitioning tool such as Partition Magic. 
\n ● Greater security \n The Role of the Forensic Examiner in \nInvestigations and File Recovery \n The forensic examiner is at his best when he is searching \nfor deleted files and attempting to reconstruct a pattern of \nuser behavior. The sidebar “ Oops! Did I Delete That? ” \nfirst appeared in the Chicago Daily Southtown column \n “ Bits You Can Use. ” In addition, this section also includes \na discussion of data recovery and an insurance investiga-\ntion article (see sidebar, “ Don’t Touch That Computer! \nData Recovery Following Fire, Flood, or Storm ” ).\n FIGURE 19.3 The sector view. \n" }, { "page_number": 349, "text": "PART | II Managing Information Security\n316\n Fire, flood, earthquakes, landslides, and other catastrophes \noften result in damaged computer equipment — and loss \nof electronic data. For claims managers and adjusters, this \ndata loss can manifest itself in an overwhelming number \nof insurance claims, costing insurers millions of dollars \neach year. \n The improper handling of computers immediately fol-\nlowing a catastrophic event is possibly one of the leading \ncauses of data loss — and once a drive has been improperly \nhandled, the chances of data retrieval plummet. Adjusters \noften assume a complete loss when, in fact, there are \nmethods that can save or salvage data on even the most \ndamaged computers. \n Methods to Save or Salvage Data \n One of the first steps toward salvaging computer data is to \nfocus on the preservation of the hard drive. Hard drives are \nhighly precise instruments, and warping of drive compo-\nnents by even fractions of a millimeter will cause damage \nto occur in the very first moments of booting up. During \nthese moments the drive head can shred the surface of the \nplatters, rendering the data unrecoverable. \n Hard drives are preserved in different ways, depending \non the damaging event. If the drive is submerged in a flood, \nthe drive should be removed and resubmerged in clean, dis-\ntilled water and shipped to an expert. It is important that this \nis done immediately. Hard drive platters that are allowed \nto corrode after being exposed to water, especially if the \ndrive experienced seepage, will oxidize and data will be \ndestroyed. A professional can completely disassemble the \ndrive to ensure that all of its parts are dry, determine what \nlevel of damage has already occurred, and then decide \nhow to proceed with the recovery. Care must always be \ntaken during removal from the site to prevent the drive from \nbreaking open and being exposed to dust. \n Fire or Smoke Damage \n After a fire or a flood, the hard drive should not be moved \nand in no circumstances should it be powered up. A certi-\nfied computer forensic expert with experience in handling \ndamaged drives should be immediately contacted. Typically, \nthese experts will be able to dismantle the drive and move \nit without causing further damage. They are able to assess \nthe external damage and arrive at a decision that will safely \ntriage the drive for further recovery steps. Fire-damaged \ndrives should never be moved or handled by laymen. \n Shock Damage \n Shock damage can occur when someone drops a compu-\nter or it is damaged via an automobile accident; in more \ncatastrophic scenarios, shock damage can result when serv-\ners fall through floors during a fire or are even damaged \nby bulldozers that are removing debris. This type of crush-\ning damage often results in bending of the platters and can \nbe extensive. 
As in fire and flood circumstances, the drive \nshould be isolated and power should not be applied. \n A drive that has been damaged by shock presents a \nunique challenge: from the outside, the computer may \nlook fine. This is typical of many claims involving laptop \ncomputers damaged during an automobile collision. If the \n Don’t Touch That Computer! Data Recovery Following Fire, Flood, or Storm \n Here’s the quick version of how deleted files \nbecome “ obliterated ” and unrecoverable:\n Step 1: The MFT that contains a reference to your file \nmarks the deleted file space as available. \n Step 2: The actual sectors where the deleted file \nresides might be either completely or partially \noverwritten. \n Step 3: The MFT record for the deleted file is over-\nwritten and destroyed. Any ability to easily \nrecover your deleted file at this point is lost. \n Step 4: The sectors on the PC that contain your deleted \nfile are overwritten and eventually only slight traces \nof your lost file remain. Sometimes the MFT record \nmay be destroyed before the actual sectors are over-\nwritten. This happens a lot, and these files are recov-\nerable with a little extra work. \n It’s tremendously amazing how fast this process can \noccur. It’s equally amazing how slowly this process can \noccur. Recently I retrieved hundreds of files from a system \nthat had periodically been deleting files, and new files were \nwritten to the disk on a daily basis. Yet recovery software \nsuccessfully recovered hundreds of complete files from \nnearly six months ago! A lot of free space on the disk contri-\nbuted to this success. A disk that is in use and is nearly full will \nbe proportionally less likely to contain old, deleted files. \n The equipment and training investment to perform \nthese operations is high, so expect that labor costs will be \nhigher, but you can also expect some degree of success \nwhen attempting to recover files that have been acciden-\ntally deleted. Let me leave you with two dire warnings: \nDisks that have experienced surface damage (scored plat-\nters) are unrecoverable with current technology. And never, \never, ever disrupt power to your PC when it’s performing a \ndefrag. The results are disastrous. \n" }, { "page_number": 350, "text": "Chapter | 19 Computer Forensics\n317\ncomputer consultant can verify that the drive was powered \ndown at the time the accident occurred, most will be com-\nfortable attempting to power up a drive that has been in \na collision, to begin the data capture process. At the first \nsign of a change in the dynamics of the drive, a head click-\ning or a drive spinning down, power will be cut from the \ndrive and the restoration will continue in a clean room \nwhere the drive will be opened up and protected from \nharmful dust. \n The Importance of Off-Site Computer Backup \n One of the best ways to maximize computer data recov-\nery efforts is to have off-site computer backup. For adjust-\ners arriving at the scene, this should be one of the first \nquestions asked. An offsite backup can take many forms. \nSome involve the use of special data centers that synchro-\nnize data constantly. There are companies that provide this \nservice. Other backups, for smaller companies, may be as \nmundane (but effective) as removing a tape backup of criti-\ncal data from the site on a daily basis. With proper rota-\ntion and several tapes, a complete backup of data is always \noffsite. 
Prices for these services vary widely depending on \nhow much data needs to be backed up and how often. \n Case Studies \n Scenario 1 \n John ran a home-based IT business. After his home burned \ndown, John posted an insurance claim for a $500,000 loss \nfor lost income, damaged computer equipment, and lost \nwages. He also charged the insurance company $60,000 \nfor the three months he spent recovering data from the \ndrives. Because he was only able to recover 25% of the \ndata, he posted an additional claim for the cost of recon-\nstructing the Web sites that he hosted from his home. \n For the computer forensic consultant, this case raised \nseveral questions. As an IT professional, John should have \nknown better than to touch the hard drive and attempt \nto recover any of the data himself. Also, when a claim-\nant intentionally or unintentionally inflicts damage on his \nor her own property after an event, who is responsible ? \nThrough a thorough evaluation of the circumstances and \nintense questioning of the claimant, the claim was eventu-\nally reduced to a substantially smaller amount. \n Scenario 2 \n Sammie’s Flowers has 113 retail outlets and one cen-\ntral headquarters where they keep photography, custom \nsoftware, catalog masters, and the like. There is no offsite \nbackup. Everything is on CD-ROMs or on the hard drive of \nthe company’s server. \n One night lightning struck the headquarters building and \nit burned down. An IT appraiser lacking the appropriate \ncomputer forensic skills evaluated the computer equipment \nafter the fire. No attempts were made to recover data from \nthe hard drives or to start the computers; because of their \ndamaged physical condition, they were simply thrown into \na dumpster. \n One year later, the insured filed a claim for $37 million. \nUnder the terms of the insured’s policy, coverage for valu-\nable papers and business personal property was most per-\ntinent to the case. The policy limit for valuable papers is \nsmall and easily reached. The coverage limit for business \npersonal property, on the other hand, will cover the $37 mil-\nlion claim — if the court decides that the computer data that \nwas lost qualifies as “ business valuable papers. ” Though this \ncase is still pending, the cost of resolving this claim could \nbe astronomic, and had a computer data recovery expert \nbeen consulted, the claim amount could have been reduced \nby millions. \n Scenario 3 \n Alisa, a professional photographer, was in a car accident \nthat damaged her laptop computer. She had been using the \nPC at a rest stop prior to the accident and later reported \nto the adjuster that when she booted it up after the acci-\ndent she heard “ a strange clicking and clacking sound. ” \nUnfortunately for Alisa, that was the sound of data being \ndestroyed. She posted a $500,000 claim to her insurance \ncompany under her business policy — including the cost of \n2000 lost images and the cost of equipment, site, model, \nand agency fees for a one-day photography shoot. Had the \nPC, which had a noticeable crack in it caused by the acci-\ndent, been professionally handled, the chances are good \nthe data could have been recovered and the claim would \nhave been significantly reduced. \n Computer equipment is always at risk of being damaged — \nwhether by flood, fire, lightning, or other catastrophic \nmeans. However, damage does not always equal data loss. 
\nIndeed, companies and their adjusters can be quick to write \noff damaged storage media when, in fact, recovery may be \npossible. By taking the immediate measures of protecting \nthe computers from touch and power and by calling in pro-\nfessional computer forensic experts to assess the damage, \ninsurers can reap the benefits in reduced claim amounts. \n Password Recovery \n The following is a short list of the ways and types of \npasswords that can be recovered. Many useful tools \nthat can be downloaded from the Internet for free will \ncrack open system files that store passwords. Software \nprograms, such as Peachtree (a financial database), \nWindows 98, certain FTP programs, and the like all store \npasswords in a way that allows their easy retrieval. \n" }, { "page_number": 351, "text": "PART | II Managing Information Security\n318\n Recovering license keys for software is often an \nimportant step in reconstructing or virtualizing a disk \nimage. Like passwords, without a valid license key the \nsoftware won’t work. There are a number of useful pro-\ngrams that can recover software license keys from the \nregistry of a computer that can be found with a quick \nGoogle search. Understanding the mind of the user can \nalso be helpful in locating things such as password stor-\nage tools or simply thinking to search a computer for \nthe word password . Many, many Web sites will pass the \npassword down to a client machine through the Password \nfield in HTML documents. Some developers have wised \nup to this “ feature ” and they strip it out before it comes \ndown, but most of them do it on the client side. This \nmeans that with the proper intercepting tool, the pass-\nword can be captured midstream on its way down to the \nclient, before it gets stripped out. \n Password cracking can be achieved with a minimal \namount of skill and a great deal of patience. Having \nsome idea of what the password is before cracking it \nwill be helpful. You can also purchase both online serv-\nices as well as software tools that will strip the password \nright out of a file, removing it completely. Word docu-\nments are particularly vulnerable, and zip files are par-\nticularly invulnerable. However, there is a method (and a \nfree download) that can figure out the password of a zip \nfile if a sample of a file that is known to be in the zip can \nbe provided. \n File Carving \n In most investigations, the very first place a file system \nexamination begins is with live files. Live files are those \nfiles that still have MFT entries. Link file, trash bin, \nOutlook Temporary (OLK) folders, recent items, ISO \nlists, Internet history, TIF, and thumb databases all consti-\ntute a discernible, unique pattern of user activity. As such, \nthey hold particular interest. By exploring these files an \nexaminer can make determinations about file origins, the \nusage of files, the distribution of files, and of course the \ncurrent location of the files. But sometimes the subject \nhas been very clever and removed all traces of activity. Or \nthe suspect item may be a server, used merely as a reposi-\ntory for the files. Or maybe someone just wants to recover \na file that they deleted a long, long time ago (see sidebar, \n “ Oops, Did I Delete That? ” ). \n When such need arises, the vast graveyard called the \n unallocated clusters could hold the last hope that the \nfile can be recovered. 
By searching the unallocated clus-\nters using a search tool designed for such things, and by \nusing a known keyword in the file, one may locate the \nportion within the unallocated clusters where a file used \nto reside. Typically, search hits will be stored under a tab \nor in a particular area of the forensic toolset, and they \nmay be browsed, one by one, along with a small excerpt \nfrom the surrounding bits. By clicking on the search hit, \nanother pane of the software window may show a more \nexpanded view of the hit location. If it is a document, \nwith text, then that is great and you may see other words \nthat were also known to have been in the target file. Now, \nin TV shows like CSI , of course the document is always \nthere, and by running some reverse 128-bit decrytion \nsequencer to an inverted 12-bit decryption sequencer that \nreloops the hashing algorithm through a 256-bit decom-\npiler by rethreading it into a multiplexing file marker, \nthey can just right click and say “ export this ” and the file \nwill print out, even if it’s not even on the computer that is \nbeing examined and never was. (Yes, I made all that up.) \n In the real world, more often than not we find that \nour examinations are spurred and motivated and wholly \ncreated by someone’s abject paranoia. In these cases, no \namount of digging will ever create the evidence that they \nwant to see. That leaves only creative use of time stamps \non documents to attempt to create an aroma of guilt about \nthe subject piece. Sometimes we find that even after root-\ning through 300GB of unallocated clusters, leaving no \nstone unturned, the file just isn’t there. But sometimes, all \npessimism aside, we find little bits and pieces of interest-\ning things all salted around throughout the unallocated \nclusters. \n The first place to turn is the automated carvers. By \nfamiliarizing ourselves with the hexadecimal patterns \nof file signatures (and I’ve provided a nice table for you \nhere), we may view the hex of the unallocated clus-\nters in a hex editor or in the hex pane of the examina-\ntion tool. Or possibly we already know the type of file. \nLet’s say that we know the type of file because our client \ntold us that they only use Word as their document editor. \nWe scroll to the beginning of the section of text, which \nmight look like this: \n Figure sample file signature \n From the text pane view of EnCase: \n Ð Ï · à ¡ \u000b á ················ \u0005 ··· þ ÿ ········· \n From the Hex view: \n 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 4F 6F 70 73 \n21 20 20 44 69 64 \n 20 49 20 64 65 6C 65 74 65 20 74 68 61 74 3F 0D 42 79 \n20 53 63 6F 74 74 20 \n 52 2E 20 45 6C 6C 69 73 0D 73 65 6C 6C 69 73 40 75 73 \n2E 72 67 6C 2E 63 6F \n 6D 0D 0D 54 68 65 20 66 69 6C 65 20 69 73 20 67 6F 6E \n65 2E 20 20 49 74 92 \n" }, { "page_number": 352, "text": "Chapter | 19 Computer Forensics\n319\n 73 20 6E 6F 74 20 69 6E 20 74 68 65 20 72 65 63 79 63 \n6C 65 20 62 69 6E 2E \n 20 59 6F 75 92 76 65 20 64 6F 6E 65 20 61 20 63 6F 6D \n70 6C 65 74 65 20 73 \n Scrolling down in the text pane, we then find the \nfollowing: \n ···············Oops! Did I delete that? By Scott R. Ellis The file \nis gone. It’s not in the recycle bin. You’ve. . . . . . . 
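The same signature-scanning idea is easy to script. What follows is a minimal, illustrative Python sketch (not a feature of EnCase or any other suite mentioned here): it hunts a raw export of unallocated clusters for the OLE compound-document header (the D0 CF 11 E0 A1 B1 1A E1 byte pattern visible at the start of the hex view above, which legacy Word .doc files begin with) and writes an oversized chunk of data at each hit. The filename unallocated.bin and the 4MB window are invented for the example; as explained in the following paragraphs, over-capturing is acceptable because Word will generally ignore anything past the true end of the document.

    # Minimal sketch: scan a raw dump of unallocated clusters for the OLE
    # compound-document signature that legacy .doc files start with, then export
    # an oversized window of bytes at each hit for manual review in Word.
    DOC_SIG = bytes.fromhex("D0CF11E0A1B11AE1")   # signature shown in the hex view above
    WINDOW = 4 * 1024 * 1024                      # deliberately bigger than the expected file

    with open("unallocated.bin", "rb") as raw:    # illustrative filename
        data = raw.read()                         # fine for a sketch; stream for large images

    count = 0
    hit = data.find(DOC_SIG)
    while hit != -1:
        count += 1
        with open(f"carve_{count:03d}.doc", "wb") as out:
            out.write(data[hit:hit + WINDOW])
        hit = data.find(DOC_SIG, hit + 1)

    print(f"exported {count} candidate document(s)")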
\n By simply visually scanning the unallocated clus-\nters, we can pick up where the file begins and, if the file \nsignature isn’t in the provided list of signatures or if for \nsome reason the carving scripts in the forensic software \nare incorrectly pulling files, they may need to be manu-\nally set up. Truly, for Word files, that is all you need to \nknow. You need to be able to determine the end and the \nbeginning of a file. Some software will ignore data in the \nfile before and after the beginning and end of file signa-\ntures. This is true for many, many file types; I can’t tell \nyou which ones because I haven’t tried them all. There \nare some file types that need a valid end-of-file (EOF) \nmarker, but most don’t. However, if you don’t cap-\nture the true EOF (sensible marker or no), the file may \nlook like garbage or all the original formatting will be \nscrambled or it won’t open. Some JPEG viewers (such \nas Adobe Photoshop) will throw an error if the EOF is \nnot found. Others, such as Internet Explorer, won’t even \nnotice. Here’s the trick — and it is a trick, and don’t let \nanyone tell you differently; they might not teach this \nin your average university computer forensics class: \nStarting with the file signature, highlight as many of the \nunallocated clusters after the file signature that you think \nwould possibly be big enough to hold the entire file size. \nNow double that, and export it as raw data. Give it a \n.DOC extension and open it in Word. Voil á ! The file has \nbeen reconstructed. Word will know where the document \nends and it will show you that document. If you happen \nto catch a few extra documents at the end, or a JPG or \nwhatever, Word will ignore them and show only the first \ndocument. \n Unless some sort of drastic “ wiping action ” has taken \nplace, as in the use of a third-party utility to delete data, \nI have almost always found that a great deal of deleted \ndata is immediately available in EnCase (forensic soft-\nware) within 20 – 25 minutes after a hard disk image is \nmounted, simply by running “ recover folders ” and sit-\nting back and waiting while it runs. This is especially \ntrue when the drive has not been used at all since the \ntime the data was deleted. Preferably, counsel will have \ntaken steps to ensure that this is the case when a compu-\nter is the prime subject of an investigation. Often this is \nnot the case, however. Many attorneys, IT, and HR direc-\ntors “ poke around ” for information all on their own. \n It is conceivable that up to 80% of deleted data on \na computer may be readily available, without the neces-\nsity of carving, for up to two or three years, as long as \nthe computer hasn’t seen extreme use (large amounts of \nfiles, or large amounts of copying and moving of very \nlarge files) that could conceivably overwrite the data. \n Even so, searching unallocated clusters for file types \ntypically does not require the creation of an index. \nDepending on the size of the drive, it may take four or \nfive hours for the carving process to complete, and it \nmay or may not be entirely successful, depending on the \ntype of files that are being carved. For example, MPEG \nvideos do not carve well at all, but there are ways around \nthat. DOC and XLS files usually carve out quite nicely. \n Indexing is something that is done strictly for the \npurpose of searching massive amounts of files for large \nnumbers of keywords. 
We rarely use EnCase to search for keywords; we have found it better to use Relativity, our review environment, to allow the people who are interested in the keywords to do the keyword searching themselves as they perform their review. Relativity is built on an SQL platform on which indexing is a known and stable technology.

In other words (as in the bottom line), spending 15 to 25 minutes with a drive, an experienced examiner can provide a very succinct answer as to how long it would take to produce the files that they want. And, very likely, the answer could be, "Another 30 minutes and it will be yours." Including time to set up, extract, and copy to disk, if everything is in perfect order, two hours is the upper limit. This is based on the assumption that the deleted data they are looking for was deleted in the last couple of weeks of the use of the computer. If they need to go back more than a couple of months, an examiner may end up carving into the unallocated clusters to find "lost" files: files for which part or all of the master file table entry has been obliterated and portions of the files themselves may be overwritten.

Carving is considered one of the consummate forensic skills. Regardless of the few shortcuts that exist, carving requires a deep, disk-level knowledge of how files are stored, and it requires a certain intuition that cannot be "book taught." Examiners gain this talent from years of looking at raw disk data. Regardless, even the most efficient and skilled of carvers will turn to their automated carving tools. Two things that the carving tools excel at are carving out images and print spool files (EMFs). What are they really bad at? The tools I use don't even begin to work properly to carve out email files. GREP (global regular expression print) searching doesn't provide for branching logic, so you can't locate a qualified email header, every single time, and capture the end of it. The best you can do is create your own script to carve out the emails. GREP does not allow for any sort of true logic that would be useful or even efficient at capturing something as complex as the many variations of email headers that exist, but it does allow for many alterations of a single search term to be formulated with a single expression. For example, the words house, housing, houses, and housed could all be searched for with a single statement such as "hous[(e)|(es)|(ing)|(ed)]". GREP can be useful, but it is not really a shortcut. Each option added to a GREP statement doubles the length of time the search will take to run. Searching for house(s) has the same run time as two separate keywords for house and houses. GREP also allows for efficient pattern matching. For example, if you wanted to find all the phone numbers on a computer for three particular area codes, you could formulate a GREP expression. Using a test file and running the search each time, an expression can be built up, step by step, that finds phone numbers in any of three area codes:

    (708)|(312)|(847)                                  Checks for the three area codes

    [\(]?(708)|(312)|(847)[\-\)\.]?                    Checks for parentheses and other formatting

    [\(]?(708)|(312)|(847)[\-\)\.]?###[\-\.]?####      Checks for the rest of the number

This statement will find any 10-digit string that is formatted like a phone number, as well as any 10-digit string that contains one of the three area codes. This last option, to check for any 10-digit number string, if run against an entire OS, will likely return numerous results that aren't phone numbers. The question marks render the search for phone number formatting optional.
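For readers more at home with standard regular-expression engines than with EnCase GREP, the same pattern can be approximated in Python's re module. This is a loose, hedged translation rather than the EnCase syntax: the # digit marker becomes \d, and this version only flags numbers containing one of the three area codes rather than every string formatted like a phone number. The test strings are invented.

    import re

    # Rough PCRE-style equivalent of the GREP expression built above: optional "(",
    # one of three area codes, optional ")", "-", or ".", three digits, optional
    # separator, four digits.
    phone = re.compile(r"\(?(708|312|847)[\)\-\.]?\s?\d{3}[\-\.]?\d{4}")

    for sample in ["(708) 555-1212", "312.555.1212", "8475551212", "555-1212"]:
        print(sample, "->", "match" if phone.search(sample) else "no match")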
The following are the characters that are used to formulate a GREP expression. Typically, the best use of GREP is its ability to formulate pattern-matching searches. In GREP, the following symbols are used to formulate an expression:

.    The period is a wildcard and means a space must be occupied by any character.

*    The asterisk is a wildcard that means any character or no character. It will match multiple repetitions of the character as well.

?    The character preceding the question mark must repeat 0 or 1 times. It provides instructions as to how to search for the character or grouping that precedes it.

+    This is like the question mark, only the character must exist at least one or more times.

#    Matches a number.

[ ]  Matches a list of characters. [hH]i matches hi and Hi (but not hHi!).

^    This is a "not" and will exclude a part from a string.

[-]  A range of characters such as [a-z] will find any single letter, a through z.

\    This will escape the standard GREP search symbols so that they may be included as part of the search. For example, a search string that has the ( symbol in it (such as a phone number) needs to have the parentheses escaped so that the string can be included as part of the search.

|    This is an "or." See the previous sample search for area codes.

\x   Searches for the indicated hex string. Preceding a hex character with \x marks the next two characters as hexadecimal characters. Using this to locate a known hex string is more efficient than relying on it to be interpreted from Unicode or UTF.

Most of the popular computer forensics applications include stock scripts that can carve for you. They have scripted modules that will run, and all you have to do is select the signature you want and voilà, it carves it right out of the unallocated clusters for you. Sounds pretty slick, and it is slick when it works. The problem is that some files, such as MPEG video, don't have a set signature at the beginning and end of each file. So how can we carve them? Running an MPEG carver will make a mess. It's a far better thing to do a "carve" by locating MPEG data, highlighting it, exporting it to a file, and giving it an MPEG extension.

Things to Know: How Time Stamps Work

Let's take an example: Bob in accounting has been discovered to be pilfering from the cash box. A forensics examiner is called in to examine his computer system to see if he has been engaging in any activities that would be against company policy and to see if he has been accessing areas of the network that he shouldn't be. They want to know what he has been working on. A quick examination of his PC turns up a very large cache of pornography.
A casual glance at the Entry Modified time stamp \nshows that the images were created nearly one year \nbefore Bob’s employment, so automatically the investi-\ngator disregards the images and moves onto his search \n" }, { "page_number": 354, "text": "Chapter | 19 Computer Forensics\n321\n Experimental Evidence \n Examining and understanding how time stamps behave \non individual PCs and operating systems provide some \nof the greatest challenges facing forensic examiners. \nThis is not due to any great difficulty, but rather because \nof the difficulty in clearly explaining it to others. This \nexaminer once read a quote from a prosecutor in a local \nnewspaper that said, “ We will clearly show that he \nviewed the image on three separate occasions. ” In court \nthe defense’s expert disabused her of the notion she held \nthat Last Written, Last Accessed, Entry Modified, and \nDate Created time stamps were convenient little record-\nings of user activity. Rather, they are references mostly \nused by the operating system for its own arcane pur-\nposes. Table 19.1 compares the three known Windows \ntime stamps with the four time stamps in EnCase. \n XP \n A zip file was created using a file with a Date Created time \nstamp of 12/23/07 10:40:53AM (see ID 1 in Table 19.2 ). \nIt was then extracted and the time stamps were examined. \n Using Windows XP compressed folders, the file was \nthen extracted to a separate file on a different system \n(ID 2 in Table 19.2 ). Date Created and Entry Modified \ntime stamps, upon extraction, inherited the original Date \nCreated time stamp of 12/23/07 10:40:53AM and Last \nAccessed of 04/28/08 01:56:07PM. \n TABLE 19.2 Date Created Time stamp \n ID \n Name \n Last Accessed \n File Created \n Entry Modified \n 1 \n IMG_3521.CR2 \n 04/28/08 01:56:07PM \n 12/23/07 10:40:53AM \n 03/15/08 09:11:15AM \n 2 \n IMG_3521.CR2 \n 04/28/08 01:56:07PM \n 12/23/07 10:40:53AM \n 04/28/08 01:57:12PM \nfor evidence of copying and deleting sensitive files to \nhis local machine. The investigator begins to look at the \ndeleted files. His view is filtered, so he is not looking at \nanything but deleted files. He leaves the view in “ gallery ” \nview so that he can see telltale images that may give \nclues as to any Web sites used during the timeframe of \nthe suspected breaches. To his surprise, the investigator \nbegins seeing images from that porn cache. He notices \nnow that when a deleted file is overwritten, in the gallery \nview of the software the image that overwrote the deleted \nfile is displayed. He makes the logical conclusion that \nthe Entry Modified time stamp is somehow wrong. \n On a Windows XP machine, an archive file is \nextracted. Entry Modified time stamps are xx:xx:xx, even \nthough the archive was extracted to the file system on yy:\nyy:yy. Normally when a file is created on a system, it \ntakes on the system date as its Date Created time stamp. \nSuch is not the case with zip files. \n Entry Modified, in the world of computer forensics, \nis that illustrious time stamp that has cinched many a \ncase. It is a hidden time stamp that users never see, and \nfew of them actually know about it. As such, they can-\nnot change it. A very little-known fact about the Entry \nModified time stamp is that it is constrained. It can be no \nlater than the Date Created time stamp. (This is not true \nin Vista.) \n When a zip file is created, the Date Created and Date \nModified time stamps become the same. 
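A quick way to see how thin these user-visible time stamps are is to query and then alter them programmatically. The sketch below is for illustration only: Python's os.stat exposes Last Accessed, Last Written, and, on Windows, Date Created, while Entry Modified lives in the file's MFT record and is not visible through this interface. The path reuses the IMG_3521.CR2 name from Table 19.2 purely as an example, and the final call shows how trivially Last Accessed and Last Written can be back-dated, which is one more reason examiners treat them with caution.

    import os
    from datetime import datetime

    path = r"C:\cases\IMG_3521.CR2"        # invented path for illustration

    def stamp(ts):
        return datetime.fromtimestamp(ts).strftime("%m/%d/%y %I:%M:%S%p")

    st = os.stat(path)
    print("Last Accessed:", stamp(st.st_atime))
    print("Last Written :", stamp(st.st_mtime))
    print("Date Created :", stamp(st.st_ctime))   # creation time on Windows only
    # Entry Modified is kept in the NTFS Master File Table and is not exposed here.

    # Anti-forensics in one call: back-date Last Accessed and Last Written by a year.
    year = 365 * 24 * 60 * 60
    os.utime(path, (st.st_atime - year, st.st_mtime - year))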
\n TABLE 19.1 Comparison of Three Known Windows Time stamps with the Four EnCase Time stamps \n Windows \n EnCase \n Purpose \n Date Created \n Date Created \n Typically this is the first time a file appeared on a system. It is not always \naccurate. \n Date Modified \n Last Written \n Usually this is the time when a system last finished writing or changing \ninformation in a file. \n Last Accessed \n Last Accessed \n This time stamp can be altered by any number of user and system \nactions. It should not be interpreted as the file having been opened and \nviewed. \n N/A \n Entry Modified \n This is a system pointer that is inaccessible to users through the Explorer \ninterface. It changes when the file changes size. \n" }, { "page_number": 355, "text": "PART | II Managing Information Security\n322\n The system Entry Modified (not to be confused with \nDate Modified) became 04/28/08 01:57:12PM. \n Various operating systems can perform various \noperations that will, en masse , alter the Entry Modified \ntime stamp (see Table 19.3 ). For example, a tape resto-\nration of a series of directories will create a time stamp \nadjustment in Entry Modified that corresponds to the \ndate of the restoration. The original file is on another \nsystem somewhere and is inaccessible to the investigator \n(because he doesn’t know about it). \n In Table 19.3 , Entry Modified becomes a part of \na larger pattern of time stamps after an OS event. On \na computer on which most of the time stamps have an \nEntry Modified time stamp that is sequential to a spe-\ncific timeframe, it is now more difficult to determine \nwhen the file actually arrived on the system. As long \nas the date stamps are not inherited from the overwrit-\ning file by the overwritten file, examining the files that \nwere overwritten by ID2 ( Table 19.3 ), can reveal a No \nLater Than time. In other words, the file could not have \nappeared on the system prior to the file that it overwrote. \n Vista \n A zip file was created using a file with a Date Created \ntime stamp of dd:mm:yyyy(a) and a date modified of \ndd:mm:yy(a). Using Windows Vista compressed fold-\ners, the file was then extracted to a separate file on the \nsame system. Date Modified time stamps, on extraction, \ninherited the original time stamp of dd:mm:yyyy(a), but \nthe Date Created time stamp reflected the true date. This \nis a significant change from XP. There are also tools \navailable that will allow a user to mass-edit time stamps. \nForensic examiners must always bear in mind that there \nare some very savvy users who research and understand \nantiforensics. \n Email Headers and Time stamps, Email \nReceipts, and Bounced Messages \n There is much confusion in the ediscovery industry and in \ncomputer forensics in general about how best to interpret \nemail time stamps. Though it might not offer the perfect \n “ every case ” solution, this section reveals the intricacies \nof dealing with time stamps and how to interpret them \ncorrectly. \n Regarding sources of email, SMTP has to relay email \nto its own domain. HELO/EHLO allows a user to con-\nnect to the SMTP port and send email. \n As most of us are aware, in 2007 the U.S. Congress \nenacted the Energy Policy Act of 2005 ( http://www.\nepa.gov/oust/fedlaws/publ_109-058.pdf , Section 110. \nDaylight Savings). This act was passed into law by \nPresident George W. Bush on August 8, 2005. 
Among \nother provisions, such as subsidies for wind energy, \nreducing air pollution, and providing tax breaks to \nhomeowners for making energy-conserving changes to \ntheir homes, it amended the Uniform Time Act of 1966 \nby changing the start and end dates for Daylight Savings \nTime (DST) beginning in 2007. Previously, clocks would \nbe set ahead by an hour on the first Sunday of April and \nset back on the last Sunday of October. The new law \nchanged this as follows: Starting in 2007 clocks were set \nahead one hour on the first Sunday of March and then \nset back on the first Sunday in November. Aside from \nthe additional confusion now facing everyone when we \nreview email and attempt to translate Greenwich Mean \nTime (GMT) to a sensible local time, probably the \nonly true noteworthy aspect of this new law is the extra \ndaylight time afforded to children trick-or-treating on \nHalloween. Many observers have questioned whether or \nnot the act actually resulted in a net energy savings. \n In a world of remote Web-based email servers, it has \nbeen observed that some email sent through a Web mail \ninterface will bear the time stamp of the time zone wherein \nthe server resides. Either your server is in the Central \nTime zone or the clock on the server is set to the wrong \ntime/time zone. Servers that send email mark the header \nof the email with the GMT stamp numerical value (noted \nin bold in the example that follows) as opposed to the \nactual time zone stamp. For example, instead of saying \n08:00 CST, the header will say 08:00 (-0600). The GMT \ndifferential is used so that every email client interprets that \nstamp based on the time zone and time setting of itself \n TABLE 19.3 Altering the Entry Modified Time stamp \n ID \n Name \n Last Accessed \n File Created \n Entry Modified \n 1 \n IMG_3521.CR2 \n 04/28/08 01:56:07PM \n 12/23/07 10:40:53AM \n 03/15/08 09:11:15AM \n 2 \n IMG_3521.CR2 \n 05/21/08 03:32:01PM \n 12/23/07 10:40:53AM \n 05/21/08 03:32:01PM \n" }, { "page_number": 356, "text": "Chapter | 19 Computer Forensics\n323\nand is able to account for things like Daylight Savings \nTime offsets. This is a dynamic interpretation; if I change \nthe time zone of my computer, it will change the way \nOutlook shows me the time of each email, but it doesn’t \nactually physically change the email itself. For example, if \nan email server is located in Colorado, every email I send \nappears to have been sent from the Mountain Time zone. \nMy email client interprets the Time Received of an email \nbased on when my server received the mail, not when my \nemail client downloads the email from my server. \n If a server is in the Central Time zone and the client \nis in Mountain Time, the normal Web mail interface will \nnot be cognizant of the client’s time zone. Hence those \nare the times you’ll see. I checked a Webmail account \non a server in California that I use and it does the same \nthing. Here I’ve broken up the header to show step by \nstep how it moved. 
Here is, first, the entire header in \nits original context, followed by a breakdown of how I \ninterpret each transaction in the header: \n ****************************************** \n Received: from p01c11m096.mxlogic.net (208.65.144.247) \nby mail.us.rgl.com \n (192.168.0.12) with Microsoft SMTP Server id 8.0.751.0; \nFri, 30 Nov 2007 \n 21:03:15 -0700 \n Received: from unknown [65.54.246.112] (EHLO bay0-\nomc1-s40.bay0.hotmail.com) \n by p01c11m096.mxlogic.net (mxl_mta-5.2.0-1) with ESMTP id \n 23cd0574.3307895728.120458.00-105.p01c11m096.\nmxlogic.net (envelope-from \n \u0004 timezone32@hotmail.com \u0005 ); Fri, 30 Nov 2007 20:59:46 -\n0700 (MST) \n Received: from BAY108-W37 ([65.54.162.137]) by bay0-\nomc1-s40.bay0.hotmail.com \n with Microsoft SMTPSVC(6.0.3790.3959); Fri, 30 Nov \n2007 19:59:46 -0800 \n Message-ID: \u0004 BAY108-W374BF59F8292A9D2C95F08B\nA720@phx.gbl \u0005 \n Return-Path: timezone32@hotmail.com \n Content-Type: multipart/alternative; boundary \u0003 “ \u0003\n _reb-r538638D0-t4750DC32 ” \n X-Originating-IP: [71.212.198.249] \n From: Test Account \u0004 timezone3@hotmail.com \u0005 \n To: Bill Nelson \u0004 attorney@attorney12345.com \u0005 , Scott \nEllis \u0004 sellis@us.rgl.com \u0005 \n Subject: FW: Norton Anti Virus \n Date: Fri, 30 Nov 2007 21:59:46 -0600 \n Importance: Normal \n In-Reply-To: \u0004 BAY108-W26EE80CDDA1C4C632124ABA\n720@phx.gbl \u0005 \n References: \u0004 BAY108-W26EE80CDDA1C4C632124ABA7\n20@phx.gbl \u0005 \n MIME-Version: 1.0 \n X-OriginalArrivalTime: 01 Dec 2007 03:59:46.0488 \n(UTC) FILETIME \u0003 [9CAC5B80:01C833CE] \n X-Processed-By: Rebuild v2.0-0 \n X-Spam: \n[F \u0003 0.0038471784; \nB \u0003 0.500(0); \nspf \u0003 0.500; \nCM \u0003 0.500; \nS \u0003 0.010(2007110801); \nMH \u0003 0.500(2007113048); \nR \u0003 0.276(1071030201529); \nSC \u0003 none; SS \u0003 0.500] \n X-MAIL-FROM: \u0004 timezone3@hotmail.com \u0005 \n X-SOURCE-IP: [65.54.246.112] \n X-AnalysisOut: [v \u0003 1.0 c \u0003 0 a \u0003 Db0T9Pbbji75CibVOCAA:9 \na \u0003 rYVTvsE0vOPdh0IEP8MA:] \n X-AnalysisOut: [7 a \u0003 TaS_S6-EMopkTzdPlCr4MVJL5DQA:4 \na \u0003 NCG-xuS670wA:10 a \u0003 T-0] \n X-AnalysisOut: [QtiWyBeMA:10 a \u0003 r9zUxlSq4yJzxRie7pAA:7 \na \u0003 EWQMng83CrhB0XWP0h] \n X-AnalysisOut: [vbCEdheDsA:4 a \u0003 EfJqPEOeqlMA:10 \na \u0003 37WNUvjkh6kA:10] \n ************************************** \n Looks like a bunch of garbage, right? Here it is, step \nby step, transaction by transaction, in reverse chronolog-\nical order: \n 1. My server in Colorado receives the email (GMT dif-\nferential is in bold): \n Received: from p01c11m096.mxlogic.net (208.65.144.247) \nby mail.us.rgl.com \n (192.168.0.12) with Microsoft SMTP Server id 8.0.751.0; \nFri, 30 Nov 2007 21:03:15 -0700 \n 2. Prior to that, my mail-filtering service in Colorado \nreceives the email: \n Received: from unknown [65.54.246.112] (EHLO bay0-\nomc1-s40.bay0.hotmail.com) \n by p01c11m096.mxlogic.net (mxl_mta-5.2.0-1) with ESMTP id \n 23cd0574.3307895728.120458.00-105.p01c11m096.\nmxlogic.net (envelope-from \n \u0004 timezone3@hotmail.com \u0005 ); Fri, 30 Nov 2007 20:59:46 \n -0700 (MST) \n – The email server receives the sender’s email in \nthis next section. On most networks, the mail \nserver is rarely the same machine on which a user \ncreated the email. This next item in the header of \nthe email shows that the email server is located in \nthe Pacific Time zone. 
65.54.246.112, the x-ori-\ngin stamp, is the actual IP address of the compu-\nter that sent the email: \n Received: from BAY108-W37 ([65.54.162.137]) by bay0-\nomc1-s40.bay0.hotmail.com \n with Microsoft SMTPSVC(6.0.3790.3959); Fri, 30 Nov \n2007 19:59:46 -0800 \n" }, { "page_number": 357, "text": "PART | II Managing Information Security\n324\n Message-ID: \u0004 BAY108-W374BF59F8292A9D2C95F08B\nA720@phx.gbl \u0005 \n Return-Path: timezone310@hotmail.com \n – This content was produced on the server where the \nWebmail application resides. Technically, the email \nwas created on the Web client application with only \none degree of separation between the originating IP \nand the sender IP. By examining the order and type \nof IP addresses logged in the header, a trail can be \ncreated that shows the path of mail servers that the \nemail traversed before arriving at its destination. \nThis machine is the one that is likely in the Central \nTime zone, since it can be verified by the -0600 in \nthe following. The X-originating IP address is the \nIP address of the sender’s external Internet con-\nnection IP address in her house and the X-Source \nIP address is the IP address of the Webmail server \nshe logged into on this day. This IP address is \nalso subject to change because they have many \nWebmail servers as well. In fact, comparisons to \nolder emails sent on different dates show that it is \ndifferent. Originating IP address is also subject to \nchange since a DSL or cable Internet is very likely \na dynamic account, but it (likely) won’t change as \nfrequently as the X-source: \n Content-Type: multipart/alternative; boundary \u0003 “ \u0003 _reb-\nr538638D0-t4750DC32 ” \n X-Originating-IP: [71.212.198.249] \n From: Test Account \u0004 @hotmail.com \u0005 \n To: Bill Nelson \u0004 attorney@attorney12345.com \u0005 , Scott \nEllis \u0004 sellis@us.rgl.com \u0005 \n Subject: FW: Norton Anti Virus \n Date: Fri, 30 Nov 2007 21:59:46 -0600 \n Importance: Normal \n In-Reply-To: \u0004 BAY108-W26EE80CDDA1C4C632124ABA\n720@phx.gbl \u0005 \n References: \u0004 BAY108-W26EE80CDDA1C4C632124ABA7\n20@phx.gbl \u0005 \n MIME-Version: 1.0 \n X-OriginalArrivalTime: 01 Dec 2007 03:59:46.0488 \n(UTC) FILETIME \u0003 [9CAC5B80:01C833CE] \n X-Processed-By: Rebuild v2.0-0 \n X-Spam: [F \u0003 0.0038471784; B \u0003 0.500(0); spf \u0003 0.500; \nCM \u0003 0.500; S \u0003 0.010(2007110801); MH \u0003 0.500(2007113\n048); R \u0003 0.276(1071030201529); SC \u0003 none; SS \u0003 0.500] \n X-MAIL-FROM: \u0004 timezone310@hotmail.com \u0005 \n X-SOURCE-IP: [65.54.246.112] \n X-AnalysisOut: [v \u0003 1.0 c \u0003 0 a \u0003 Db0T9Pbbji75CibVOCAA:9 \na \u0003 rYVTvsE0vOPdh0IEP8MA:] \n X-AnalysisOut: [7 a \u0003 TaS_S6-EMopkTzdPlCr4MVJL5DQA:4 \na \u0003 NCG-xuS670wA:10 a \u0003 T-0] \n X-AnalysisOut: [QtiWyBeMA:10 a \u0003 r9zUxlSq4yJzxRie7p\nAA:7 a \u0003 EWQMng83CrhB0XWP0h] \n X-AnalysisOut: [vbCEdheDsA:4 a \u0003 EfJqPEOeqlMA:10 \na \u0003 37WNUvjkh6kA:10] \n From: Test Account [ mailto:timezone310@hotmail.com ] \n Sent: Friday, November 30, 2007 10:00 PM \n To: Bill Nelson; Scott Ellis \n Subject: FW: Norton Anti Virus \n Bill and Scott, \n By the way, it was 8:57 my time when I sent the last email, \nhowever, my hotmail shows that it was 9:57 pm. Not sure if \ntheir server is on Central time or not. Scott, can you help \nwith that question? Thanks. 
\n Anonymous \n From: timezone310@hotmail.com \n To: attorney@attorney12345.com ; sellis@us.rgl.com \n CC: timezone310@hotmail.com \n Subject: Norton Anti Virus \n Date: Fri, 30 Nov 2007 21:57:16 -0600 \n Bill and Scott, \n I am on the computer now and have a question for you. \nCan you please call me? \n Anonymous \n Steganography “ Covered Writing ” \n Steganography tools provide a method that allows a \nuser to hide a file in plain sight. For example, there are a \nnumber of stego software tools that allow the user to hide \none image inside another. Some of these do it by simply \nappending the “ hidden ” file at the tail end of a JPEG file \nand then add a pointer to the beginning of the file. The \nmost common way that steganography is discovered on \na machine is through the detection of the steganography \nsoftware on the machine. Then comes the arduous task \nof locating 11 of the files that may possibly contain hid-\nden data. Other, more manual stego techniques may be \nas simple as hiding text behind other text. In Microsoft \nWord, text boxes can be placed right over the top of other \ntext, formatted in such a way as to render the text unde-\ntectable to a casual observer. Forensic tools will allow \nthe analyst to locate this text, but on opening the file the \ntext won’t be readily visible. Another method is to hide \nimages behind other images using the layers feature of \nsome photo enhancement tools, such as Photoshop. \n StegAlyzerAS is a tool created by Backbone Security \nto detect steganography on a system. It works by both \nsearching for known stego artifacts as well as by searching \n" }, { "page_number": 358, "text": "Chapter | 19 Computer Forensics\n325\nfor the program files associated with over 650 steganog-\nraphy toolsets. Steganography hash sets are also available \nwithin the NIST database of hash sets. Hash sets are data-\nbases of MD5 hashes of known unique files associated \nwith a particular application. \n 5. FIRST PRINCIPLES \n In science, first principles refer to going back to the \nmost basic nature of a thing. For example, in physics, an \nexperiment is ab initio (from first principles) if it only \nsubsumes a parameterization of known irrefutable laws \nof physics. The experiment of calculation does not make \nassumptions through modeling or assumptive logic. \n First principles, or ab initio, may or may not be \nsomething that a court will understand, depending on the \ncourt and the types of cases it tries. Ultimately the very \nbest evidence is that which can be easily duplicated. In \nobservation of a compromised system in its live state, \neven if the observation photographed or videoed may \nbe admitted as evidence but the events viewed cannot be \nduplicated, the veracity of the events will easily be ques-\ntioned by the opposition. \n During an investigation of a defendant’s PC, an \nexaminer found that a piece of software on the computer \nbehaved erratically. This behavior had occurred after the \ncomputer had been booted from a restored image of the PC. \nThe behavior was photographed and introduced in court \nas evidence. The behavior was mentioned during a cross-\nexamination and had not, originally, been intended as use \nfor evidence; it was simply something that the examiner \nrecalled seeing during his investigation, that the list of files \na piece of software would display would change. The pros-\necution was outraged because this statement harmed his \ncase to a great deal. 
The instability and erratic behavior of \nthe software was one of the underpinnings of the defense. \nThe examiner, in response to the prosecutor’s accusations \nof ineptitude, replied that he had a series of photographs \nthat demonstrated the behavior. The prosecutor requested \nthe photos, but the examiner didn’t have them in court. He \nbrought them the next day, at which time, when the jury \nwas not in the room, the prosecutor requested the photos, \nreviewed them, and promptly let the matter drop. \n It would have been a far more powerful thing to have \nproduced the photographs at the time of the statement; \nbut it may have also led the prosecution to an ab initio \neffort — one that may have shown that the defense expert’s \nfindings were irreproducible. In an expert testimony, \nthe more powerful and remarkable a piece of evidence, \nthe more likely it is to be challenged by the opposition. \nIt is an intricate game because such a challenge may \nultimately destroy the opposition’s case, since a corrobo-\nrative result would only serve to increase the veracity \nand reliability of the expert’s testimony. Whether you are \ndefense, prosecution, or plaintiff, the strongest evidence \nis that which is irrefutable and relies on first principles. \nAristotle defined it as those circumstances where “ for the \nsame (characteristic) simultaneously to belong and not \nbelong to the same (object) in the same (way) is impos-\nsible. ” In less obfuscating, 21st-century terms, the fol-\nlowing interpretation is applicable: One thing can’t be \ntwo different things at the same time in the same circum-\nstance; there is only one truth, and it is self-evidentiary \nand not open to interpretation. For example, when a com-\nputer hard drive is imaged, the opposition may also image \nthe same hard drive. If proper procedures are followed, \nthere is no possible way that different MD5 hashes could \nresult. Black cannot be white. \n The lesson learned? Never build your foundation on \nirreproducible evidence. To do so is tantamount to build-\ning the case on “ circumstantial ” evidence. \n 6. HACKING A WINDOWS XP \nPASSWORD \n There are many, many methods to decrypt or “ hack ” a \nWindows password. This section lists some of them. One \nof the more interesting methods of cracking passwords \nthrough the use of forensic methods is hacking the Active \nDirectory. It is not covered here, but suffice it to say that \nthere is an awesome amount of information stored in the \nActive Directory file of a domain server. With the correct \ntools and settings in place, Bitlocker locked PCs can be \naccessed and passwords can be viewed in plaintext with \njust a few simple, readily available scripts. \n Net User Password Hack \n If you have access to a machine, this is an easy thing, and \nthe instructions to do it can easily be found on YouTube. \nType net users at the Windows command line. Pick a \nuser. Type net user username * . (You have to type the \nasterisk or it won’t work.) You will then, regardless of \nyour privileges, be allowed to change any password, \nincluding the local machine administrator password. \n Lanman Hashes and Rainbow Tables \n ● The following procedure can be used to “ reverse-\nengineer ” the password from where it is stored in \nWindows. Lan Manager (or Lanman, or LM) has \n" }, { "page_number": 359, "text": "PART | II Managing Information Security\n326\nbeen used by Windows, in versions prior to Windows \nVista, to store passwords that are shorter than 15 \ncharacters. 
The vast majority of passwords are stored \nin this format. LM hashes are computed via a short \nseries of actions. The following items contribute to \nthe weakness of the hash: \n ● Password is converted to all uppercase. \n ● Passwords longer than seven characters are divided \ninto two halves. By visual inspection of the hash, \nthis allows us to determine whether the second \nhalf is padding. We can do this by viewing all the \nLM hashes on a system and observing whether the \nsecond halves of any of the hashes are the same. This \nwill speed up the process of decrypting the hash. \n ● There is no salt. In cryptography, salt is random bits \nthat are thrown in to prevent large lookup tables of \nvalues from being developed. \n Windows will store passwords using the Lanman \nhash. Windows Vista has changed this. For all ver-\nsions of Windows except Vista, about 70GB of what \nare called rainbow tables can be downloaded from the \nInternet. Using a tool such as the many that are found \non Backtrack will capture the actual hashes that are \nstored for the password on the physical disk. Analysis \nof the hashes will show whether or not the hashes are in \nuse as passwords. Rainbow tables, which can be down-\nloaded from the Web in a single 70GB table, are simply \nlookup tables of every possible iteration of the hashes. \nBy entering the hash value, the password can be eas-\nily and quickly reverse-engineered and access to files \ncan be gained. A favorite method of hackers is to install \ncommand-line software on remote machines that will \nallow access to the Lanman hashes and will send them \nvia FTP to the hacker. Once the hacker has admin rights, \nhe owns the machine. \n Password Reset Disk \n Emergency Boot CD (EBCD) is a Linux-based tool \nthat allows you to boot a computer that has an unknown \npassword. Using this command-line tool, you can reset \nthe administrator password very easily. It will not tell \nyou the plaintext of the password, but it will clear it so \nthat the machine can be accessed through something like \nVMware with a blank password. \n Memory Analysis and the Trojan Defense \n One method of retrieving passwords and encryption keys \nis through memory analysis — physical RAM. RAM can be \nacquired using a variety of relatively nonintrusive methods. \nHBGary.com offers a free tool that will capture RAM with \nvery minimal impact. In addition to extracting encryp-\ntion keys, RAM analysis can be used to either defeat or \ncorroborate the Trojan defense. The Responder tool from \nHBGary (single-user license) provides in-depth analysis \nand reporting on the many malware activities that can be \ndetected in a RAM environment. The Trojan defense is \ncommonly used by innocent and guilty parties to explain \nunlawful actions that have occurred on their computers. \nThe following items represent a brief overview of the types \nof things that can be accomplished through RAM analysis: \n ● A hidden driver is a 100% indicator of a bad guy. \nHidden drivers can be located through analysis of the \nphysical memory. \n ● Using tools such as FileMon, TCPView, and \nRegMon, you can usually readily identify malware \ninfections. There is a small number of advanced \nmalwares that are capable of doing things such as \nrolling up completely (poof, it’s gone!) when they \ndetect the presence of investigative tools or that are \ncapable of escaping a virtualized host. All the same, \nwhen conducting a malware forensic analysis, be \nsure to isolate the system from the network. 
\n ● RAM analysis using a tool such as HBGary’s \nResponder can allow reverse-engineering of the \nprocesses that are running and can uncover potential \nmalware behavioral capabilities. As this science \nprogresses, a much greater ability to easily and \nquickly detect malware can be expected. \n User Artifact Analysis \n There is nothing worse than facing off against an oppos-\ning expert who has not done his artifact analysis on a \ncase. Due to an increasing workload in this field, experts \nare often taking shortcuts that, in the long run, really \nmake more work for everyone. In life and on computers, \nthe actions people take leave behind artifacts. The fol-\nlowing is a short list of artifacts that are readily viewed \nusing any method of analysis: \n ● Recent files \n ● OLK files \n ● Shortcuts \n ● Temporary Internet Files (TIF) \n ● My Documents \n ● Desktop \n ● Recycle Bin \n ● Email \n ● EXIF data \n" }, { "page_number": 360, "text": "Chapter | 19 Computer Forensics\n327\n Users create all these artifacts, either knowingly or \nunknowingly, and aspects of them can be reviewed and \nunderstood to indicate that certain actions on the compu-\nter took place — for example, a folder in My Documents \ncalled “ fast trains ” that contains pictures of Europe’s \nTGV and surrounding countryside, TIF sites that show \nthe user booking travel to Europe, installed software for \na Casio Exilim digital camera, EXIF data that shows the \nphotos were taken with a Casio Exilim, and email con-\nfirmations and discussions about the planned trip all \nwork together to show that the user of that account on \nthat PC did very likely take a trip to Europe and did take \nthe photos. Not that there is anything wrong with taking \npictures of trains, but if the subject of the investigation \nis a suspected terrorist and he has ties with a group that \nwas discovered to be planning an attack on a train, this \nevidence would be very valuable. \n It is the sum of the parts that matters the most. A sin-\ngle image of a train found in the user’s TIF would be vir-\ntually meaningless. Multiple pictures of trains in his TIF \ncould also be meaningless; maybe he likes trains or maybe \nsomeone sent him a link that he clicked to take him to a \nWeb site about trains. It’s likely he won’t even remember \nhaving visited the site. It is the forensic examiner’s first \npriority to ensure that all the user artifacts are considered \nwhen making a determination about any behavior. \n Recovering Lost and Deleted Files \n Unless some sort of drastic “ wiping action ” has taken \nplace, as in the use of a third-party utility to delete data \nor if the disk is part of a RAIDed set, I have almost \nalways found that deleted data is immediately available \nin EnCase (forensic software I use) within 20 to 25 min-\nutes after a hard disk image is mounted. This is espe-\ncially true when the drive has not been used at all since \nthe time the data was deleted. \n Software Installation \n Nearly every software installation will offer to drop \none on your desktop, in your Start menu, and on your \nquick launch tool bar at the time of program installa-\ntion. Whenever a user double-clicks on a file, a link \nfile is created in the Recent folder located at the root of \nDocuments and Settings. This is a hidden file. \n Recent Files \n In Windows XP (and similar locations exist in other ver-\nsions), link files are stored in the Recent folder under \nDocuments and Settings. 
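The Recent folder can be reviewed with a few lines of script as well as with a forensic suite. The sketch below is illustrative only: it assumes the XP-style profile location just described (on Vista and later the equivalent folder lives under %APPDATA%\Microsoft\Windows\Recent), lists the .lnk files, and sorts them by Last Written time to give a rough picture of what the user has been opening.

    import os
    from datetime import datetime
    from pathlib import Path

    # XP-style profile path taken from the text; adjust for newer Windows versions.
    profile = Path(os.environ.get("USERPROFILE", r"C:\Documents and Settings\user"))
    recent = profile / "Recent"

    links = sorted(recent.glob("*.lnk"), key=lambda p: p.stat().st_mtime, reverse=True)
    for lnk in links:
        written = datetime.fromtimestamp(lnk.stat().st_mtime)
        print(f"{written:%m/%d/%y %I:%M:%S%p}  {lnk.name}")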
Whenever a user double-clicks \non a file, a link file is created. Clicking the Start button \nin Windows and navigating to the My Recent Documents \nlink will show a list of the last 15 documents that a user \nhas clicked on. What most users don’t realize is that the \nC:\\Documents and Settings\\$user name$\\Recent folder \nwill potentially reveal hundreds of documents that have \nbeen viewed by the user. This list is indisputably a list \nof documents that the user has viewed. Interestingly, in \nWindows 2000, if the Preserve History feature of the \nWindows Media Player is turned off, no link files will be \ncreated. The only way to make any legitimate determina-\ntion about the use of a file is to view the Last Accessed \ntime, which has been shown in several cases to be \ninconsistent and unreliable in certain circumstances. Be \nvery careful when using this time stamp as part of your \ndefense or prosecution. It is a loaded weapon, ready to \ngo off. \n Start Menu \n The Start menu is built on shortcuts. Every item in the \nStart file has a corresponding .LNK file. Examining Last \nAccessed or Date Created time stamps may shed light on \nwhen software was installed and last used. \n Email \n Extracting email is an invaluable tool for researching \nand finding out thoughts and motives of a suspect in any \ninvestigation. Email can be extracted from traditional \nclient-based applications such as Outlook Express, Lotus \nNotes, Outlook, Eudora, and Netscape Mail as well as \nfrom common Webmail apps such as Gmail, Hotmail, \nYahoo Mail, and Excite. Reviewing log files from server-\nbased applications such as Outlook Webmail can show \na user, for example, accessing and using his Webmail \nafter employment termination. It is important that com-\npanies realize that they should terminate access to such \naccounts the day a user’s employment is terminated. \n Internet History \n Forensic analysis of a user’s Internet history can reveal \nmuch useful information. It can also show the exact \ncode that may have downloaded on a client machine \nand resulted in an infection of the system with a virus. \nForensic examiners should actively familiarize them-\nselves with the most recent, known exploits. \n Typed URLs is a registry key. It will store the last \n10 addresses that a user has typed into a Web browser \n" }, { "page_number": 361, "text": "PART | II Managing Information Security\n328\naddress field. I once had a fed try to say that every-\nthing that appeared in the drop-down window was a \n “ typed ” URL. This is not the case. The only definitive \nsource of showing the actual typed URLs is the regis-\ntry key. Just one look at the screen shown in Figure 19.4 \nshould clearly demonstrate that the user never would \nhave “ typed ” all these entries. Yet that is exactly what \na Department of Homeland Security Agent sat on the \nwitness stand and swore, under oath, was true. In Figure \n19.4 , simply typing in fil spawns a list of URLs that \nwere never typed but rather are the result of either the \nuser having opened a file or a program having opened \none. The highlighted file entered the history shown as \na result of installing the software, not as a result of the \nuser “ typing ” the filename. Many items in the history \nwind their way into it through regular software use, with \nfiles being accessed as an indirect result of user activity. \n 7. 
NETWORK ANALYSIS \n Many investigations require a very hands-off approach in which the only forensics that can be collected is network traffic. Every machine is assigned an IP address and a MAC address. The IP address operates at Layer 3 of the network stack, whereas the MAC address sits at Layer 2; like a phone number, each is meant to uniquely identify the machine. Software that is used to examine network traffic is categorized as a sniffer. Tools such as Wireshark and Colasoft are two examples of sniffers. They can be used to view, analyze, and capture all the IP traffic that comes across a port. \n Switches are not promiscuous, however. To view the traffic coming across a switch, you can either put a hub inline with the traffic (between the target and the switch) and plug the sniffer into the hub, or ports can be spanned. Spanning, or mirroring, allows one port on a switch to copy and distribute network traffic in a way such that the sniffer can see everything. The argument could be made in court that the wrong port was accidentally spanned, but this argument quickly falls apart because every network packet contains both the MAC address and the IP address of the machine that sent it. The exception is ARP poisoning, the practice of spoofing another machine's addresses so that traffic appears to come from a different host. That would be the smartest defense, but if a hub is used on a switched port, with the hub wired directly to the port, a greater degree of forensic certainty can be achieved. The only two computers that should be connected to the hub are the examiner and target machines. \n Protocols \n In the world of IP, the various languages of network traffic that are used to perform various tasks and operations are called protocols. Each protocol has its own special way of organizing and forming its packets. Unauthorized protocols viewed on a port are a good example of the type of action that might be expected from a rogue machine or employee. Network sniffing for email (SMTP and POP) traffic and capturing it can, depending on the sniffer used, allow the examiner to view live email traffic coming off the target machine. Viewing the Web (HTTP) protocol allows the examiner to capture images and text from Web sites that the target is navigating, in real time. \n Analysis \n Once the capture of traffic has been completed, analysis must take place. Colasoft offers a packet analyzer that can carve out and filter the various types of traffic. A good deal of traffic can be eliminated as just "noise" \n FIGURE 19.4 Spawning a list of URLs that were never typed (the Google one is deleted). \n" }, { "page_number": 362, "text": "Chapter | 19 Computer Forensics\n329\non the line. Filters can be created that will capture specific protocols, such as VoIP. Examination of the protocols for protocol obfuscation can, if none is found, eliminate the possibility that a user has a malware infection, and it can identify a user who is using open ports on the firewall to transmit illicit traffic. Such users sneak illicit traffic over open, legitimate ports, masking their nefarious activities as ordinary protocol traffic, knowing that the ports are open. This can be done with whitespace inside of an existing protocol, with HTTP, VoIP, and many others. The thing to look for, which will usually be clearly shown, is something like: \n VOIP → SMTP \n This basically means that VOIP is talking to a mail server. This is not normal. 
3 \n Another thing to look for is protocols coming off \na box that isn’t purposed for that task. It’s all context: \nwho should be doing what with whom. Why is the \nworkstation suddenly popping a DNS server? A real-\nworld example is when a van comes screaming into your \nneighborhood. Two guys jump out and break down the \ndoor of your house and grab your wife and kids and drag \nthem out of the house. Then they come and get you. \nSeems a little fishy, right? But it is a perfectly normal \nthing to have happen if the two guys are firemen, the van \nwas a fire truck, and your house is on fire. \n 8. COMPUTER FORENSICS APPLIED \n This section details the various ways in which computer \nforensics is applied professionally. By no means does this \ncover the extent to which computer forensics is becom-\ning one of the hottest computer careers. It focuses on the \nconsulting side of things, with less attention to corpo-\nrate or law enforcement applications. Generally speak-\ning, the average forensic consultant handles a broader \nvariety of cases than corporate or law enforcement \ndisciplines, with a broader applicability. \n Tracking, Inventory, Location of Files, \nPaperwork, Backups, and So On \n These items are all useful areas of knowledge in pro-\nviding consultative advisement to corporate, legal, and \nlaw-enforcement clients. During the process of discovery \nand warrant creation, knowledge of how users store and \naccess data at a very deep level is critical to success. \n Testimonial \n Even if the work does not involve the court system \ndirectly — for example, a technician that provides forensic \nbackups of computers and is certified — you may some-\nday be called to provide discovery in a litigation matter. \nSubsequently, you may be required to testify. \n Experience Needed \n In computer forensics, the key to a successful technolo-\ngist is experience. Nothing can substitute for experience, \nbut a good system of learned knowledge that represents \nat least the last 10 years is welcome. \n Job Description, Technologist \n Practitioners must possess extreme abilities in adapting to \nnew situations. The environment is always changing. \n Job description: \n Senior Forensic Examiner and eDiscovery Specialist \n Prepared by Scott R. Ellis, 11/01/2007 \n ● Forensics investigative work which includes imaging \nhard drives, extracting data and files for ediscov-\nery production, development of custom scripts as \nrequired to extract or locate data. On occasion this \nincludes performing detailed analyses of user activ-\nity, images, and language that may be of an undesir-\nable, distasteful, and potentially criminal format. For \nthis reason, a manager must be notified immediately \nupon the discovery of any such materials. \n ● Creation of detailed reports that lay out findings in a \nmeaningful and understandable format. All reports \nwill be reviewed and OK’d by manager before deliv-\nery to clients. \n ● Use of software tools such as FTK, EnCase, \nVMware, Recovery for Exchange, IDEA, LAW, and \nRelativity. \n ● Processing ediscovery and some paper \ndiscovery. \n ● Be responsive to opportunities for publication such \nas papers, articles, blog, or book chapter requests. \nAll publications should be reviewed by manager and \nmarketing before being submitted \nto requestor. \n ● Use technology such as servers, email, time report-\ning, and scheduling systems to perform job duties \nand archive work in client folders. \n 3 M. J. 
Staggs, FireEye, Network Analysis talk at CEIC 2008. \n" }, { "page_number": 363, "text": "PART | II Managing Information Security\n330\n ● Managing lab assets (installation of software, \nWindows updates, antivirus, maintaining backup \nstrategy, hardware installation, tracking hardware \nand software inventory). \n ● Some marketing work \n ● Work week will be 40 hours per week, with \noccasional weekends as needed to meet customer \ndeadlines. \n ● Deposition or testimony as needed. \n ● Occasional evenings and out of town to \naccommodate client schedules for forensic \ninvestigative work. \n ● Occasional evenings and out of town to attend \nseminars, CPE or technology classes as suggested by \nself or by manager, and marketing events. \n ● Other technology related duties as may be assigned \nby manager in the support of the company mission \nas it relates to technology or forensic technology \nmatters \n Job Description Management \n A manager in computer forensics is usually a working \nmanager. He is responsible for guiding and developing \nstaff as well as communicating requirements to the exec-\nutive level. His work duties will typically encompass \neverything mentioned in the previous description. \n Commercial Uses \n Archival, ghosting images, dd, recover lost partitions, \netc. are all applications of computer forensics at a com-\nmercial level. Data recovery embraces a great many of \nthe practices typically attributed to computer forensics. \nArchival and retrieval of information for any number of \npurposes, not just litigation, is required as is a forensic \nlevel of system knowledge. \n Solid Background \n To become a professional practitioner of computer foren-\nsics, there are three requirements to a successful career. \nCertainly there are people, such as many who attain \ncertification through law-enforcement agencies, that \nhave skipped or bypassed completely the professional \nexperience or scientific training necessary to be a true \ncomputer forensic scientist. That is not to degrade the law-\nenforcement forensic examiner. His mission is tradition-\nally quite different from that of a civilian, and these pros \nare frighteningly adept and proficient at accomplishing \ntheir objective, which is to locate evidence of criminal \nconduct and prosecute in court. Their lack of education \nin the traditional sense should never lead one to a des-\nultory conclusion. No amount of parchment will ever \nbroaden a mind; the forensic examiner must have a broad \nmind that eschews constraints and boxed-in thinking. \n The background needed for a successful career in \ncomputer forensics is much like that of any other except \nthat, as a testifying expert, publication will give greater \ncredence to a testimony than even the most advanced \npedigree. The exception would be the computer foren-\nsic scientist who holds a Ph. D. and happened to write \nher doctoral thesis on just the thing that is being called \ninto question on the case. Interestingly, at this time, this \nauthor has yet to meet anyone with a Ph. D. (or any uni-\nversity degree for that matter), in computer forensics. We \ncan then narrow down the requirements to these three \nitems. Coincidentally, these are also the items required \nto qualify as an expert witness in most courts: \n ● Education \n ● Programming and Experience \n ● Publications \n The weight of each of these items can vary. 
To what \ndegree depends on who is asking, but suffice it to say \nthat a deficiency in any area may be overcome by \nstrengths in the other two. The following sections pro-\nvide a more in-depth view of each requirement. \n Education/Certification \n A strong foundation at the university level in mathemat-\nics and science provides the best mental training that can \nbe obtained in computer forensics. Of course, anyone \nwith an extremely strong understanding of computers \ncan surpass and exceed any expectations in this area. Of \nspecial consideration are the following topics. The best \nforensic examiner has a strong foundation in these areas \nand can qualify not just as a forensic expert with limited \nability to testify as to the functions and specific mechan-\nical abilities of software, but as a computer expert who \ncan testify to the many aspects of both hardware and \nsoftware. \n Understand how database technologies, including \nMS SQL, Oracle, Access, My SQL, and others , interact \nwith applications and how thin and fat clients interact \nand transfer data. Where do they store temporary files? \nWhat happens during a maintenance procedure? How are \nindexes built, and is the database software disk aware? \n" }, { "page_number": 364, "text": "Chapter | 19 Computer Forensics\n331\n Programming and Experience \n Background in computer programming is an essential \npiece. The following software languages must be under-\nstood by any well-rounded forensic examiner: \n ● Java \n ● JavaScript \n ● ASP/.NET \n ● HTML \n ● XML \n ● Visual Basic \n ● SQL \n Develop a familiarity with the purpose and operation \nof technologies that have not become mainstream but \nhave a devoted cult following. At one time such things \nas virtualization, Linux, and even Windows lived on the \nbleeding edge, but from being very much on the “ fringe ” \nhave steadily become more mainstream. \n ● If it runs on a computer and has an installed base of \ngreater than 10,000 users, it is worth reviewing. \n ● Internet technologies should be well understood. \nJavaScript, Java, HTML, ASP, ASPRX, cold fusion, \ndatabases, etc. are all Internet technologies that \nmay end up at the heart of a forensic examiner’s \ninvestigation. \n ● Experience. Critical to either establishing oneself in \na career as a corporate computer forensic examiner \nor as a consultant, experience working in the field \nprovides the confidence and knowledge-base needed \nto successfully complete a forensic examination. \nFrom cradle to grave, from the initial interviews with \nthe client to forensic collection, examination, report-\ning, and testifying, experience will guide every step. \nNo suitable substitute exists. Most forensic examin-\ners come into the career later in life after serving as \na network or software consultant. Some arrive in this \nfield after years in law enforcement where almost \nanyone who can turn on a computer winds up taking \nsome computer forensic training. \n Communications \n ● Computer forensics is entirely about the ability to \nlook at a computer system and subsequently explain, \nin plain English, the analysis. 
A typical report may \nconsist of the following sections: \n – Summary \n – Methodology \n – Narrative \n – Healthcare information data on system \n – User access to credit-card numbers \n – Date range of possible breach and order handlers \n – Distinct list of operators \n – Russell and Crist Handlers \n – All other users logged in during Crist/Russel \nLogins \n ● Login failures activity coinciding with account \nactivity \n ● All Users \n – User access levels possible ingress/egress \n – Audit trail \n – Login failures \n – Conclusion \n – Contacts/examiners \n Each section either represents actual tables, images, \nand calculated findings, or it represents judgments, \nimpressions, and interpretations of those findings. \nFinally, the report should contain references to contacts \ninvolved in the investigation. A good report conveys \nthe big picture, and translates findings into substan-\ntial knowledge without leaving any trailing questions \nasked and unanswered. Sometimes findings are arrived \nat through a complex procedure such as a series of \nSQL queries. Conclusions that depend on such find-\nings should be as detailed as necessary so that opposing \nexperts can reconstruct the findings without difficulty. \n Almost any large company requires some measure \nof forensic certified staff. Furthermore, the forensic col-\nlection and ediscovery field continues to grow. Virtually \nevery branch of law enforcement — FBI, CIA, Homeland \nSecurity, and state and local agencies — all use compu-\nter forensics to some degree. Accounting firms and law \nfirms of almost any size greater than 20 need certified \nforensic and ediscovery specialists that can both support \ntheir forensic practice areas as well as grow business. \n Publications \n Publishing articles in well-known trade journals goes a \nlong way toward establishing credibility. The following \nthings are nearly always true: \n ● A long list of publications not only creates in a \njury the perception that the expert possess special \nknowledge that warrants publication; it also \nshows the expert’s ability to communicate. Articles \npublished on the Internet typically do not count \nunless they are for well-known publications that have \na printed publication as well as an online magazine. \n" }, { "page_number": 365, "text": "PART | II Managing Information Security\n332\n ● Publishing in and of itself creates a certain amount of \nrisk. Anything that an expert writes or says or posts \nonline may come back to haunt him in court. Be sure \nto remember to check, double check, and triple check \nanything that could be of questionable interpretation. \n ● When you write, you get smarter. Writing forces an \nauthor to conduct research and refreshes the memory \non long unused skills. \n Getting published in the first place is perhaps the \nmost difficult task. Make contact with publishers and \neditors at trade shows. Ask around, and seek to make \ncontact and establish relationships with published \nauthors and bloggers. Most important, always seek to \ngain knowledge and deeper understanding of the work. \n 9. TESTIFYING AS AN EXPERT \n Testifying in court is difficult work. As with any type of \nperformance, the expert testifying must know her mate-\nrial, inside and out. She must be calm and collected \nand have confidence in her assertions. Often, degrees \nof uncertainty may exist within a testimony. It is the \nexpert’s duty to convey those “ gray ” areas with clarity \nand alacrity. 
She must be able to confidently speak of \nthings in terms of degrees of certainty and clear prob-\nabilities, using language that is accessible and readily \nunderstood by the jury. \n In terms of degrees of certainty, often we find our-\nselves discussing the “ degree of difficulty ” of per-\nforming an operation. This is usually when judges ask \nwhether or not an operation has occurred through direct \nuser interaction or through an automated, programmatic, \nor normal maintenance procedure. For example, it is \nwell within the normal operation of complex database \nsoftware to reindex or compact its tables and reorganize \nthe way the data is arranged on the surface of the disk. It \nis not, however, within the normal operation of the data-\nbase program to completely obliterate itself and all its \nprogram, help, and system files 13 times over a period \nof three weeks, all in the time leading up to requests for \ndiscovery from the opposition. Such information, when \nforensically available, will then be followed by the ques-\ntion of “ Can we know who did it? ” And that question, if \nthe files exist on a server where security is relaxed, can \nbe nearly impossible to answer. \n Degrees of Certainty \n Most computer forensic practitioners ply their trade in \ncivil court. A typical case may involve monetary damages \nor loss. From a computer forensics point of view, evi-\ndence that you have extracted from a computer may be \nused by the attorneys to establish liability, that the plain-\ntiff was damaged by the actions of the defendant. Your \nwork may be the lynchpin of the entire case. You cannot \nbe wrong. The burden to prove the amount of damages is \nless stringent once you’ve established that damage was \ninflicted, and since a single email may be the foundation \nfor that proof, its provenance should prevail under even \nthe most expert scrutiny. Whether or not the damage \nwas inflicted may become a point of contention that the \ndefense uses to pry and crack open your testimony. \n The following sections may prove useful in your \nanswers. The burden of proof will fall on the defense to \nshow that the alleged damages are not accurate. There \nare three general categories of “ truth ” that can be used \nto clarify for a judge, jury, or attorney the weight of evi-\ndence. See the section on “ Rules of Evidence ” for more \non things such as relevance and materiality. \n Generally True \n Generally speaking, something is generally true if under \nnormal and general use the same thing always occurs. \nFor example, if a user deletes a file, generally speaking \nit will go into the recycle bin. This is not true if: \n ● The user holds down a Shift key when deleting \n ● The recycle bin option “ Do not move files to \nthe recycle bin. Remove files immediately when \ndeleted, ” is selected \n ● An item is deleted from a server share or from \nanother computer that is accessing a local user share \n Reasonable Degree of Certainty \n If it smells like a fish and looks like a fish, gener-\nally speaking, it is a fish. However, without dissec-\ntion and DNA analysis, there is the possibility that it is \na fake, especially if someone is jumping up and down \nand screaming that it is a fake. Short of expensive test-\ning, one may consider other factors. Where was the fish \nfound? Who was in possession of the fish when it was \nfound? We begin to rely on more than just looking at the \nfish to see if it is a fish. \n Computer forensic evidence is much the same. 
For \nexample, in an employment dispute, an employee may \nbe accused of sending sensitive and proprietary docu-\nments to her personal Webmail account. The employer \nintroduces forensic evidence that the files were sent from \nher work email account during her period of employ-\nment on days when she was in the office. \n" }, { "page_number": 366, "text": "Chapter | 19 Computer Forensics\n333\n Pretty straightforward, right? Not really. Let’s go \nback in time to two months before the employee was \nfired. Let’s go back to the day after she got a very bad \nperformance review and left for the day because she was \nso upset. Everyone knew what was going on, and they \nknew that her time is limited. Two weeks later she filed \nan EEOC complaint. The IT manager in this organiza-\ntion, a seemingly mild-mannered, helpful savant, was \ngetting ready to start his own company as a silent part-\nner in competition with his employer. He wanted infor-\nmation. His partners want information in exchange for \na 20% stake. As an IT manager, he had administrative \nrights and could access the troubled employee’s email \naccount and began to send files to her Webmail account. \nAs an IT administrator, he had system and network \naccess that would easily allow him to crack her Webmail \naccount and determine the password. All he had to do \nwas spoof her login page and store it on a site where he \ncould pick it up from somewhere else. If he was par-\nticularly interested in keeping his own home and work \nsystems free of the files, he could wardrive her house, \nhack her home wireless (lucky him, it is unsecured), and \nthen use terminal services to access her home computer, \nlog into her Webmail account (while she is home), and \nview the files to ensure that they appear as “ read. ” This \nscenario may seem farfetched but it is not; this is not \nhard to do. \n For anyone with the IT manager’s level of knowl-\nedge, lack of ethics, and opportunity, this is likely the \n only way that he would go about stealing information. \nThere is no other way that leaves him so completely out \nof the possible running of suspects. \n There is no magic wand in computer forensics. The \ndata is either there or it isn’t, despite what Hollywood \nsays about the subject. If an IT director makes the brash \ndecision to reinstall the OS on a system that has been \ncompromised and later realizes he might want to use a \nforensic investigator to find out what files were viewed, \nstolen, or modified, he can’t just dial up his local foren-\nsics tech and have him pop in over tea, wave his magic \nwand, and recover all the data that was overwritten. \nHere is the raw, unadulterated truth: If you have 30GB \nof unallocated clusters and you copy the DVD Finding \nNemo onto the drive until the disk is full, nobody (and I \nreally mean this), nobody will be able to extract a com-\nplete file from the unallocated clusters. Sure, they might \nfind a couple of keyword hits in file slack and maybe, \njust maybe, if the stars align and Jupiter is in retrograde \nand Venus is rising, maybe they can pull a tiny little \ncomplete file out of the slack or out of the MFT. Small \nfiles, less than 128 bytes, are stored directly in the MFT \nand won’t ever make it out to allocated space. This can \nbe observed by viewing the $MFT file. \n When making the determination “ reasonable degree \nof forensic certainty, ” all things must be considered. 
\nEvery possible scenario that could occur must flash \nbefore the forensic practitioner’s eyes until only the most \nreasonable answer exists, an answer that is supported by \nall the evidence, not just part of it. This is called inter-\npretation , and it is a weigh of whether or not a prepon-\nderance of evidence actually exists. The forensic expert’s \njob is not to decide whether a preponderance of evidence \nexists. His job is to fairly, and truthfully, present the facts \nand his interpretation of the individual facts. Questions \nan attorney might ask a computer forensics practitioner \non the stand: \n ● Did the user delete the file? \n ● Could someone else have done it? \n ● Could an outside process have downloaded the file to \nthe computer? \n ● Do you know for certain how this happened? \n ● Did you see Mr. Smith delete the files? \n ● How do you know he did? \n ● Isn’t it true, Mr. Expert, that you are being paid to be \nhere today? \n These are not “ Yes or no ” questions. Here might be the \nanswers: \n ● I’m reasonably certain he did. \n ● Due to the security of this machine, it’s very unlikely \nthat someone else did it. \n ● There is evidence to strongly indicate that the photos \nwere loaded to the computer from a digital camera, \nnot the Internet. \n ● I am certain, without doubt, that the camera in \nExhibit 7a is the same camera that was used to take \nthese photos and that these photos were loaded to the \ncomputer while username CSMITH was logged in. \n ● I have viewed forensic evidence that strongly \nsuggests that someone with Mr. Smith’s level of \naccess and permissions to this system did, in fact, \ndelete these files on 12/12/2009. \n ● I’m not sure I didn’t just answer that. \n ● I am an employee of The Company. I am receiv-\ning my regular compensation and today is a normal \nworkday for me. \n Be careful, though. Reticence to answer in a yes-or-\nno fashion may be interpreted by the jury as uncertainty. \nIf certainty exists, say so. But it is always better to be \nhonest and admit uncertainty than to attempt to inflate \none’s own ego by expressing certainty where none \n" }, { "page_number": 367, "text": "PART | II Managing Information Security\n334\nexists. Never worry about the outcome of the case. You \ncan’t care about the outcome of the trial. Guilt or inno-\ncence cannot be a factor in your opinion. You must focus \non simply answering the questions in front of you that \ndemand to be answered. \n Certainty without Doubt \n Some things on a computer can happen only in one fash-\nion. For example, if a deleted item is found in the recy-\ncle bin, there are steps that must be taken to ensure that \n “ it is what it is ” : \n ● Was the user logged in at the time of deletion? \n ● Are there link files that support the use or viewing \nof the file? \n ● Was the user with access to the user account at work \nthat day? \n ● What are the permissions on the folder? \n ● Is the deleted file in question a system file or a \nuser-created file located in the user folders? \n ● Is the systems administrator beyond reproach or \noutside the family of suspects? \n If all of these conditions are met, you may have \narrived at certainty without doubt. Short of reliable wit-\nnesses paired with physical evidence, it is possible that \nthere will always be other explanations, that uncertainty \nto some degree can exist. 
It is the burden of the defense \nand the plaintiff to understand and resolve these issues \nand determine if, for example, hacker activity is a plau-\nsible defense. I have added the stipulation of reliable \nwitnesses because, in the hands of a morally corrupt \nforensic expert with access to the drive and a hex edi-\ntor, a computer can be undetectably altered. Files can be \ncopied onto a hard drive in a sterile manner that, to most \ntrained forensic examiners, could appear to be original. \nEven advanced users are capable of framing the evi-\ndence in such a way as to render the appearance of best \nevidence, but a forensic examination may lift the veil \nof uncertainty and show that data has been altered, or \nthat something isn’t quite right. For example, there are \ncertain conditions that can be examined that can show \nwith certainty that the time stamps on a file have been \nsomehow manipulated or are simply incorrect, and it is \nlikely that the average person seeking to plant evidence \nwill not know these tricks (see Table 19.4 ). The dates \non files that were overwritten may show that “ old ” files \noverwrote newer files. This is impossible. \n As shown here, the file kitty.jpg overwrites files that \nappear to have been created on the system after it. Such \nan event may occur when items are copied from a CD-\nROM or unzipped from a zip file. \n 10. BEGINNING TO END IN COURT \n In most courts in the world, the accuser goes first and \nthen the defense presents its case. This is for the very \nlogical reason that if the defense goes first, nobody \nwould know what they are talking about. The Boulder \nBar has a Bar manual located at www.boulder-bar.org 4 \nthat provides a more in-depth review of the trial process \nthan can be presented here. Most states and federal rules \nare quite similar, but nothing here should be taken as \nlegal advice; review the rules for yourself for the courts \nwhere you will be testifying. The knowledge isn’t neces-\nsary, but the less that you seem to be a fish out of water, \nthe better. This section does not replace true legal advice, \nit is strictly intended for the purpose of education. The \nmanual located at the Boulder Bar Web site was created \nfor a similar reason that is clearly explained on the site. \nThe manual there was specifically developed without \nany “ legalese, ” which makes it very easy to understand. \n Defendants, Plaintiffs, and Prosecutors \n When someone, an individual or an organization, \ndecides it has a claim of money or damages against \nanother individual or entity, they file a claim in court. \n 4 Boulder Bar, www.boulder-bar.org/bar_media/index.html . \n TABLE 19.4 Example of a Forensic View of Files Showing Some Alteration or Masking of Dates \n Filename \n Date Created \n File Properties \n Original Path \n Kitty.jpg \n 5/12/2007 \n File, Archive \n \n summaryReport.doc \n 7/12/2007 \n File, Deleted, Overwritten \n Kitty.jpg \n Marketing_flyer.pdf \n 2/12/2008 \n File, Deleted, Overwritten \n Kitty.jpg \n" }, { "page_number": 368, "text": "Chapter | 19 Computer Forensics\n335\nThe group filing the claim is the plaintiff, the other par-\nties are the defendants. Experts may find themselves \nworking for strictly defendants, strictly plaintiffs, or a \nlittle bit of both. In criminal court, charges are “ brought ” \nby an indictment, complaint, information, or a summons \nand complaint. \n Pretrial Motions \n Prior to the actual trial, there may be many, many pretrial \nmotions and hearings. 
When a motion is filed, such as \nwhen the defense in a criminal case is trying to prevent \ncertain evidence from being seen or heard by the jury, \na hearing is called and the judge decides whether the \nmotion has any merit. In civil court, it may be a hearing \nto decide whether the defense has been deleting, with-\nholding, or committing other acts of discovery abuse. \nComputer forensic practitioners will find that they may \nbe called to testify at any number of hearings prior to the \ntrial, and then they may not be needed at the trial at all, \nor the defense and plaintiff may reach a settlement and \nthere will be no trial at all. \n Trial: Direct and Cross-Examination \n Assuming that there is no settlement or plea agreement, \na case will go to trial. The judge will first ask the pros-\necutor or plaintiff whether they want to make an open-\ning statement. Then the defense will be asked. Witnesses \nwill be called, and if it is the first time appearing in the \nparticular trial as an expert, the witness will be qualified, \nas discussed in a moment. The party putting on the wit-\nness will conduct a direct examination and the other side \nwill very likely cross-examine the witness afterward. \nFrequently the two sides will have reviewed the expert \nreport and will ask many questions relating directly to it. \n “ Tricks ” may be pulled by either attorney at this point. \nCertain prosecutors have styles and techniques that are \nused in an attempt to either rattle the expert’s cage or \nsimply intimidate him into a state of nervousness so that \nhe will appear uncertain and unprepared. These prosecu-\ntors are not interested in justice. They are interested in \nwinning because their career rotates around their record \nof wins/losses. \n Rebuttal \n After a witness has testified and undergone direct exami-\nnation and cross-examination, the other side may decide \nto bring in an expert to discuss or refute. The defendant \nmay then respond to the prosecution’s witness in rebut-\ntal . In criminal court, when the state or the government \n(sometimes affectionately referred to by defense attor-\nneys as “ The G ” ) brings a case against an individual or \norganization, the attorneys that prosecute the case are \ncalled the state’s attorney, district attorney, assistant U.S. \nattorney (AUSA), or simply prosecutor. \n Surrebuttal \n This is the plaintiff (or prosecutor’s!) response to rebut-\ntal. Typically the topics of surrebuttal will be limited to \nthose topics that are broached in rebuttal, but the rules \nfor this are probably best left to attorneys to decipher; \nthis author has occasionally been asked questions that \nshould have been deemed outside the bounds of the \nrebuttal. This could most likely be attributed to a lack of \ntechnical knowledge on the part of the attorneys involved \nin the case. \n Testifying: Rule 702. Testimony by Experts \n Rule 702 is a federal rule of civil procedure that governs \nexpert testimony. 
The judge is considered the “ gate-\nkeeper, ” and she alone makes the decision as to whether \nor not the following rule is satisfied: \n If scientific, technical, or other specialized knowledge will \nassist the trier of fact to understand the evidence or to \ndetermine a fact in issue, a witness qualified as an expert \nby knowledge, skill, experience, training, or education, may \ntestify thereto in the form of an opinion or otherwise, if (1) \nthe testimony is based upon sufficient facts or data, (2) the \ntestimony is the product of reliable principles and methods, \nand (3) the witness has applied the principles and methods \nreliably to the facts of the case (U.S. Courts, Federal Rules \nof Evidence, Rule 702) . \n There are certain rules for qualifying as an expert. In \ncourt, when an expert is presented, both attorneys may \nquestion the expert on matters of background and exper-\ntise. This process is referred to as qualification and, if \nan “ expert ” does not meet the legal definition of expert, \nhe may not be allowed to testify. This is a short bullet \nlist of items that will be asked in qualifying as an expert \nwitness: \n ● How long have you worked in the field? \n ● What certifications do you hold in this field? \n ● Where did you go to college? \n ● Did you graduate? What degrees do you have? \n ● What are your publications? \n" }, { "page_number": 369, "text": "PART | II Managing Information Security\n336\n You may also be asked if you have testified in other \nproceedings. It is important to always be honest when \non the stand. Perjury is a very serious crime and can \nresult in jail time. All forensics experts should famil-\niarize themselves with Federal Rules 701 – 706 as well \nas understand the purpose, intent, and results of a suc-\ncessful Daubert challenge, wherein an expert’s opinion \nor the expert himself may be challenged and, if certain \ncriteria are met, may have his testimony thrown out. It is \naccepted and understood that experts may have reason-\nably different conclusions given the same evidence. \n When testifying, stay calm (easier said than done). \nIf you’ve never done it, approaching the bench and tak-\ning the witness stand may seem like a lot of fun. In a \ncase where a lot is on the line and it all depends on the \nexpert’s testimony, nerves will shake and cages will be \nrattled. The best advice is to stay calm. Drink a lot of \nwater. Drinking copious amounts of water will, accord-\ning to ex-Navy Seal Mike Lukas, dilute the affect of \nadrenalin in the bloodstream. \n Testifying in a stately and austere court of law may \nseem like it is the domain of professors and other ivory \ntower enthusiasts. However, it is something that pretty \nmuch anyone can do that has a valuable skill to offer, \nregardless of educational background. Hollywood often \npaints a picture of the expert witness as a consummate \nprofessorial archetype bent on delivering “ just the facts. ” \nIt is true that expert witnesses demand top dollar in the \nconsulting world. Much of this is for the reason that a \ngreat deal is at stake once the expert takes the stand. \nThere is also a very high inconvenience factor. When on \nthe stand for days at a time, one can’t simply take phone \ncalls, respond to emails, or perform other work for other \nclients. There is a certain amount of business interrup-\ntion that the fees must make up for somehow. \n Testifying is interesting. 
Distilling weeks of intense, \ntechnical investigation into a few statements that can be \nunderstood by everyone in the courtroom is no small task. \nIt is a bit nerve wracking, and one can expect to have high \nblood pressure and overactive adrenalin glands for the day. \nDrinking lots of water will ease the nerves better than any-\nthing (nonpharmaceutical). It can have other side effects, \nhowever, but it is better to be calm and ask the judge for \na short recess as needed than to have shaky hands and a \ntrembling voice from nerves. Caffeine is a bad idea. \n Correcting Mistakes: Putting Your Head in \nthe Sand \n The interplay of examiner and expert in the case of \ncomputer forensics can be difficult. Often the forensic \nexaminer can lapse into speech and explanations that \nare so commonplace to her that she doesn’t realize she \nis speaking fluent geek-speak. The ability to understand \nhow she sounds to someone who doesn’t understand \ntechnology must be cultivated. Practice by explaining to \nany six-year-old what it is you do. \n Direct Testimony \n Under direct questioning, your attorney will ask ques-\ntions to which you will know the answer. A good \nexpert will, in fact, have prepared a list of questions and \nreviewed the answers with the attorney and explained the \ncontext and justification for each question thoroughly. If \nit is a defense job, you will likely go first and testify as \nto what your investigation has revealed. Avoid using big \nwords and do make some eye contact with the jury if you \nfeel you need to explain something to them, but gener-\nally speaking, you should follow the attorney’s lead. If \nshe wants you to clarify something for the jury, you \nshould do so, and at that time you should look at the jury. \nGenerally, the rule of thumb is to look at the attorney who \nasked you the question. If the judge says, “ Please explain \nto the jury . . . ” then by all means, look at the jury. \n Cross-Examination \n The purpose of a cross-examination is to get the testi-\nfying expert to make a mistake or to discredit him. \nSometimes (rarely) it is actually used to further under-\nstand and clarify things that were discussed in direct. In \nmost cases, the attorney will do this by asking you ques-\ntions about your experience or about the testimony the \nexpert gave under direct. But there is another tactic that \nattorneys use. They ask a question that is completely \nunrelated to the topic you talked about. They know that \nthe vast majority of time you spend is on the issues in \nyour direct. For example, you may give a testimony \nabout Last Accessed time stamps. Your entire testimony \nmay be about Last Accessed time stamps. It may be the \nonly issue in the case you are aware of. Then, on cross, \nthe attorney asks a question about the behavior of the \nmail icon that appears on each line next to the subject \nline in an email. “ Great, ” the expert thinks. “ They recog-\nnize my expertise and are asking questions. ” \n Stop. They are about to ask you an arcane question \nabout a behavior in a piece of software in the hopes that \nyou are overconfident and will answer from the hip and \nget the answer wrong. Because if you get this wrong, \nthen everything else you said must be wrong, too. \n Whenever you are asked a question that does not relate \n" }, { "page_number": 370, "text": "Chapter | 19 Computer Forensics\n337\nto your previous testimony, pause. Pause for a long time. \nGive your attorney time to object. 
He might not know \nthat he should object, but the fact is that you might get \nthe answer wrong and even if there is no doubt in your \nmind that you know the answer, you should respond that \nyou had not prepared to answer that question and would \nlike to know more details. For example, what is the ver-\nsion of the software? What is the operating system? \nWhat service pack? If it is Office, what Office service \npack? You need to make it clear that you need more \ninformation before you answer the question because, \nfrankly, if the opposition goes down this road, they will \ntry to turn whatever you say into the wrong answer. \n As a forensic examiner, you may find yourself think-\ning that the reason they are asking these questions in \nsuch a friendly manner is because they forgot to ask their \nown expert and are trying to take advantage of your time \nbecause possibly this came up in a later conversation. \nThis could very well be the case. Maybe the counselor \nis not trying to trip up the expert. Maybe the Brooklyn \nBridge can be purchased for a dollar, too. \n What is the best response to a question like this? If \nyou give it a 10 count and your attorney hasn’t objected, \nand the asking attorney has not abandoned the ques-\ntion, you may have to answer. There are many schools \nof thought on this. It is best to understand that a number \nof responses can facilitate an answer to difficult ques-\ntions. One such response might be, “ That is not really \na forensics question ” (if it’s not), or “ I’m not sure how \nthat question relates back to the testimony I just gave. ” \nOr, if it is a software question, you can say, “ Different \nsoftware behaves differently, and I don’t think I can \nanswer that question without more details. I don’t typi-\ncally memorize the behavior of every piece of software, \nand I’m afraid that if I answer from memory I may not \nbe certain. ” At the end of the day, the expert should only \nspeak to what he knows with certainty. There is very \nlittle room for error. Attorneys can back-pedal a cer-\ntain amount to “ fix ” a mistake, but a serious mistake \ncan follow you for a very long time and can hamper \nfuture testimony. For example, if you make statements \nabout time stamps in one trial, and then in the next trial \nyou make statements that interpret them differently, \nthere is a good chance that the opposition will use this \nagainst you. \n Fortunately, when computer forensic experts testify \nin a defense trial, the testimony can last a number of \nhours. Great care is usually taken by the judge to ensure \nthat understanding of all the evidence is achieved. This \ncan create a very long transcript that is difficult to read, \nunderstand, and recall with accuracy. For this reason, \nrarely will bits and pieces of testimony be used against \na testifying expert in a future trial. This is not, of course, \nto say that it can’t or won’t happen. \n It is important, in terms of both setting expectations \nand understanding legal strategy, for an expert wit-\nness to possess passable knowledge of the trial process. \nStrong familiarity with the trial process can benefit both \nthe expert and the attorneys as well as the judge and the \ncourt reporter. \n" }, { "page_number": 371, "text": "This page intentionally left blank\n" }, { "page_number": 372, "text": "339\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. 
All rights of reproduction in any form reserved.\n2009\n Network Forensics \n Yong Guan \n Iowa State University \n Chapter 20 \n Today’s cyber criminal investigator faces a formida-\nble challenge: tracing network-based cyber criminals. \nThe possibility of becoming a victim of cyber crime \nis the number-one fear of billions of people. This con-\ncern is well founded. The findings in the annual CSI/\nFBI Computer Crime and Security Surveys confirm \nthat cyber crime is real and continues to be a signifi-\ncant threat. Traceback and attribution are performed \nduring or after cyber violations and attacks, to identify \nwhere an attack originated, how it propagated, and what \ncomputer(s) and person(s) are responsible and should be \nheld accountable. The goal of network forensics capabil-\nities is to determine the path from a victimized network \nor system through any intermediate systems and commu-\nnication pathways, back to the point of attack origina-\ntion or the person who is accountable. In some cases, the \ncomputers launching an attack may themselves be com-\npromised hosts or be controlled remotely. Attribution is \nthe process of determining the identity of the source of a \ncyber attack. Types of attribution can include both digital \nidentity (computer, user account, IP address, or enabling \nsoftware) and physical identity (the actual person using \nthe computer from which an attack originated). \n Cyber crime has become a painful side effect of the \ninnovations of computer and Internet technologies. With \nthe growth of the Internet, cyber attacks and crimes are \nhappening every day and everywhere. It is very impor-\ntant to build the capability to trace and attribute attacks \nto the real cyber criminals and terrorists, especially in \nthis large-scale human-built networked environment. \n In this chapter, we discuss the current network forensic \ntechniques in cyber attack traceback. We focus on the cur-\nrent schemes in IP spoofing traceback and stepping-stone \nattack attribution. Furthermore, we introduce the traceback \nissues in Voice over IP, Botmaster, and online fraudsters. \n 1. SCIENTIFIC OVERVIEW \n With the phenomenal growth of the Internet, more and \nmore people enjoy and depend on the convenience of \nits provided services. The Internet has spread rapidly \nalmost all over the world. Up to December 2006, the \nInternet had been distributed to over 233 countries and \nworld regions and had more than 1.09 billion users. 1 \nUnfortunately, the wide use of computers and the \nInternet also opens doors to cyber attackers. There are \ndifferent kinds of attacks that an end user of a compu-\nter or Internet can meet. For instance, there may be vari-\nous viruses on a hard disk, several backdoors open in an \noperating system, or a lot of phishing emails in an email-\nbox. According to the annual Computer Crime Report \nof the Computer Security Institute (CSI) and the U.S. \nFederal Bureau of Investigation (FBI), released in 2006, \ncyber attacks cause massive money losses each year. \n However, the FBI/CSI survey results also showed \nthat a low percentage of cyber crime cases have been \nreported to law enforcement (in 1996, only 16%; in \n2006, 25%), which means that in reality, the vast major-\nity of cyber criminals are never caught or prosecuted. \nReaders may ask why this continues to happen. Several \nfactors contribute to this fact: \n ● In many cases, businesses are often reluctant to report \nand publicly discuss cyber crimes related to them. 
The \nconcern of negative publicity becomes the number-\none reason because it may attract other cyber attack-\ners, undermine the confidence of customers, suppliers, \nand investors, and invite the ridicule of competitors. \n ● Generally, it is much harder to detect cyber crimes \nthan crimes in the physical world. There are \n 1 Internet World Stats, www.internetworldstats.com . \n" }, { "page_number": 373, "text": "PART | II Managing Information Security\n340\nvarious antiforensics techniques that can help cyber \ncriminals evade detection, such as information-\nhiding techniques (steganography, covert channels), \nanonymity proxies, stepping stones, and botnets. Even \nmore challenging, cyber criminals are often insiders \nor employees of the organizations themselves. \n ● Attackers may walk across the boundaries of mul-\ntiple organizations and even countries. To date, the \nlack of effective solutions has significantly hindered \nefforts to investigate and stop the rapidly growing \ncyber criminal activities. It is therefore crucial to \ndevelop a forensically sound and efficient solution to \ntrack and capture these criminals. \n Here we discuss the basic principles and some specific \nforensic techniques in attributing real cyber criminals. \n 2. THE PRINCIPLES OF NETWORK \nFORENSICS \n Network forensics can be generally defined as a science \nof discovering and retrieving evidential information in \na networked environment about a crime in such a way \nas to make it admissible in court. Different from intru-\nsion detection, all the techniques used for the purpose of \nnetwork forensics should satisfy both legal and techni-\ncal requirements. For example, it is important to guar-\nantee whether the developed network forensic solutions \nare practical and fast enough to be used in high-speed \nnetworks with heterogeneous network architecture and \ndevices. More important, they need to satisfy general \nforensics principles such as the rules of evidence and \nthe criteria for admissibility of novel scientific evidence \n(such as the Daubert criteria). 2 , 3 , 4 The five rules are that \nevidence must be: \n ● Admissible . Must be able to be used in court or \nelsewhere. \n ● Authentic . Evidence relates to incident in relevant way. \n ● Complete . No tunnel vision, exculpatory evidence \nfor alternative suspects. \n ● Reliable . No question about authenticity and \nveracity. \n ● Believable . Clear, easy to understand, and believable \nby a jury. \n The evidence and the investigative network forensics \ntechniques should satisfy the criteria for admissibility of \nnovel scientific evidence ( Daubert v. Merrell ): \n ● Whether the theory or technique has been reliably \ntested \n ● Whether the theory or technique has been subject to \npeer review and publication \n ● What is the known or potential rate of error of the \nmethod used? \n ● Whether the theory or method has been generally \naccepted by the scientific community \n The investigation of a cyber crime often involves \ncases related to homeland security, corporate espionage, \nchild pornography, traditional crime assisted by compu-\nter and network technology, employee monitoring, or \nmedical records, where privacy plays an important role. \n There are at least three distinct communities within \ndigital forensics: law enforcement, military, and busi-\nness and industry, each of which has its own objectives \nand priorities. 
For example, prosecution is the primary \nobjective of the law enforcement agencies and their prac-\ntitioners and is often done after the fact. Military opera-\ntions ’ primary objective is to guarantee the continuity of \nservices, which often have strict real-time requirements. \nBusiness and industry’s primary objectives vary signifi-\ncantly, many of which want to guarantee the availability \nof services and put prosecution as a secondary objective. \n Usually there are three types of people who use \ndigital evidence from network forensic investigations: \npolice investigators, public investigators, and private \ninvestigators. The following are some examples: \n ● Criminal prosecutors. Incriminating documents \nrelated to homicide, financial fraud, drug-related \nrecords. \n ● Insurance companies. Records of bill, cost, services \nto prove fraud in medical bills and accidents. \n ● Law enforcement officials. Require assistance in \nsearch warrant preparation and in handling seized \ncomputer equipment. \n ● Individuals. To support a possible claim of \nwrongful termination, sexual harassment, or age \ndiscrimination. \n The primary activities of network forensics are inves-\ntigative in nature. The investigative process encompasses \nthe following: \n ● Identification \n ● Preservation \n ● Collection \n 2 G. Palmer, “A road map for digital forensic research,” Digital \nForensic Research Workshop (DFRWS), Final Report, Aug. 2001. \n 3 C. M. Whitcomb, “An historical perspective of digital evidence: \nA forensic scientist’s view,” IJDE, 2002. \n 4 S. Mocas, “Building theoretical underpinnings for digital forensics \nresearch,” Digital Investigation , Vol. 1, pp. 61 – 68, 2004. \n" }, { "page_number": 374, "text": "Chapter | 20 Network Forensics\n341\n ● Examination \n ● Analysis \n ● Presentation \n ● Decision \n In the following discussion, we focus on several \nimportant network forensic areas. \n 3. ATTACK TRACEBACK AND \nATTRIBUTION \n When we face the cyber attacks, we can detect them and \ntake countermeasures. For instance, an intrusion detec-\ntion system (IDS) can help detect attacks; we can update \noperating systems to close potential backdoors; we can \ninstall antivirus software to defend against many known \nviruses. Although in many cases we can detect attacks and \nmitigate their damage, it is hard to find the real attackers/\ncriminals. However, if we don’t trace back to the attack-\ners, they can always conceal themselves and launch new \nattacks. If we have the ability to find and punish the attack-\ners, we believe this will help significantly reduce the attacks \nwe face every day. \n Why is traceback difficult in computer networks? \nOne reason is that today’s Internet is stateless. There is \ntoo much data in the Internet to record it all. For exam-\nple, a typical router only forwards the passed pack-\nets and does not care where they are from; a typical \nmail transfer agent (MTA) simply relays emails to the \nnext agent and never minds who is the sender. Another \nreason is that today’s Internet is almost an unauthor-\nized environment. Alice can make a VoIP call to Bob \nand pretend to be Carol; an attacker can send millions \nof emails using your email address and your mailbox \nwill be bombed by millions of replies. Two kinds of \nattacks are widely used by attackers and also interesting\nto researchers all over the world. One is IP spoofing; the \nother is the stepping-stone attack. Each IP packet header con-\ntains the source IP address. 
Using IP spoofing, an attacker \ncan change the source IP address in the header to that of a \ndifferent machine and thus avoid traceback. \n In a stepping-stone attack, the attack flow may travel \nthrough a chain of stepping stones (intermediate hosts) \nbefore it reaches the victim. Therefore, it is difficult for \nthe victim to know where the attack came from except \nthat she can see the attack traffic from the last hop of the \nstepping-stone chain. Figure 20.1 shows an example of \nIP stepping-stone attack. \n Next we introduce the existing schemes to trace back \nIP spoofing attacks, then we discuss current work on \nstepping-stone attack attribution. \n IP Traceback \n Here we review major existing IP traceback schemes \nthat have been designed to trace back to the origin of \nIP packets through the Internet. We roughly categorize \nthem into four primary classes: \n ● Active probing 5 , 6 \n ● ICMP traceback 7 , 8 , 9 \n 5 H. Burch and B. Cheswick, “ Tracing anonymous packets to their \napproximate source, ” in Proceedings of USENIX LISA 2000 , Dec. \n2000, pp. 319 – 327. \n 6 R. Stone, “ Centertrack: An IP overlay network for tracking DoS \nfl oods, ” in Proceedings of the 9th USENIX Security Symposium , Aug. \n2000, pp. 199 – 212. \n 7 S. M. Bellovin, “ICMP traceback messages,” Internet draft, 2000. \n 8 A. Mankin, D. Massey, C.-L. Wu, S. F. Wu, and L. Zhang, “ On \ndesign and evaluation of ‘ Intention-Driven ’ ICMP traceback, ” in \n Proceedings of 10th IEEE International Conference on Computer \nCommunications and Networks , Oct. 2001. \n 9 S. F. Wu, L. Zhang, D. Massey, and A. Mankin, “ Intention-driven \nICMP trace-back, ” Internet draft, 2001. \nStepping Stones\nAttacker\nVictim\nWho is the real\nattacker?\nIP Networks\nIP Networks\nIP\nNetworks\nIP Networks\nIP\nNetworks\n FIGURE 20.1 Stepping-stone attack attribution. \n" }, { "page_number": 375, "text": "PART | II Managing Information Security\n342\n ● Packet marking 10 , 11 , 12 , 13 , 14 \n ● Log-based traceback 15 , 16 , 17 , 18 \n Active Probing \n Stone 19 proposed a traceback scheme called CenterTrack , \nwhich selectively reroutes the packets in question directly \nfrom edge routers to some special tracking routers. The \ntracking routers determine the ingress edge router by \nobserving from which tunnel the packet arrives. This \napproach requires the cooperation of network administra-\ntors, and the management overhead is considerably large. \n Burch and Cheswick 20 outlined a technique for trac-\ning spoofed packets back to their actual source without \nrelying on the cooperation of intervening ISPs. The victim \nactively changes the traffic in particular links and observes \nthe influence on attack packets, and thus can determine \nwhere the attack comes from. This technique cannot work \nwell on distributed attacks and requires that the attacks \nremain active during the time period of traceback. \n ICMP Traceback (iTrace) \n Bellovin 21 proposed a scheme named iTrace to trace back \nusing ICMP messages for authenticated IP marking. In this \nscheme, each router samples (with low probability) the for-\nwarding packets, copies the contents into a special ICMP \ntraceback message, adds its own IP address as well as \nthe IP of the previous and next-hop routers, and forwards \nthe packet to either the source or destination address. By \ncombining the information obtained from several of these \nICMP messages from different routers, the victim can then \nreconstruct the path back to the origin of the attacker. 
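To make the mechanics concrete, here is a small, self-contained Python simulation of the sampling-and-reconstruction idea behind iTrace. The router addresses, path length, and sampling probability are invented for illustration; this models the principle of the scheme, not any actual router implementation:

# Toy model of ICMP traceback: each router on the attack path independently
# samples forwarded packets with low probability and emits a traceback
# message naming itself and its previous and next hops. The victim chains
# the collected messages to rebuild the path toward the origin.
import random

PATH = ["10.0.0.1", "10.0.1.1", "10.0.2.1", "10.0.3.1"]   # attacker-side router first
SAMPLE_P = 0.05                                           # per-router sampling probability

def forward_packet():
    """Return the traceback messages generated while one packet traverses PATH."""
    messages = []
    for i, router in enumerate(PATH):
        if random.random() < SAMPLE_P:
            prev_hop = PATH[i - 1] if i > 0 else "attacker-side"
            next_hop = PATH[i + 1] if i < len(PATH) - 1 else "victim"
            messages.append((prev_hop, router, next_hop))
    return messages

def reconstruct(messages):
    """Chain (previous, router, next) triples from the victim back toward the source."""
    by_next = {nxt: (prev, router) for prev, router, nxt in messages}
    path, hop = [], "victim"
    while hop in by_next:
        router = by_next[hop][1]
        path.append(router)
        hop = router
    return list(reversed(path))

collected = []
for _ in range(2000):            # a flood of attack packets reaching the victim
    collected.extend(forward_packet())
print("Reconstructed path:", reconstruct(collected))

Because each router samples independently and with low probability, a large volume of attack traffic is needed before every hop on the path appears in the collected messages.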
\n A drawback of this scheme is that it is much more \nlikely that the victim will get ICMP messages from rout-\ners nearby than from routers farther away. This implies \nthat most of the network resources spent on generat-\ning and utilizing iTrace messages will be wasted. An \nenhancement of iTrace, called Intention-Driven iTrace , \nhas been proposed. 22 , 23 By introducing an extra “ intention-\nbit, ” it is possible for the victim to increase the probability \nof receiving iTrace messages from remote routers. \n Packet Marking \n Savage et al. 24 proposed a Probabilistic Packet Marking \n(PPM) scheme. Since then several other PPM-based \nschemes have been developed. 25 , 26 , 27 The baseline idea \nof PPM is that routers probabilistically write partial path \ninformation into the packets during forwarding. If the \nattacks are made up of a sufficiently large number of \npackets, eventually the victim may get enough informa-\ntion by combining a modest number of marked packets \nto reconstruct the entire attack path. This allows victims \nto locate the approximate source of attack traffic without \nrequiring outside assistance. \n 16 S. Matsuda, T. Baba, A. Hayakawa, and T. Nakamura, “ Design and \nimplementation of unauthorized access tracing system, ” in Proceedings \nof the 2002 Symposium on Applications and the Internet (SAINT 2002), \nJan. 2002. \n 17 K. Shanmugasundaram, H. Brönnimann, and N. Memon, “ Payload \nattribution via hierarchical Bloom fi lters, ” in Proceedings of the 11th \nACM Conference on Computer and Communications Security, Oct. 2004. \n 18 A.C. Snoeren, C. Partridge, L. A. Sanchez, C. E. Jones, F. Tchakountio, \nB. Schwartz, S. T. Kent, and W. T. Strayer, “ Single-packet IP traceback, ” \n IEEE/ACM Transactions on Networking , Vol. 10, No. 6, pp. 721 – 734, \nDec. 2002. \n 19 R. Stone, “ Centertrack: An IP overlay network for tracking DoS \nfl oods, ” in Proceedings of the 9th USENIX Security Symposium , Aug. \n2000, pp. 199 – 212. \n 20 H. Burch and B. Cheswick, “ Tracing anonymous packets to their \napproximate source, ” in Proceedings of USENIX LISA 2000 , Dec. \n2000, pp. 319 – 327. \n 21 S. M. Bellovin, “ICMP traceback messages,” Internet draft, 2000. \n 22 A. Mankin, D. Massey, C.-L. Wu, S. F. Wu, and L. Zhang, “ On \ndesign and evaluation of ‘ Intention-Driven ’ ICMP traceback, ” in \n Proceedings of 10th IEEE International Conference on Computer \nCommunications and Networks , Oct. 2001. \n 23 S. F. Wu, L. Zhang, D. Massey, and A. Mankin, “ Intention-driven \nICMP trace back, ” Internet draft, 2001. \n 24 S. Savage, D. Wetherall, A. Karlin, and T. Anderson, “ Network \nsupport for IP traceback, ” IEEE /ACM Transactions on Networking , \nVol. 9, No. 3, pp. 226 – 237, June 2001. \n 25 D. Song and A. Perrig, “ Advanced and authenticated marking \nschemes for IP traceback, ” in Proceedings of IEEE INFOCOM 2001, \nApr. 2001. \n 26 K. Park and H. Lee, “ On the effectiveness of probabilistic \npacket marking for IP traceback under denial of service attack, ” in \n Proceedings of IEEE INFOCOM 2001 , Apr. 2001, pp. 338 – 347. \n 27 D. Dean, M. Franklin, and A. Stubblefi eld, “ An algebraic approach \nto IP traceback, ” Information and System Security , Vol. 5, No. 2, \npp. 119 – 137, 2002. \n 10 A. Belenky and N. Ansari, “ IP traceback with deterministic packet \nmarking, ” IEEE Communications Letters , Vol. 7, No. 4, pp. 162 – 164, \nApril 2003. \n 11 D. Dean, M. Franklin, and A. 
Stubblefi eld, “ An algebraic approach \nto IP traceback, ” Information and System Security , Vol. 5, No. 2, pp. \n119 – 137, 2002. \n 12 K. Park and H. Lee, “ On the effectiveness of probabilistic \npacket marking for IP traceback under denial of service attack, ” in \n Proceedings of IEEE INFOCOM 2001 , Apr. 2001, pp. 338 – 347. \n 13 S. Savage, D. Wetherall, A. Karlin, and T. Anderson, “ Network \nsupport for IP traceback, ” IEEE /ACM Transactions on Networking , \nVol. 9, No. 3, pp. 226 – 237, June 2001. \n 14 D. Song and A. Perrig, “ Advanced and authenticated marking \nschemes for IP traceback, ” in Proceedings of IEEE INFOCOM 2001, \nApr. 2001. \n 15 J. Li, M. Sung, J. Xu, and L. Li, “ Large-scale IP traceback in high-speed \nInternet: Practical techniques and theoretical foundation, ” in Proceedings of \n2004 IEEE Symposium on Security and Privacy, May 2004. \n" }, { "page_number": 376, "text": "Chapter | 20 Network Forensics\n343\n The Deterministic Packet Marking (DPM) scheme \nproposed by Belenky and Ansari 28 involves marking each \nindividual packet when it enters the network. The packet is \nmarked by the interface closest to the source of the packet \non the edge ingress router. The mark remains unchanged \nas long as the packet traverses the network. However, \nthere is no way to get the whole paths of the attacks. \n Dean et al. 29 proposed an Algebraic Packet Marking \n(APM) scheme that reframes the traceback problem as a \npolynomial reconstruction problem and uses techniques \nfrom algebraic coding theory to provide robust meth-\nods of transmission and reconstruction. The advantage \nof this scheme is that it offers more flexibility in design \nand more powerful techniques that can be used to filter \nout attacker-generated noise and separate multiple paths. \nBut it shares similarity with PPM in that it requires a \nsufficiently large number of attack packets. \n Log-Based Traceback \n The basic idea of log-based traceback is that each router \nstores the information (digests, signature, or even the \npacket itself) of network traffic through it. Once an \nattack is detected, the victim queries the upstream rout-\ners by checking whether they have logged the attack \npacket in question. If the attack packet’s information is \nfound in a given router’s memory, that router is deemed \nto be part of the attack path. Obviously, the major chal-\nlenge in log-based traceback schemes is the storage \nspace requirement at the intermediate routers. \n Matsuda et al. 30 proposed a hop-by-hop log-based \nIP traceback method. Its main features are a logging \n packet feature that is composed of a portion of the \npacket for identification purposes and an algorithm \nusing a data-link identifier to identify the routing of a \npacket. However, for each received packet, about 60 \nbytes of data should be recorded. The resulting large \nmemory space requirement prevents this method \nfrom being applied to high-speed networks with heavy \ntraffic. \n Although today’s high-speed IP networks suggest that \nclassical log-based traceback schemes would be too pro-\nhibitive because of the huge memory requirement, log-\nbased traceback became attractive after Bloom filter-based \n(i.e., hash-based) traceback schemes were proposed. \n Bloom filters were presented by Burton H. Bloom 31 in \n1970 and have been widely used in many areas such as \ndatabase and networking. 
32 A Bloom filter is a space-\nefficient data structure for representing a set of elements \nto respond to membership queries. It is a vector of bits \nthat are all initialized to the value 0. Then each element \nis inserted into the Bloom filter by hashing it using sev-\neral independent uniform hash functions and setting the \ncorresponding bits in the vector to value 1. Given a query \nas to whether an element is present in the Bloom filter, \nwe hash this element using the same hash functions and \ncheck whether all the corresponding bits are set to 1. If \nany one of them is 0, then undoubtedly this element is \nnot stored in the filter. Otherwise, we would say that it is \npresent in the filter, although there is a certain probability \nthat the element is determined to be in the filter though it \nis actually not. Such false cases are called false positives . \n The space-efficiency of Bloom filters is achieved at \nthe cost of a small, acceptable false-positive rate. Bloom \nfilters were introduced into the IP traceback area by \nSnoeren et al. 33 They built a system named the Source \nPath Isolation Engine (SPIE), which can trace the ori-\ngin of a single IP packet delivered by the network in the \nrecent past. They demonstrated that the system is effec-\ntive, space-efficient, and implementable in current or \nnext-generation routing hardware. Bloom filters are used \nin each SPIE-equipped router to record the digests of all \npackets received in the recent past. The digest of a packet \nis exactly several hash values of its nonmutable IP header \nfields and the prefix of the payload. Strayer et al. 34 \nextended this traceback architecture to IP-v6. However, \nthe inherent false positives of Bloom filters caused by \nunavoidable collisions restrain the effectiveness of these \n 28 A. Belenky and N. Ansari, “ IP traceback with deterministic packet \nmarking, ” IEEE Communications Letters , Vol. 7, No. 4, pp. 162 – 164, \nApril 2003. \n 29 D. Dean, M. Franklin, and A. Stubblefi eld, “ An algebraic approach \nto IP traceback, ” Information and System Security , Vol. 5, No. 2, \npp. 119 – 137, 2002. \n 30 S. Matsuda, T. Baba, A. Hayakawa, and T. Nakamura, “ Design and \nimplementation of unauthorized access tracing system, ” in Proceedings \nof the 2002 Symposium on Applications and the Internet (SAINT 2002), \nJan. 2002. \n 31 B. H. Bloom, “ Space/time trade-offs in hash coding with allowable \nerrors, ” Communications of the ACM , Vol. 13, No. 7, pp. 422 – 426, July \n1970. \n 32 A. Broder and M. Mitzenmacher, “ Network applications of Bloom \nfi lters: A survey, ” Proceedings of the 40th Annual Allerton Conference \non Communication , Control, and Computing, Oct. 2002, pp. 636 – 646. \n 33 A.C. Snoeren, C. Partridge, L. A. Sanchez, C. E. Jones, \nF. Tchakountio, B. Schwartz, S. T. Kent, and W. T. Strayer, “ Single-\npacket IP traceback, ” IEEE/ACM Transactions on Networking , Vol. 10, \nNo. 6, pp. 721 – 734, Dec. 2002. \n 34 W.T. Strayer, C. E. Jones, F. Tchakountio, and R. R. Hain, “ SPIE-\nIPv6: Single IPv6 packet traceback, ” in Proceedings of the 29th IEEE \nLocal Computer Networks Conference (LCN 2004) , Nov. 2004. \n" }, { "page_number": 377, "text": "PART | II Managing Information Security\n344\nsystems. To reduce the impact of unavoidable collisions \nin Bloom filters, Zhang and Guan 35 propose a topology-\naware single-packet IP traceback system, namely TOPO. \nThe router’s local topology information, that is, its \nimmediate predecessor information, is utilized. 
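As background for the hash-based schemes discussed here, the following minimal Python sketch shows the insert and membership-query operations of a generic Bloom filter as described above; the bit-vector size, the number of hash functions, and the salting trick are arbitrary illustrative choices, not the parameters used by SPIE or TOPO.

```python
import hashlib

class BloomFilter:
    def __init__(self, num_bits=1024, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [0] * num_bits            # every position starts at 0

    def _positions(self, item):
        # Derive several independent-looking bit positions by salting one hash function.
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def insert(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1                # set every corresponding bit

    def __contains__(self, item):
        # If any bit is 0 the item was definitely never inserted;
        # if all are 1 it is *probably* present (false positives are possible).
        return all(self.bits[pos] for pos in self._positions(item))

# A router could insert a digest of each forwarded packet and later answer
# "did this packet pass through me?" queries from a traceback system.
bf = BloomFilter()
bf.insert("packet-digest-123")
assert "packet-digest-123" in bf
```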
The \nperformance analysis shows that TOPO can reduce the \nnumber and scope of unnecessary queries and signifi-\ncantly decrease false attributions. When Bloom filters are \nused, it is difficult to decide their optimal control param-\neters a priori . They designed a k -adaptive mechanism \nthat can dynamically adjust the parameters of Bloom fil-\nters to reduce the false-positive rate. \n Shanmugasundaram et al. 36 proposed a payload \nattribution system (PAS) based on a hierarchical Bloom \nfilter (HBF). HBF is a Bloom filter in which an ele-\nment is inserted several times using different parts of \nthe same element. Compared with SPIE, which is a \npacket-digesting scheme, PAS only uses the payload \nexcerpt of a packet. It is useful when the packet header \nis unavailable. \n Li et al. 37 proposed a Bloom filter-based IP trace-\nback scheme that requires an order of magnitude smaller \nprocessing and storage cost than SPIE, thereby being able \nto scale to much higher link speed. The baseline idea of \ntheir approach is to sample and log a small percentage of \npackets, and 1 bit packet marking is used in their sampling \nscheme. Therefore, their traceback scheme combines \npacket marking and packet logging together. Their simula-\ntion results showed that the traceback scheme can achieve \nhigh accuracy and scale well to a large number of attack-\ners. However, as the authors also pointed out, because of \nthe low sampling rate, their scheme is no longer capable \nof tracing one attacker with only one packet. \n Stepping-Stone Attack Attribution \n Ever since the problem of detecting stepping stones was \nfirst proposed by Staniford-Chen and Heberlein, 38 several \napproaches have been proposed to detect encrypted step-\nping-stone attacks. \n The ON/OFF based approach proposed by Zhang and \nPaxson 39 is the first timing-based method that can trace \nstepping stones, even if the traffic were to be encrypted. In \ntheir approach, they calculated the correlation of different \nflows by using each flow’s OFF periods. A flow is consid-\nered to be in an OFF period when there is no data traffic \non it for more than a time period threshold. Their approach \ncomes from the observation that two flows are in the same \nconnection chain if their OFF periods coincide. \n Yoda and Etoh 40 presented a deviation-based approach \nfor detecting stepping-stone connections. The deviation is \ndefined as the difference between the average propagation \ndelay and the minimum propagation delay of two connec-\ntions. This scheme comes from the observation that the \ndeviation for two unrelated connections is large enough \nto be distinguished from the deviation of connections in \nthe same connection chain. \n Wang et al. 41 proposed a correlation scheme using \ninterpacket delay (IPD) characteristics to detect stepping \nstones. They defined their correlation metric over the \nIPDs in a sliding window of packets of the connections \nto be correlated. They showed that the IPD characteris-\ntics may be preserved across many stepping stones. \n Wang and Reeves 42 presented an active watermark \nscheme that is designed to be robust against certain \ndelay perturbations. The watermark is introduced into a \nconnection by slightly adjusting the interpacket delays \nof selected packets in the flow. If the delay perturbation \nis not quite large, the watermark information will remain \nalong the connection chain. This is the only active stepping-\nstone attribution approach. \n Strayer et al. 
43 presented a State-Space algorithm that \nis derived from their work on wireless topology discov-\nery. When a new packet is received, each node is given \n 36 S. Savage, D. Wetherall, A. Karlin, and T. Anderson, “ Network \nsupport for IP traceback, ” IEEE /ACM Transactions on Networking , \nVol. 9, No. 3, pp. 226 – 237, June 2001. \n 37 J. Li, M. Sung, J. Xu, and L. Li, “ Large-scale IP traceback in high-speed \nInternet: Practical techniques and theoretical foundation, ” in Proceedings of \n2004 IEEE Symposium on Security and Privacy, May 2004. \n 38 S. Staniford-Chen and L. T. Heberlein, “ Holding intruders account-\nable on the Internet, ” in Proceedings of the 1995 IEEE Symposium on \nSecurity and Privacy, May 1995. \n 39 Y. Zhang and V. Paxson, “ Detecting stepping stones, ” in \n Proceedings of the 9th USENIX Security Symposium, Aug. 2000, pp. \n171¨C184. \n 40 K. Yoda and H. Etoh, “ Finding a connection chain for tracing \nintruders, ” in Proceedings of the 6th European Symposium on Research \nin Computer Security (ESORICS 2000), Oct. 2000. \n 41 X. Wang, D. S. Reeves, and S. F. Wu, “ Inter-packet delay based \ncorrelation for tracing encrypted connections through stepping stones, ” \nin Proceedings of the 7th European Symposium on Research in \nComputer Security (ESORICS 2002) , Oct. 2002. \n 42 X. Wang and D. S. Reeves, “ Robust correlation of encrypted \nattack traffi c through stepping stones by manipulation of interpacket \ndelays, ” in Proceedings of the 10th ACM Conference on Computer and \nCommunications Security (CCS 2003), Oct. 2003. \n 43 W. T. Strayer, C. E. Jones, I. Castineyra, J. B. Levin, and R. R. Hain, \n “ An integrated architecture for attack attribution, ” BBN Technologies, \nTech. Rep. BBN REPORT-8384, Dec. 2003. \n 35 L. Zhang and Y. Guan, “ TOPO: A topology-aware single packet attack \ntraceback scheme, ” in Proceedings of the 2nd IEEE Communications \nSociety/CreateNet International Conference on Security and Privacy in \nCommunication Networks (SecureComm 2006), Aug. 2006. \n" }, { "page_number": 378, "text": "Chapter | 20 Network Forensics\n345\na weight that decreases as the elapsed time from the last \npacket from that node increases. Then the connections \non the same connection chain will have higher weights \nthan other connections. \n However, none of these previous approaches can \neffectively detect stepping stones when delay and chaff \nperturbations exist simultaneously. Although no experi-\nmental data is available, Donoho et al. 44 indicated that \nthere are theoretical limits on the ability of attackers to \ndisguise their traffic using evasions for sufficiently long \nconnections. They assumed that the intruder has a maxi-\nmum delay tolerance, and they used wavelets and similar \nmultiscale methods to separate the short-term behavior \nof the flows (delay or chaff) from the long-term behav-\nior of the flows (the remaining correlation). However, \nthis method requires the intrusion connections to remain \nfor long periods, and the authors never experimented to \nshow the effectiveness against chaff perturbation. These \nevasions consist of local jittering of packet arrival times \nand the addition of superfluous packets. \n Blum et al. 45 proposed and analyzed algorithms for \nstepping-stone detection using ideas from computational \nlearning theory and the analysis of random walks. 
They \nachieved provable (polynomial) upper bounds on the \nnumber of packets needed to confidently detect and iden-\ntify stepping-stone flows with proven guarantees on the \nfalse positives and provided lower bounds on the amount \nof chaff that an attacker would have to send to evade \ndetection. However, their upper bounds on the number of \npackets required is large, while the lower bounds on the \namount of chaff needed for attackers to evade detection is \nvery small. They did not discuss how to detect stepping \nstones without enough packets or with large amounts of \nchaff and did not show experimental results. \n Zhang et al. 46 proposed and analyzed algorithms that \nrepresent that attackers cannot always evade detection \nonly by adding limited delay and independent chaff per-\nturbations. They provided the upper bounds on the number \nof packets needed to confidently detect stepping-stone \nconnections from nonstepping stone connections with \nany given probability of false attribution. \n Although there have been many stepping-stone attack \nattribution schemes, there is a lack of comprehensive \nexperimental evaluation of these schemes. Therefore, \nthere are no objective, comparable evaluation results on \nthe effectiveness and limitations of these schemes. Xin \net al. 47 designed and built a scalable testbed environment \nthat can evaluate all existing stepping-stone attack attri-\nbution schemes reproducibly, provide a stable platform \nfor further research in this area, and be easily reconfig-\nured, expanded, and operated with a user-friendly inter-\nface. This testbed environment has been established in \na dedicated stepping-stone attack attribution research \nlaboratory. An evaluation of proposed stepping-stone \ntechniques is currently under way. \n A group from Iowa State University proposed the first \neffective detection scheme to detect attack flows with \nboth delay and chaff perturbations. A scheme named \n “ datatick ” is proposed that can handle significant packet \nmerging/splitting and can attribute multiple application \nlayer protocols (e.g., X-Windows over SSH, Windows \nRemote Desktop, VNC, and SSH). A scalable testbed \nenvironment is also established that can evaluate all exist-\ning stepping-stone attack attribution schemes reproduc-\nibly. A group of researchers from North Carolina State \nUniversity and George Mason University utilizes timing-\nbased watermarking to trace back stepping-stone attacks. \nThey have proposed schemes to handle repacketization \nof the attack flow and a “ centroid-based ” watermarking \nscheme to detect attack flows with chaff. A group from \nJohns Hopkins University demonstrates the feasibility of \na “ post-mortem ” technique for traceback through indirect \nattacks. A group from Telcordia Technologies proposed a \nscheme that reroutes the attack traffic from uncooperative \nnetworks to cooperative networks such that the attacks can \nbe attributed. The BBN Technologies’ group integrates sin-\ngle-packet traceback and stepping-stone correlation. A dis-\ntributed traceback system called FlyTrap is developed for \nuncooperative and hostile networks. A group from Sparta \nintegrates multiple complementary traceback approaches \nand tests them in a TOR anonymous system. \n A research project entitled Tracing VoIP Calls \nthrough the Internet, led by Xinyuan Wang from George \nMason University, aims to investigate how VoIP calls \ncan be effectively traced. Wang et al. proposed to use the \n 44 D. L. Donoho, A. G. 
Flesia, U. Shankar, V. Paxson, J. Coit, and S. \nStaniford, “ Multiscale stepping-stone detection: Detecting pairs of jit-\ntered interactive streams by exploiting maximum tolerable delay, ” in \n Proceedings of the 5th International Symposium on Recent Advances \nin Intrusion Detection (RAID 2002), Oct. 2002. \n 45 A. Blum, D. Song, and S. Venkataraman, “ Detection of interactive \nstepping stones: Algorithms and confi dence bounds, ” in Proceedings \nof the 7th International Symposium on Recent Advances in Intrusion \nDetection (RAID 2004), Sept. 2004. \n 46 L. Zhang, A. G. Persaud, A. Johnson, and Y. Guan, “ Detection of \nStepping Stone Attack under Delay and Chaff Perturbations, ” in 25th \nIEEE International Performance Computing and Communications \nConference (IPCCC 2006), Apr. 2006. \n 47 J. Xin, L. Zhang, B. Aswegan, J. Dickerson, J. Dickerson, \nT. Daniels, and Y. Guan, “ A testbed for evaluation and analysis of step-\nping stone attack attribution techniques, ” in Proceedings of TridentCom \n2006, Mar. 2006. \n" }, { "page_number": 379, "text": "PART | II Managing Information Security\n346\nwatermarking technology in stepping-stone attack attri-\nbution into VoIP attribution and showed that VoIP calls \ncan still be attributed. 48 \n Strayer et al. has been supported by the U.S. Army \nResearch Office to research how to attribute attackers \nusing botnets. Their approach for detecting botnets is to \nexamine flow characteristics such as bandwidth, dura-\ntion, and packet timing, looking for evidence of botnet \ncommand and control activity. 49 \n 4. CRITICAL NEEDS ANALYSIS \n Although large-scale cyber terrorism seldom happens, \nsome cyber attacks have already shown their power in \ndamaging homeland security. For instance, on October \n21, 2002, all 13 Domain Name System (DNS) root name \nservers sustained a DoS attack. 50 Some root name servers \nwere unreachable from many parts of the global Internet \ndue to congestion from the attack traffic. Even now, we \ndo not know the real attacker and what his intention was. \n Besides the Internet itself, many sensitive institu-\ntions, such as the U.S. power grid, nuclear power plants, \nand airports, may also be attacked by terrorists if they \nare connected to the Internet, although these sites have \nbeen carefully protected physically. If the terrorists want \nto launch large-scale attacks targeting these sensitive \ninstitutions through the Internet, they will probably have \nto try several times to be successful. If we only sit here \nand do not fight back, they will finally find our vulner-\nabilities and reach their evil purpose. However, if we can \nattribute them to the source of attacks, we can detect and \narrest them before they succeed. \n Although there have been a lot of traceback and \nattribution schemes on IP spoofing and stepping-stone \nattacks, we still have a lot of open issues in this area. \nThe biggest issue is the deployment of these schemes. \nMany schemes (such as packet marking and log-based \ntraceback) need the change of Internet protocol on each \nintermediate router. Many schemes need many network \nmonitors placed all over the world. These are very diffi-\ncult to implement in the current Internet without support \nfrom government, manufacturers, and academics. It is \nnecessary to consider traceback demands when designing \nand deploying next-generation networks. \n 5. RESEARCH DIRECTIONS \n There are still some open problems in attack traceback \nand attribution. 
\n VoIP Attribution \n Like the Internet, the Voice over Internet Protocol (VoIP) \nalso provides unauthorized services. Therefore, some secu-\nrity issues existing in the Internet may also appear in VoIP \nsystems. For instance, a phone user may receive a call \nwith a qualified caller ID from her credit-card company, \nso she answers the critical questions about Social Security \nnumber, data of birth, and so on. However, this call actu-\nally comes from an attacker who fakes the caller ID using \na computer. Compared with a Public Switched Telephone \nNetwork (PSTN) phone or mobile phone, IP phones lack \nmonitoring. Therefore, it is desirable to provide schemes \nthat can attribute or trace back to the VoIP callers. \n Tracking Botnets \n A botnet is a network of compromised computers, \nor bots, commandeered by an adversarial botmaster. \nBotnets usually spread through viruses and communicate \nthrough the IRC channel. With an army of bots, bot con-\ntrollers can launch many attacks, such as spam, phish-\ning, key logging, and denial of service. Today more and \nmore scientists are interested in how to detect, mitigate, \nand trace back botnet attacks. \n Traceback in Anonymous Systems \n Another issue is that there exist a lot of anonymous sys-\ntems available all over the world, such as Tor. 51 Tor is a \ntoolset for anonymizing Web browsing and publishing, \ninstant messaging, IRC, SSH, and other applications that \nuse TCP. It provides anonymity and privacy for legal \nusers, and at the same time, it is a good platform via \nwhich to launch stepping-stone attacks. Communications \nover Tor are relayed through several distributed serv-\ners called onion routers . So far there are more than 800 \nonion routers all over the world. Since Tor may be seen \nas a special stepping-stone attack platform, it is interest-\ning to consider how to trace back attacks over Tor. \n 50 P. Vixie, G. Sneeringer, and M. Schleifer, “Events of Oct. 21, 2002”, \nNovember 24, 2002, www.isc.org/ops/f-root/october21.txt . \n 51 Tor system, http://tor.eff.org . \n 48 X. Wang, S. Chen, and S. Jajodia “ Tracking anonymous peer-\nto-peer VoIP calls on the Internet, ” In Proceedings of the 12th ACM \nConference on Computer Communications Security (CCS 2005), \nNovember 2005. \n 49 W.T. Strayer, R. Walsh, C. Livadas, and D. Lapsley, “ Detecting \nbotnets with tight command and control, ” Proceedings of the 31st \nIEEE Conference on Local Computer Networks (LCN), November \n15 – 16, 2006. \n" }, { "page_number": 380, "text": "Chapter | 20 Network Forensics\n347\n Online Fraudster Detection \nand Attribution \n One example is the auction frauds on eBay-like auction \nsystems. In the past few years, Internet auctions have \nbecome a thriving and very important online business. \nCompared with traditional auctions, Internet auctions \nvirtually allow everyone to sell and buy anything at any-\ntime from anywhere, at low transaction fees. However, \nthe increasing number of cases of fraud in Internet auc-\ntions has harmed this billion-dollar worldwide market. \nDue to the inherent limitation of information asymmetry \nin Internet auction systems, it is very hard, if not impos-\nsible, to discover all potential (i.e., committed and soon \nto be committed) frauds. Auction frauds are reported as \nascending in recent years and have become serious prob-\nlems. The Internet Crime Complaint Center (IC3) and \nInternet FraudWatch have both reported Internet auction \nfrauds as the most prevalent type of Internet fraud. 
52 , 53 \nInternet FraudWatch reported that auction fraud rep-\nresented 34% (the highest percentage) of total Internet \nfrauds in 2006, resulting in an average loss of $1331. \nInternet auction houses had tried to prevent frauds by \nusing certain types of reputation systems, but it has been \nshown that fraudulent users are able to manipulate these \nreputation systems. It is important that we develop the \ncapability to detect and attribute auction fraudsters. \n Tracing Phishers \n Another serious problem is the fraud and identity theft that \nresult from phishing, pharming, and email spoofing of all \ntypes. Online users are lured to a faked web site and tricked \nto disclose sensitive credentials such as passwords, Social \nSecurity numbers, and credit-card numbers. The phishers \ncollect these credentials in order to illegitimately gain \naccess to the user’s account and cause financial loss or \nother damages to the user. In the past, phishing attacks \noften involve various actors as a part of a secret criminal \nnetwork and take approaches similar to those of money \nlaundering and drug trafficking. Tracing phishers is a \nchallenging forensic problem and the solutions thereof \nwould greatly help law enforcement practitioners and \nfinancial fraud auditors in their investigation and deter-\nrence efforts. \n Tracing Illegal Content Distributor \nin P2P Systems \n Peer-to-peer (P2P) file sharing has gained popularity and \nachieved a great success in the past 10 years. Though the \nwell-known and popular P2P file sharing applications such \nas BitTorrent (BT), eDonkey, and Foxy may vary from \nregion to region, the trend of using P2P networks can be \nseen almost everywhere. In North America, a recent report \nstated that around 41%–44% of all bandwidth was used up \nby P2P file transfer traffic. With the increasing amount of \nsensitive documents and files accidentally shared through \nP2P systems, it is important to develop forensic solutions \nfor locating initial illegal content uploaders in P2P sys-\ntems. However, one technique would not be applicable to \nall P2P systems due to their architectural and algorithmic \ndifferences among different P2P systems. There are many \nlegal and technical challenges for tracing illegal content \ndistributors in P2P systems. \n 52 Internet Crime Complaint Center, Internet crime report, 2006 ic3 \nannual report, 2006. \n 53 Internet National Fraud Information Center, 2006 top 10 Internet \nscam trends from NCL’s fraud center, 2006. \n" }, { "page_number": 381, "text": "This page intentionally left blank\n" }, { "page_number": 382, "text": "349\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Firewalls \n Dr . Errin W. Fulp \n Wake Forest University \n Chapter 21 \n Providing a secure computing environment continues to \nbe an important and challenging goal of any computer \nadministrator. The difficulty is in part due to the increas-\ning interconnectivity of computers via networks, which \nincludes the Internet. Such interconnectivity brings great \neconomies of scale in terms of resources, services, and \nknowledge, but it has also introduced new security risks. \nFor example, interconnectivity gives illegitimate users \nmuch easier access to vital data and resources from \nalmost anywhere in the world. \n In a secure environment it is important to main-\ntain the privacy, integrity, and availability of data and \nresources. 
Privacy refers to limiting information access \nand disclosures to authorized users and preventing access \nby or disclosure to illegitimate users. In the United \nStates, a range of state and federal laws — for example, \nFERPA, FSMA, and HIPAA — define the legal terms of \nprivacy. Integrity is the trustworthiness of information. \nIt includes the idea of data integrity , which means data \nhas not been changed inappropriately. It can also include \n source integrity , which means the source of the data is \nwho it claims to be. Availability is the accessibility of \nresources. Of course these security definitions can also \nform the basis of reputation , which is vital to businesses. \n 1. NETWORK FIREWALLS \n Network firewalls are a vital component for maintain-\ning a secure environment and are often the first line \nof defense against attack. Simply stated, a firewall is \nresponsible for controlling access among devices, such \nas computers, networks, and servers. Therefore the most \ncommon deployment is between a secure and an inse-\ncure network (for example, between the computers you \ncontrol and the Internet), as shown in Figure 21.1 . This \nchapter refers to the secure network as the internal net-\nwork; the insecure network is the external network. \n The purpose of the firewall and its location is to \nhave network connections traverse the firewall, which \ncan then stop any unauthorized packets. A simple fire-\nwall will filter packets based on IP addresses and ports. \nA useful analogy is filtering your postal mail based \nonly on the information on the envelope. You typically \naccept any letter addressed to you and return any letter \naddressed to someone else. This act of filtering is essen-\ntially the same for firewalls. \n However, in response to the richer services pro-\nvided over modern networks (such as multimedia and \nencrypted connections), the role of the firewall has \ngrown over time. Advanced firewalls may also perform \nNetwork Address Translation (NAT), which allows mul-\ntiple computers to share a limited number of network \naddresses (explained later in this chapter). Firewalls \nmay provide service differentiation, giving certain traf-\nfic priority to ensure that data is received in a timely \nfashion. Voice over IP (VoIP) is one type of application \nthat needs differentiation to ensure proper operation. \nThis idea is discussed several times in this chapter, since \nthe use of multimedia services will only continue to \nincrease. Assuming that email and VoIP packets arrive \nInternal Network\nFirewall\nInternet\n(External Network)\n FIGURE 21.1 Example network consisting of an internal network \n(which is to be secured) and an external network (not trusted). The \nfirewall controls access between these two networks, allowing and \ndenying packets according to a security policy. \n" }, { "page_number": 383, "text": "PART | II Managing Information Security\n350\nat the firewall at the same time, VoIP packets should be \nprocessed first because the application is more suscepti-\nble to delays. \n Firewalls may also inspect the contents (the data) of \npackets. This can be done to filter other packets (learn \nnew connections), block packets that contain offensive \ninformation, and/or block intrusion attempts. Using the \nmail analogy again, in this case you open letters and \ndetermine what to accept based on what is inside. For \nexample, you unfortunately have to accept bills, but you \ncan deny credit-card solicitations. 
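As a toy illustration of content (payload) inspection, not the mechanism of any particular firewall product, the following sketch accepts or denies a packet based on whether its data matches a blocked pattern, much like opening the envelope in the mail analogy:

```python
import re

# Hypothetical deny patterns; a real product would use curated signatures.
BLOCKED_PATTERNS = [re.compile(rb"credit[- ]card offer", re.IGNORECASE)]

def inspect_payload(payload: bytes) -> str:
    """Return 'deny' if the packet data matches a blocked pattern, else 'accept'."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(payload):
            return "deny"
    return "accept"

print(inspect_payload(b"Exclusive credit-card offer inside!"))  # deny
print(inspect_payload(b"Your monthly bill is attached."))       # accept
```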
\n The remainder of this chapter provides an overview \nof firewall policies, designs, features, and configura-\ntions. Of course, technology is always changing, and \nnetwork firewalls are no exception. However, the intent \nof this chapter is to describe aspects of network firewalls \nthat tend to endure over time. \n 2. FIREWALL SECURITY POLICIES \n When a packet arrives at a firewall, a security policy is \napplied to determine the appropriate action. Actions include \naccepting the packet, which means the packet is allowed to \ntravel to the intended destination. A packet can be denied, \nwhich means the packet is not permitted to travel to the \nintended destination (it is dropped or possibly is bounced \nback). The firewall may also log information about the \npacket, which is important to maintain certain services. \n It is easy to consider a firewall policy as an ordered list \nof rules, as shown in Table 21.1 . Each firewall rule consists \nof a set of tuples and an action. Each tuple corresponds to \na field in the packet header, and there are five such fields \nfor an Internet packet: Protocol, Source Address, Source \nPort, Destination Address, and Destination Port. \n The firewall rule tuples can be fully specified or con-\ntain wildcards (*) in standard prefix format. However, \neach tuple represents a finite set of values; therefore, the \nset of all possible packets is also finite. (A more concise \nmathematical model will be introduced later in the chap-\nter.) It is possible to consider the packet header consist-\ning of tuples, but each tuple must be fully specified. \n As packets pass through a firewall, their header \ninformation is sequentially compared to the fields of \na rule. If a packet’s header information is a subset of a \nrule, it is said to be a match, and the associated action, to \naccept or reject, is performed. Otherwise, the packet is \ncompared to the next sequential rule. This is considered \na first-match policy since the action associated with the \nfirst rule that is matched is performed. Other matching \nstrategies are discussed at the end of this section. \n For example, assume that a packet has the follow-\ning values in the header: The protocol is TCP, source \nIP is 210.1.1.1, source port is 3080, destination IP is \n220.2.33.8, and destination port is 80. When the packet \narrives it is compared to the first rule, which results in \nno match since the rule is for UDP packets. The fire-\nwall then compares the packet second rule, which \nresults in no match since the source IP is different. The \npacket does not match the third rule, but it does match \nthe fourth rule. The rule action is performed and so the \npacket is allowed to pass the firewall. \n A default rule, or catch-all, is often placed at the end of \na policy with action reject. The addition of a default rule \nmakes a policy comprehensive, indicating that every packet \nwill match at least one rule. In the event that a packet \nmatches multiple rules, the action of the first matching rule \nis taken. Therefore the order of rules is very important. \n If a default rule (a rule that matches all possible pack-\nets) is placed at the beginning of a first-match policy, no \n TABLE 21.1 A Security Policy Consisting of Six Rules, Each of \nWhich Has Five Parts (Tuples) \n No. 
      Protocol   Source IP    Source Port   Destination IP   Destination Port   Action
  1   UDP        190.1.1.*    *             *                80                 deny
  2   TCP        180.*        *             180.*            90                 accept
  3   UDP        210.1.*      *             *                90                 accept
  4   TCP        210.*        *             220.*            80                 accept
  5   UDP        190.*        *             *                80                 accept
  6   *          *            *             *                *                  deny

other rule will match. This situation is an anomaly referred to as shadowing. We'll talk more about policy anomalies later in this chapter. Policies that employ this form of short-circuit evaluation are called first-match policies and account for the majority of firewall implementations.

 Rule-Match Policies

 Multiple rules of a single firewall policy may match a packet — for example, a packet could match rules 1, 5, and 6 of the policy in Table 21.1. Given multiple possible matches, the rule-match policy describes the rule the firewall will apply to the packet. The previous section described the most popular match policy, first match, which will apply the first rule that is a match.

 Other match policies are possible, including best match and last match. For best-match policies, the packet is compared against every rule to determine which rule most closely matches every tuple of the packet. Note that the relative order of the rules in the policy does not affect the best-match result; therefore shadowing is not an issue. It is interesting to note that best match is the default criterion for IP routing, which is not surprising since firewalls and routers perform similar tasks. If a packet matches multiple rules under a last-match criterion, the action of the last rule matched is performed. Note that rule order is important for a last-match policy.

 3. A SIMPLE MATHEMATICAL MODEL FOR POLICIES, RULES, AND PACKETS

 At this point it is perhaps useful to describe firewall policies, firewall rules, and network packets using set theory. 1 The previous section defined the parts and fields of rules and packets as tuples. A tuple can be modeled as a set. For example, assume the tuple for IP source addresses is 198.188.150.*. Then this tuple represents the set of 256 addresses that range from 198.188.150.0 to 198.188.150.255. Each tuple of a packet consists of a single value, which is expected, since a packet has only one source and one destination.

 The tuples (which are sets) that form a rule collectively define a set of packets that match. For example, consider the following rule:

 Proto = TCP, SIP = 190.150.140.38, SP = 188, DIP = 190.180.39.*, DP = 80, action = accept

 This rule defines a set of 256 unique TCP packet headers with source address 190.150.140.38 and source port 188 destined for any of the 256 computers with destination port 80 and destination IP address 190.180.39.0 through 190.180.39.255, perhaps a Web server farm. Therefore the rule describes a set of 256 packets that will be accepted. If the source port were defined as *, the rule would describe a set of 16,777,216 different packet headers, since

 2^16 × 2^8 = 65,536 × 256 = 16,777,216

 Using set theory also provides a simple definition of a match. A match occurs when every tuple of a packet is a proper subset of the corresponding rule; a short illustrative sketch of this test follows.
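The following minimal Python sketch (our own illustrative representation, not the chapter's or any product's implementation) models each rule field as a value or prefix wildcard and evaluates a policy with first-match semantics; address wildcards are treated as simple string prefixes, a simplification of real address matching. It walks the policy of Table 21.1 against the example packet from the text.

```python
def field_matches(packet_value: str, rule_value: str) -> bool:
    """A rule field matches if it is a wildcard (*), an exact value,
    or a prefix ending in '*' that the packet value starts with."""
    if rule_value == "*":
        return True
    if rule_value.endswith("*"):
        return packet_value.startswith(rule_value[:-1])
    return packet_value == rule_value

def first_match(policy, packet):
    """Return the action of the first rule whose every field contains
    the corresponding packet field (first-match semantics)."""
    fields = ("proto", "sip", "sport", "dip", "dport")
    for rule in policy:
        if all(field_matches(packet[f], rule[f]) for f in fields):
            return rule["action"]
    return "deny"   # behave as if a catch-all deny ends the policy

# The six rules of Table 21.1 and the example packet from the text:
policy = [
    {"proto": "UDP", "sip": "190.1.1.*", "sport": "*", "dip": "*",     "dport": "80", "action": "deny"},
    {"proto": "TCP", "sip": "180.*",     "sport": "*", "dip": "180.*", "dport": "90", "action": "accept"},
    {"proto": "UDP", "sip": "210.1.*",   "sport": "*", "dip": "*",     "dport": "90", "action": "accept"},
    {"proto": "TCP", "sip": "210.*",     "sport": "*", "dip": "220.*", "dport": "80", "action": "accept"},
    {"proto": "UDP", "sip": "190.*",     "sport": "*", "dip": "*",     "dport": "80", "action": "accept"},
    {"proto": "*",   "sip": "*",         "sport": "*", "dip": "*",     "dport": "*",  "action": "deny"},
]
packet = {"proto": "TCP", "sip": "210.1.1.1", "sport": "3080", "dip": "220.2.33.8", "dport": "80"}
print(first_match(policy, packet))   # accept (first match is rule 4)
```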
In this chapter a proper set can be thought of as one set completely contained within another. For example, every tuple in the following packet is a proper subset of the preceding rule; therefore it is considered a match.

 Proto = TCP, SIP = 190.150.140.38, SP = 188, DIP = 190.180.39.188, DP = 80

 A set model can also be used to describe a firewall policy. The list of rules in a firewall policy collectively describes a set of packets. There are three distinct (nonoverlapping) sets of possible packets. The first set, A(R), describes packets that will be accepted by the policy R. The second set, D(R), defines the set of packets that will be dropped by the policy. The last set, U(R), is the set of packets that do not match any rule in the policy. Since the sets do not overlap, the intersection of any two of A(R), D(R), and U(R) is the empty set.

 Using set theory we can also define the set P that describes all possible packet headers, of which there are approximately 7.7 × 10^25. A packet is a single element in this large set.

 Using the accept, drop, nonmatch, and possible packet sets, we can describe useful attributes of a firewall policy. A firewall policy R is considered comprehensive if any packet from P will match at least one rule. In other words, the union of A(R) and D(R) equals P (therefore A(R) ∪ D(R) = P), or equivalently U(R) is the empty set (therefore U(R) = Ø). Of course, it is better if a policy is comprehensive, and generally the last rule (the catch-all) makes this true.

 Finally, these mathematical models also allow the comparison of policies, the most important reason for introducing a somewhat painful section. Assume two firewall policies R and S exist. We can say the two policies are equivalent if the accept, drop, and nonmatch sets are the same. This does not imply that the two policies have the same rules, just that given a packet, both policies will

 1 Errin W. Fulp, "Optimization of network firewall policies using directed acyclical graphs," in Proceedings of the IEEE Internet Management Conference, 2005.

have the same action. This is an important property that will be mentioned again and again in this chapter.

 4. FIRST-MATCH FIREWALL POLICY ANOMALIES

 As described in the previous sections, for most firewalls the first rule that matches a packet is typically applied. Given this match policy, more specific rules (those that match few packets) typically appear near the beginning of the policy, whereas more general rules are located at the end. Using the set theory model, the number of elements in the rule sets increases as you move toward the last rule.

 Unfortunately, it is easy to introduce anomalies when developing and managing a firewall policy. This is especially true as the policy grows in size (number of rules) and complexity. An anomaly is an unintended consequence of adding rules in a certain order.

 A simple and very common anomaly is rule shadowing. Shadowing occurs when an earlier rule r_i matches every packet that another, later rule r_j matches, where i and j are rule numbers. Assume rules are numbered sequentially starting at the first rule and i < j. Using the mathematical model, shadowing occurs when every tuple in r_j is a proper subset of the corresponding tuple in r_i.
For example, shadowing occurs between the following two rules:

 Proto = TCP, SIP = 190.150.140.38, SP = 188, DIP = 190.180.39.*, DP = 80, action = accept
 Proto = TCP, SIP = 190.150.140.38, SP = 188, DIP = 190.180.39.180, DP = 80, action = drop

 What is the problem? Nothing, if the two rules have the same action (there is a performance issue described in the next section). However, if the rules have different actions, there is a potential issue. In the preceding example the second rule is never matched; therefore the packet [Proto = TCP, SIP = 190.150.140.38, SP = 188, DIP = 190.180.39.180, DP = 80] will always be accepted. Was this the intent? If so, the second rule should be removed.

 Another policy anomaly is half shadowing, where only a portion of the packets of a later rule matches an earlier rule (although not necessarily half of the packets in the set). For example, consider the following two rules:

 Proto = TCP, SIP = 190.150.140.38, SP = 188, DIP = 190.180.39.*, DP = 80, action = accept
 Proto = TCP, SIP = 190.150.140.38, SP = *, DIP = 190.180.39.180, DP = 80, action = drop

 In this example, the second rule is partially shadowed by the first rule. By itself, the second rule will drop any TCP packet arriving from the address 190.150.140.38 and destined for the Web server (because of destination port 80) 190.180.39.180. When the first rule is added, a packet from the address 190.150.140.38 and port 188 will be accepted. Was this the intent? Only the firewall administrator would know. Regardless, it is difficult to detect.

 Other firewall policy anomalies are possible. Unfortunately, detecting these problems is not easy, since the anomaly may be introduced on purpose (then technically it is not an anomaly). This has created a new area of research, and some software packages are available to help find problems. However, only the administrator can ultimately determine whether the rule ordering is correct. Note that best-match policies do not have these issues, and this reason is often used to promote their use. However, best-match policies are typically considered difficult for the administrator to manage.

 5. POLICY OPTIMIZATION

 Given that a network firewall will inspect all packets transmitted between multiple networks, these devices need to determine the appropriate match with minimal delay. Often the number of firewall rules in a policy will impact the firewall performance. Given that every rule requires some processing time, more rules will require more time, on average. There are a few ways to improve firewall performance with regard to the security policy. Note that this section is more applicable to software-based than hardware-based firewalls.

 Policy Reordering

 Given a security policy, it may be possible to reorder the rules such that more popular rules appear earlier. 2 More popular refers to how often the rule is a match. For example, over time it is possible to determine how many times a rule is matched. Dividing this number by the total number of packets matched for the entire policy yields the probability that this rule is considered the first match.
If the match policy is first match, then placing more popular rules earlier in the policy will reduce the average number of rule comparisons. The average number of rule comparisons performed, E[n], is given by the following equation:

 E[n] = Σ_{i=1}^{n} i × p_i

 where n is the number of rules in the policy and p_i is the probability that rule i is the first match. Although reordering is advantageous, it must be done so that the policy's integrity is maintained.

 2 Errin W. Fulp, "Optimization of network firewall policies using directed acyclical graphs," in Proceedings of the IEEE Internet Management Conference, 2005.

 Policy integrity refers to the policy intent, so the policy will accept and deny the same packets before and after the reorganization of rules. For example, rule six in Table 21.1 may be the most popular rule (the default deny), but placing it at the beginning of the policy does not maintain integrity. However, if rule two is more popular than rule one, it could be placed at the beginning of the policy and integrity will be maintained. Therefore the order between certain rules must be maintained.

 This can be described mathematically using the models introduced in the earlier section. Assume a firewall policy R exists. After reordering the rules, let's call the firewall policy S. If A(R) = A(S) and D(R) = D(S), then the policies R and S are equivalent and integrity is maintained. As a result, S can be used in place of R in the firewall, which should improve performance.

 Although a simple concept, reordering rules to maintain integrity is provably difficult for large policies. 3, 4 Fortunately, commercial software packages are now available to optimize rules to improve performance.

 Combining Rules

 Another method for improving firewall performance is removing unnecessary rules. This can be accomplished by first removing redundant rules (rules that are shadowed with the same action). For example, the second rule here is unnecessary:

 Proto = TCP, SIP = 190.150.140.38, SP = 188, DIP = 190.180.39.*, DP = 80, action = drop
 Proto = TCP, SIP = 190.150.140.38, SP = 188, DIP = 190.180.39.180, DP = 80, action = drop

 This is because the first rule matches any packet the second rule does, and the first rule has the same action (different actions would be an anomaly, as described in the earlier sections).

 Another example occurs when two nonshadowing rules can be combined into a single rule. Consider the following two rules:

 Proto = TCP, SIP = 190.150.140.38, SP = 188, DIP = 190.180.39.*, DP = 80, action = accept
 Proto = UDP, SIP = 190.150.140.38, SP = 188, DIP = 190.180.39.*, DP = 80, action = accept

 These two rules can be combined into the following rule, which substitutes the wildcard for the protocol field:

 Proto = *, SIP = 190.150.140.38, SP = 188, DIP = 190.180.39.*, DP = 80, action = accept

 Combining rules to form a smaller policy is better in terms of performance as well as management in most cases, since fewer rules should be easier for the administrator to understand. Finding such combinations takes practice; fortunately, there are some software packages available to help.
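A sketch of how such redundant or shadowed rules can be found follows; it reuses the prefix-wildcard field representation from the earlier sketch and simply checks, for every pair of rules, whether an earlier rule covers a later one. This is an illustration of the idea only, not one of the commercial optimizers mentioned above.

```python
def field_covers(outer: str, inner: str) -> bool:
    """True if every value allowed by the inner field is also allowed by the outer field."""
    if outer == "*":
        return True
    if outer.endswith("*"):
        prefix = outer[:-1]
        return inner.startswith(prefix) or (inner.endswith("*") and inner[:-1].startswith(prefix))
    return outer == inner

def analyze(policy):
    """Report later rules completely covered by an earlier rule:
    same action -> redundant (a removal candidate); different action -> shadowing anomaly."""
    fields = ("proto", "sip", "sport", "dip", "dport")
    findings = []
    for i, earlier in enumerate(policy):
        for j in range(i + 1, len(policy)):
            later = policy[j]
            if all(field_covers(earlier[f], later[f]) for f in fields):
                kind = "redundant" if earlier["action"] == later["action"] else "shadowing anomaly"
                findings.append((i + 1, j + 1, kind))
    return findings

# With the two 'drop' rules from the text, rule 2 is reported as redundant;
# if the second action were 'accept', it would be reported as a shadowing anomaly instead.
rules = [
    {"proto": "TCP", "sip": "190.150.140.38", "sport": "188", "dip": "190.180.39.*",   "dport": "80", "action": "drop"},
    {"proto": "TCP", "sip": "190.150.140.38", "sport": "188", "dip": "190.180.39.180", "dport": "80", "action": "drop"},
]
print(analyze(rules))   # [(1, 2, 'redundant')]
```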
Default Accept or Deny?

 It may be worth a few lines to discuss whether a default accept policy provides better performance than a default deny. This debate occurs from time to time; generally speaking, the question is better answered with regard to management of the policy and security. Is it easier to define the appropriate policy in terms of what is denied or what should be accepted?

 Assuming that the administrator defines one (accepted or denied), the default behavior becomes the other. A "define what is accepted and default deny" approach is the most common. It can be considered pessimistic, since it assumes that if you are not certain about a packet, you should drop it.

 6. FIREWALL TYPES

 Firewalls can be categorized into three general classes: packet filters, stateful firewalls, and application layer firewalls. 5 Each type provides a certain type of security and is best described within the context of a network layer model — for example, the Open Systems Interconnect (OSI) or TCP/IP model, as shown in Figure 21.2.

 3 Errin W. Fulp, "Optimization of network firewall policies using directed acyclical graphs," in Proceedings of the IEEE Internet Management Conference, 2005.
 4 M. Yoon and Z. S. Zhang, "Reducing the size of rule set in a firewall," in Proceedings of the IEEE International Conference on Communications, 2007.
 5 J. R. Vacca and S. R. Ellis, Firewalls Jumpstart for Network and Systems Administrators, Elsevier, 2005.

 Recall that the TCP/IP model consists of four basic layers: data link, network (IP), transport (TCP and UDP), and application. Each layer is responsible for providing a certain service to the layer above it. The first layer (data link) is responsible for transmitting information across the local area network (LAN); examples include Ethernet and 802.11 networks. The second layer (network, implemented as IP) concerns routing information across interconnected LANs. The third layer (transport, implemented as TCP and UDP) concerns the end-to-end connection between communicating devices. The highest layer (application) is the application using the network.

 Packet Filter

 A packet filter is the most basic type of firewall, since it only filters at the network and transport layers (layers two and three). Therefore a packet filter's operations are similar to a network router's. The packet filter receives a packet, determines the appropriate action based on the policy, then performs the action on the packet. This will be based on the information from the network and transport layers. Therefore, a packet filter only considers the IP addresses (layer two information), the port numbers (layer three information), and the transport protocol type (which is carried in the layer two header). Furthermore, since all this information resides in the packet header, there is no need to inspect the packet data (payload). It is possible to filter based on the data link layer, but this chapter only considers the network layer and above. Another important note is that the packet filter has no memory (or state) regarding the packets that have arrived and departed.

 Stateful Packet Firewalls

 Stateful firewalls perform the same operations as packet filters but also maintain state about the packets that have arrived.
Given this additional functionality, it is now \npossible to create firewall rules that allow network ses-\nsions (sender and receiver are allowed to communicate), \nwhich is critical given the client/server nature of most \ncommunications (that is, if you send packets, you prob-\nably expect something back). Also note the change in \nterminology from packet filter to firewall. Many people \nsay that when state is added to a packet filter, it becomes \na firewall. This is really a matter of opinion. \n For example, assume a user located in the inter-\nnal (protected) network wants to contact a Web server \nlocated in the Internet. The request would be sent from \nthe user to the Web server, and the Web server would \nrespond with the requested information. A packet filter \nwould require two rules, one allowing departing packets \n(user to Web server) and another allowing arriving pack-\nets (Web server to user). There are several problems with \nthis approach, since it is difficult to determine in advance \nwhat Web servers a user will connect to. Consider hav-\ning to add a new rule for every Web server that is or \nwould ever be contacted. \n A stateful firewall allows connection tracking, \nwhich can allow the arriving packets associated with an \naccepted departing connection. Recall that a connection \nor session can be considered all the packets belonging \nto the conversation between computers, both sender to \nreceiver, and vice versa. Using the Web server example, \na single stateful rule can be created that accepts any Web \nrequests from the secure network and the associated \nreturn packets. A simple way to add this capability is to \nhave the firewall add to the policy a new rule allowing \nreturn packets. Of course, this new rule would be elimi-\nnated once the connection is finished. Knowing when a \nconnection is finished is not an easy task, and ultimately \ntimers are involved. Regardless, stateful rules were a sig-\nnificant advancement for network firewalls. \n Application Layer Firewalls \n Application layer firewalls can filter traffic at the net-\nwork, transport, and application layer. Filtering at the \napplication layer also introduces new services, such as \nproxies. Application proxies are simply intermediaries \nfor network connections. Assume that a user in the inter-\nnal network wants to connect to a server in the external \nnetwork. The connection of the user would terminate at \nthe firewall; the firewall would then create a connection \nto the Web server. It is important to note that this occurs \nseamlessly to the user and server. \n As a result of the proxy the firewall can potentially \ninspect the contents of the packets, which is similar to \nHTTP and SMTP\nExample\nNetwork\nLayer\nApplication\nTransport\nNetwork\nData Link\nTCP and UDP\nIPv4 and IPv6\nIEEE 802.3 and IEEE 802.11\n1\n2\n3\n4\n FIGURE 21.2 Layered model for computer networks and example \nimplementations for each layer. \n" }, { "page_number": 388, "text": "Chapter | 21 Firewalls\n355\nan intrusion detection system (IDS). This is increasingly \nimportant since a growing number of applications, as well \nas illegitimate users, are using nonstandard port numbers \nto transmit data. Application layer firewalls are also nec-\nessary if an existing connection may require the establish-\nment of another connection — for example, the Common \nObject Resource Broker Architecture (CORBA). 
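As a rough illustration of the connection-tracking idea described earlier in this section (a toy model, not the state machine of any specific firewall), the sketch below records outbound connections that the rule set accepted and then allows the corresponding return packets, expiring entries after an idle timeout since, as noted above, knowing when a connection is finished ultimately involves timers.

```python
import time

IDLE_TIMEOUT = 60.0   # seconds; real firewalls use per-protocol timers

class ConnectionTracker:
    def __init__(self):
        self.table = {}   # (proto, src, sport, dst, dport) -> last-seen timestamp

    def note_outbound(self, proto, src, sport, dst, dport):
        """Called when an outbound packet is accepted by the rule set."""
        self.table[(proto, src, sport, dst, dport)] = time.time()

    def allows_return(self, proto, src, sport, dst, dport):
        """A packet arriving from outside is allowed if it is the reverse
        of a recently seen outbound connection."""
        key = (proto, dst, dport, src, sport)          # reversed direction
        seen = self.table.get(key)
        if seen is None or time.time() - seen > IDLE_TIMEOUT:
            self.table.pop(key, None)                  # expire stale state
            return False
        return True

tracker = ConnectionTracker()
tracker.note_outbound("TCP", "10.0.0.5", 40321, "203.0.113.7", 80)          # user -> Web server
print(tracker.allows_return("TCP", "203.0.113.7", 80, "10.0.0.5", 40321))   # True (return traffic)
print(tracker.allows_return("TCP", "198.51.100.9", 80, "10.0.0.5", 40321))  # False (no matching state)
```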
\n Increasingly, firewalls and other security devices are \nbeing merged into a single device that can simplify man-\nagement. For example, an intrusion prevention system (IPS) \nis a combination firewall and IDS. An IPS can filter packets \nbased on the header, but it can also scan the packet contents \n(payload) for viruses, spam, and certain types of attacks. \n 7. HOST AND NETWORK FIREWALLS \n Firewalls can also be categorized based on where they \nare implemented or what they are intended to protect —\n host or network. 6 Host firewalls typically protect only \none computer. Host firewalls reside on the computer they \nare intended to protect and are implemented in software \n(this is described in the next section). \n In contrast, network firewalls are typically stan-\ndalone devices. Located at the gateway(s) of a network \n(for example, the point at which a network is connected \nto the Internet), a network firewall is designed to protect \nall the computers in the internal network. As a result, a \nnetwork firewall must be able to handle high bandwidth, \nas fast as the incoming connection, and process packets \nquickly. A network firewall gives administrators a single \npoint at which to implement and manage security, but it \nis also a single point of failure. \n There are many different network configurations that \ninvolve firewalls. Each provides different levels of secu-\nrity and management complexity. These configurations \nare described in detail in a later section. \n 8. SOFTWARE AND HARDWARE \nFIREWALL IMPLEMENTATIONS \n As described in the previous sections, a firewall applies a \npolicy to an arriving packet to determine the appropriate \nmatch. The policy is an ordered list of rules, and typically \nthe first rule that matches the packet is performed. This \noperation can be performed primarily in either software \nor hardware. Performance is the principal reason to \nchoose one implementation. \n Software firewalls are application software that can \nexecute on commercial hardware. Most operating sys-\ntems provide a firewall to protect the host computer \n(often called a host firewall ). For example, iptables is \nthe firewall application provided as a part of the Linux \noperating system. Several major firewall companies offer \na software version of their network firewall. It is possi-\nble to buy off-the-shelf hardware (for example, a server) \nand run the firewall software. The advantage of software \nfirewalls is their ability to upgrade without replacing the \nhardware. In addition, it is easier to add new features —\n for example, iptables can easily perform stateful filtering, \nNATing, and quality-of-service (QoS) operations. It is as \nsimple as updating and configuring the firewall software. \n Hardware firewalls rely on hardware to perform \npacket filtering. The policy and matching operation is \nperformed in dedicated hardware — for example, using \na field-programmable gate array (FPGA). The major \nadvantages of a hardware firewall are increased band-\nwidth and reduced latency. Note that bandwidth is the \nnumber of packets a firewall can process per unit of time, \nand latency is the amount of time require to process a \npacket. They are not the same thing, and IETF RFC 3511 \nprovides a detailed description of the process of testing \nfirewall performance. 7 \n Hardware firewalls can operate at faster bandwidths, \nwhich translates to more packets per second (10 Gbps \nis easily achieved). 
In addition, hardware firewalls can \noperate faster since processing is performed in dedi-\ncated hardware. The firewall operates almost at wireline \nspeeds; therefore, very little delay is added to accepted \npackets. This is important since more applications, \nsuch as multimedia, need QoS for their operation. The \ndisadvantage is that upgrading the firewall may require \nreplacement of hardware, which can be more expensive. \n 9. CHOOSING THE CORRECT FIREWALL \n The previous sections have described several categories \nof firewalls. Firewalls can be packet filters or stateful \nfirewalls and/or provide application layer processing; \nimplemented at the host or network or implemented in \nsoftware or hardware. Given the possible combinations, \nit can be difficult to choose the appropriate technology. \n When determining the appropriate technology, it is \nimportant to first understand the current and future secu-\nrity needs of the computer system being protected. Given \na large number of hosts, a network firewall is probably \n 6 J.R. Vacca and S. R. Ellis, Firewalls Jumpstart for Network and \nSystems Administrators , Elsevier, 2005. \n 7 B. Hickman, D. Newman, S. Tadjudin, and T. Martin, Benchmarking \nMethodology for Firewall Performance , IETF RFC 3511, 2003. \n" }, { "page_number": 389, "text": "PART | II Managing Information Security\n356\nthe easiest to manage. Requiring and relying on every \ncomputer in an internal network to operate a host fire-\nwall may not be realistic. \n Furthermore, updating the policy in a multiple host-\nbased firewall system would be difficult. However, a \nsingle network firewall may imply that a single policy \nis suitable for all computers in the internal network. This \ngenerally is not the case when there are servers and com-\nputers in the internal network. More expensive network \nfirewalls will allow the implementation of multiple poli-\ncies or objects (described in more detail in the next sec-\ntion). Of course, if speed is an issue, a hardware firewall \nmay justify the generally higher cost. \n If scanning for viruses and spam and/or discovering \nnetwork attacks are also requirements, a more advanced \nfirewall is needed. Sometimes called an intrusion preven-\ntion system (IPS), these advanced devices filter based on \npacket headers and inspect the data transmitted for certain \nsignatures. In addition, these devices can monitor traffic \n(usage and connection patterns) for attacks. For example, \na computer that attempts to connect to a range of ports on \nanother computer is probably port scanning . This can be \ndone to determine what network-oriented programs are \nrunning and in some cases even the operating system can \nbe determined. It is a good idea to block this type of net-\nwork reconnaissance, which an advanced firewall can do. \n Although already introduced in this chapter, it is \nworth mentioning IETF RFC 3511 again. This document \ndescribes how firewalls should be tested to measure per-\nformance. This information helps the buyer understand \nthe performance numbers cited by manufacturers. It \nis also important to ask whether the device was tested \nunder RFC 3511 conditions. \n 10. FIREWALL PLACEMENT \nAND NETWORK TOPOLOGY \n A simple firewall typically separates two networks: \none trusted (internal — for example, the corporate net-\nwork) and one untrusted (external — for example, the \nInternet). 
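 On a Linux gateway sitting between the two networks, this arrangement reduces to a short forwarding policy. The sketch below is illustrative only; the interface names eth0 (external) and eth1 (internal) are assumptions:

    # Drop forwarded traffic unless a rule accepts it
    iptables -P FORWARD DROP
    # Internal hosts may initiate connections toward the Internet
    iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
    # Replies to those connections are allowed back in; unsolicited traffic is not
    iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT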
In this simple arrangement, one security pol-\nicy is applied to secure all the devices connected to the \ninternal network. This may be sufficient if all the com-\nputers perform the same duties, such as desktop comput-\ners; however, if the internal network consists of different \ntypes of computers (in terms of the services provided), a \nsingle policy or level of protection is not sufficient or is \ndifficult to create and maintain. \n For example, the security policy for a Web server \nwill be different from the security policy for a desktop \ncomputer. This is primarily due to the type of external \nnetwork access each type of computer needs. Web serv-\ners would probably accept almost any unsolicited HTTP \n(port 80) requests arriving from the Internet. However, \ndesktop computers probably do not serve Web pages and \nshould not be subject to such requests. \n Therefore it is reasonable to expect that different classes \nof computers will need different security policies. Assume \nan internal network consists of one Web server and several \ndesktop computers. It is possible to locate the Web server \non the outside, on the firewall (on the side of the external \nnetwork), but that would leave the Web server without any \nfirewall protection. Furthermore, given that the Web server \nis on the outside, should the administrator trust it? \n Of course, the Web server could be located in the inter-\nnal network (see Figure 21.3 ) and a rule can be added to \nthe policy to allow Web traffic to the Web server (often \nInternal Network\nInternal Network\nInternal Network\nFirewall\nFirewall\nFirewall 1\nFirewall 2\nWeb Server\nWeb Server\nWeb Server\nDMZ\nDMZ\nInternet\n(External Network)\nInternet\n(External Network)\nInternet\n(External Network)\n FIGURE 21.3 Example firewall configurations. Left configuration has a Web server outside the internal network. The middle configuration has \nthe Web server in a demilitarized zone. The right configuration is another example of a demilitarized zone. \n" }, { "page_number": 390, "text": "Chapter | 21 Firewalls\n357\ncalled poking a hole ). However, if the Web server is com-\npromised, the remaining computers in the internal network \nare vulnerable. Most attacks are multistage, which means \nthe first target of attack is rarely the objective. Most attack-\ners use one computer to compromise another until the \nobjective is achieved. Therefore it is a good practice to sep-\narate machines and services, even in the internal network. \n Demilitarized Zones \n Another strategy often employed to provide differ-\nent types of protection is a demilitarized zone (DMZ), \nas shown in Figure 21.3 . Assume the firewall has three \nconnections (sometimes called a multihomed device) —\n one for the external network (Internet), one for the Web \nserver, and another for the internal network. For each \nconnection to the firewall (referred to as an interface ), a \ndifferent firewall policy can be enforced, providing dif-\nferent forms of protection. The connection for the Web \nserver is called the DMZ, and it prevents users from the \nexternal network getting direct access to the other com-\nputers in the internal network. Furthermore, if the Web \nserver is compromised, the internal network still has \nsome protection, since the intruder would have to cross \nthe firewall again to access the internal network. \n If the firewall only supports two interfaces (or just \none policy), multiple firewalls can be used to achieve \nthe same DMZ effect. 
The first firewall would be placed \nbetween the external network and the Web server. The \nsecond firewall would connect the Web server to the \ninternal network. Given this design, the first firewall \npolicy would be less restrictive than the second. Again, \ndifferent levels of security are now possible. \n Grouping machines together based on similar firewall \nsecurity needs is increasingly common and is seen as a \ngood practice. Large networks may have server farms or \na group of servers that perform similar services. As such, \neach farm is connected to a firewall and given a unique \nsecurity policy. For example, users from the internal net-\nwork may have access to administrative servers, but Web \nservers may have no access to the administrative servers. \nSuch groupings are also referred to as enclaves . \n Perimeter Networks \n A perimeter network is a subnetwork of computers \nlocated outside the internal network. 8 Given this defi-\nnition, a DMZ can be considered a type of perimeter \nnetwork. The primary difference between a DMZ and \na perimeter network is the way packets arriving and \ndeparting the subnetwork are managed. \n In a perimeter network, the device that connects \nthe external network to the perimeter network is a \n router , whereas a DMZ uses a firewall to connect to the \nInternet. For a DMZ, a firewall policy will be applied to \nall packets arriving from the external network (Internet). \nThe firewall can also perform advanced services such as \nNATing and packet payload inspection. Therefore it is \neasy to see that a DMZ offers a higher level of protec-\ntion to the computers that are part of the perimeter and \ninternal networks. \n Two-Router Configuration \n Another interconnection of subnetworks is the two-\nrouter configuration. 9 This system consists of an external \nrouter, a bastion host, an internal router, and an internal \nnetwork. The bastion host is a computer that serves as a \nfilter and/or proxy for computers located in the internal \nnetwork. \n Before describing the specifics of the two-router \nconfiguration, let’s define the duties of a bastion host. \nA bastion host is the first device any external computer \nwill contact before accessing a computer in the internal \nnetwork. Therefore the bastion host is fully exposed to \nthe Internet and should be made as secure as possible. \nThere are several types of bastion hosts, including victim \nmachines that provide insecure but necessary services. \nFor our discussion the bastion host will provide proxy \nservices, shielding (to a limited degree) internal comput-\ners from external threats. \n For the two-router configuration, the external net-\nwork connects to the external router, which connects to \nthe bastion host. The bastion host then connects to the \ninternal router, which also connects to the internal net-\nwork. The routers can provide limited filtering, whereas \nthe bastion host provides a variety of proxy services —\n for example, HTTP, SSH, IRC, and FTP. This provides \nsome level of security, since attackers are unaware of \nsome internal network details. The bastion host can be \nviewed as a part of a very small perimeter network. \n Compared to the DMZ, a two-router system provides \nless security. If the bastion host is compromised, the com-\nputers in the internal network are not immediately vulner-\nable, but it would only be a matter of time before they \nwere. Therefore the two-router design should be limited \n 8 J.R. Vacca and S. R. 
Ellis, Firewalls Jumpstart for Network and \nSystems Administrators , Elsevier, 2005. \n 9 J.R. Vacca and S. R. Ellis, Firewalls Jumpstart for Network and \nSystems Administrators , Elsevier, 2005. \n" }, { "page_number": 391, "text": "PART | II Managing Information Security\n358\nto separating internal subnetworks. If the internal router \nis a firewall, the design is considerably more secure. \n Dual-Homed Host \n A dual-homed host system consists of a single computer \nseparating the external network from internal comput-\ners. 10 Therefore the dual-homed computer needs at least \ntwo network interface cards (NICs). One NIC connects \nto the external network; the other connects to the inter-\nnal network — hence the term dual-homed . The internal \nconnection is generally a switch that connects the other \ninternal computers. \n The dual-homed computer is the location where all \ntraffic arriving and departing the internal network can \nbe processed. The dual-homed computer can perform \nvarious tasks such as packet filtering, payload inspec-\ntion, NAT, and proxy services. Given the simple design \nand low cost, this setup is popular for home networks. \nUnfortunately, the dual-homed approach introduces \na single point of failure. If the computer fails, then the \ninternal network is isolated from the external network. \nTherefore this approach is not appropriate for businesses \nthat rely on the Internet. \n Network Configuration Summary \n This section described various network configurations \nthat can be used to provide varying levels of security. \nThere are certainly variations, but this part of the chapter \nattempted to describe the most prevalent: \n ● Demilitarized zones (DMZs) . When correctly config-\nured, DMZs provide a reasonable level of security. \nServers that need to be available to the external \nnetwork are placed outside the internal network \nbut have a firewall between them and the external \nnetwork. \n ● Perimeter networks. A perimeter network consists of \na subnetwork of systems (again, those that need to \nbe available to the external network) located outside \nthe internal network. The perimeter subnetwork is \nseparated from the external network by a router that \ncan provide some basic packet filtering. \n ● Two-router configuration. The two-router \nconfiguration places a bastion host between the \ninternal and external networks. One router is placed \nbetween the internal network and bastion host, and \nthe other router is placed between the bastion host \nand the external network. The bastion host provides \nproxy services, which affords some security (but \nnot much). \n ● Dual-homed configuration. A dual-homed \nconfiguration has one computer that has at least \ntwo network connections — one connected to the \nexternal network and another to the internal network. \nAll traffic must transmit through the dual-homed \nsystem; thus is can act as a firewall, NAT, and/or \nIDS. Unfortunately, this system has a single point \nof failure. \n 11. FIREWALL INSTALLATION AND \nCONFIGURATION \n Before a firewall is actually deployed, it is important to \ndetermine the required services and realize the vulner-\nabilities that may exist in the computer system that is to \nbe secured. Determining the services requires a detailed \nunderstanding of how the computers in the network are \ninterconnected, both physically and from a service-ori-\nented perspective. This is commonly referred to as object \ndiscovery . 
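 A simple way to begin object discovery is to enumerate what each machine is already listening for and then verify that picture from another host. The commands below are a hedged illustration (the host name db1.example.com is an assumption):

    # On the server itself: list listening TCP/UDP sockets and the programs that own them
    netstat -tulpn
    # From another machine: probe the well-known port range to see what is actually reachable
    nmap -sT -p 1-1024 db1.example.com

 The two views often disagree, and the differences (services listening but unreachable, or reachable but unexpected) are precisely what the firewall policy must account for.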
\n For example, given a database server, which serv-\nices should the server provide? Which computers should \nbe allowed to connect? Restated, which ports should \nbe open and to whom? Often object discovery is diffi-\ncult since it is common that a server will be asked to do \nvarious tasks over time. Generally a multiservice server \nis cheaper (one server providing Web, email, and data-\nbase), but it is rarely more secure. For example, if a \nmultiservice server is compromised via one service, the \nother services are vulnerable to attack. In other words, \nthe rules in the firewall policy are usually established by \nthe list of available services and secure computers. \n Scanning for vulnerabilities is also helpful when \nyou’re installing a firewall. Several open-source tools are \navailable to detect system vulnerabilities, including net-\nstat, which shows open services. Why not simply patch \nthe vulnerability? Perhaps the patch is not available yet, \nor perhaps the application is deemed necessary but it is \nsimply insecure (FTP is an example). Network mappers \nsuch as Nessus are also valuable in showing what infor-\nmation about the internal network is available from the \noutside. Knowing the internal network layout is invalu-\nable in attacking a system, since must modern attacks \nare multistaged. This means that one type of system vul-\nnerability is typically leveraged to gain access elsewhere \nwithin the network. \n 10 J.R. Vacca and S. R. Ellis, Firewalls Jumpstart for Network and \nSystems Administrators , Elsevier, 2005. \n" }, { "page_number": 392, "text": "Chapter | 21 Firewalls\n359\n A simple and unfortunately common security risk is \na Web server that is connected to another internal server \nfor data. Assume that Network File System (NFS) is used \nto gain access to remote data. If the Web server is com-\npromised, which will probably occur at some time, then \nall the data inside the data server may be at risk (depend-\ning on how permissions have been set) and access to the \ndata could be the true objective of the attacker. Therefore, \nunderstanding the interconnection of internal machines \ncan help identify possible multistage attacks. \n Of course the process of determining services, access \nrights, and vulnerabilities is not a one-time occurrence. \nThis process should repeat over time as new comput-\ners, operating systems, users, and so on are introduced. \nFurthermore, firewall changes can cause disruption \nto legitimate users; these cases require tracing routes, \ndefining objects, and reading policies. Managing a fire-\nwall and its policy requires constant vigilance. \n 12. SUPPORTING OUTGOING SERVICES \nTHROUGH FIREWALL CONFIGURATION \n As described in the first section, a firewall and the policy \ngovern access to and from an internal network (the net-\nwork being administered). A firewall applies a policy to \narriving packets, then determines the type of access. The \npolicy can be represented as an ordered set of rules; again, \nassume that the first-match criterion is used. When a \npacket arrives, it is compared to the first rule to determine \nwhether it is a match. If it is, then the associated action \nis performed; otherwise the next rule is tested. Actions \ninclude accepting, denying, and logging the packet. \n For a simple packet filter, each rule in the policy will \ndescribe a certain range of packet headers that it will \nmatch. 
This range of packets is then defined by describ-\ning certain parts of the packet header in the rule. For the \nInternet (TCP/IP networks) there are five such parts that \ncan be described: source IP, source port, destination IP, \ndestination port, and protocol. \n Recall that the source IP is the address of the com-\nputer that originated the packet. The source port is the \nnumber associated with the application that originated \nthe packet. Given the IP address and port number, it \nis possible to determine the machine and application, \nwithin reason. The destination IP and port number \ndescribe the computer and the program that will receive \nthe packet. Therefore, given these four pieces of infor-\nmation, it is possible to control the access to and from a \ncertain computer and program. The fifth piece of infor-\nmation is the communication protocol, UDP or TCP. \n At this point it is important to also consider the direc-\ntion of traffic. When referring to a packet, did it come \nfrom the external network and is it destined for an inter-\nnal computer, or vice versa? If the packet is considered \ninbound, the source and destination addresses are in one \norder; outbound would reverse the order. Unfortunately, \nmany firewalls will consider any arriving packet as \ninbound, regardless of where it originated (external or \ninternal network), so the administrator must consider the \ndirection when designing the policy. For example, ipta-\nbles considers packets as locally or nonlocally generated. \nLocally generated packets are created at the computer \nrunning the firewall; all others are nonlocal, regardless \nof the source network. \n Many firewalls can go beyond the five tuples (TCP/\nIP packet header parts) described. It is not uncommon \nto have a rule check the Medium Access Control (MAC) \naddress or hardware address. This can be applied to filter-\nspoofed addresses. Filtering on the Type of Service \n(ToS) field is also possible to treat packets differently —\n for better service, for example. \n As previously described, maintaining the state of a \nconnection is important for filtering traffic. For exam-\nple, maintaining state allows the returning traffic to be \naccepted if the request was initiated from the internal \nnetwork. Note that in these simple cases we are only con-\nsidering two computers communicating — for example, an \ninternal workstation connecting to an external Web server. \n Forms of State \n The state of a connection can be divided into three main \ncategories: new, established, and related. The new state \nindicates that this is the first packet in a connection. The \nestablished state has observed traffic from both direc-\ntions, so the minimum requirement is that the source \ncomputer sends a packet and receives a packet in reply. \nThe new state will change to established once the reply \npacket is processed by the firewall. \n The third type of state is related , which is somewhat \ncomplicated. A connection is considered related if it is \nassociated with an established connection. Therefore an \nestablished connection may create a new connection, \nseparate from the original, which is considered related. \nThe common example of this process is the File Transfer \nProtocol (FTP), which is used to transmit data from a \nsource computer to a destination computer. The process \nbegins with one connection from source to destination \non port 21, the command connection. 
If there is data to \nbe transferred, a second connection is created on port 20 \nfor the data. Hence the data connection is related to the \n" }, { "page_number": 393, "text": "PART | II Managing Information Security\n360\ninitial control connection. To simplify the firewall pol-\nicy, it is possible to add a single rule to permit related \nconnections. \n In the previous example, the two computers commu-\nnicating remained the same, but new connections were \ncreated, which can be managed in a table. However, \nunderstanding related connections is problematic for \nmany new services. One example is the Common Object \nResource Broker Architecture (CORBA), which allows \nsoftware components to be executed on different com-\nputers. This communication model may initiate new \nconnections from different computers, similar to peer-\nto-peer networking. Therefore it is difficult to associate \nrelated connections. \n Payload Inspection \n Although firewalls originally only inspected the packet \nheader, content filtering is increasingly commonplace. \nIn this case the packet payload (also called contents \nor data ) is examined for certain patterns (analogous to \nsearching for certain words on a page). These patterns, \nor signatures, could be for inappropriate or illegal con-\ntent, spam email messages, or intrusion attempts. For \nexample, it is possible to search for certain URLs in the \npacket payload. \n The patterned searched for is often called a signa-\nture . If the pattern is found, the packet can be simply \ndropped, or the administrator may want to log the con-\nnection. In terms of intrusion signatures, this includes \nknown patterns that may cause a buffer overflow in a \nnetwork service. \n Content filtering can be used to provide differen-\ntiated services as well. For example if the firewall can \ndetect that a connection is used for multimedia, it may \nbe possible to provide more bandwidth or disconnect \nit, depending on the policy. Of course, content filtering \nassumes that the content is available (readable), which \nis not the case when encryption is used. For example, \nmany worms encrypt their communications to prevent \ncontent filtering at the firewall. \n Examining the packet payload normally requires \nsignificantly more processing time than normal header \ninspection. A signature may actually contain several pat-\nterns to match, specifying where they should occur rela-\ntive to the packet beginning and the distance between \npatterns in the signature. This is only a short list of \npotential signature characteristics. \n A signature can also span multiple packets — for exam-\nple, a 20-byte signature could occur over two 10-byte IP \nfragments. Recall that IP may fragment packets based on \nthe maximum transfer unit (MTU) of a link. Therefore \nthe system may have to reassemble fragments before the \nscanning can begin. This necessary reassembly will fur-\nther delay the transmission of data, which is problematic \nfor certain types of applications (for example, multime-\ndia). However, at this point, the discussion is more about \nintrusion detection systems (IDSs) than firewalls. \n Over the years several techniques have been devel-\noped to decrease the amount of time required for pay-\nload inspection. Faster searching algorithms, dedicated \nhardware, and parallel searching techniques have all \nshown promise in this regard. However, payload inspec-\ntion at high bandwidths with low latency often requires \nexpensive equipment. \n 13. 
SECURE EXTERNAL SERVICES PROVISIONING
 Often we need a server that will provide services that are widely available to the external network. A Web server is a simple example of providing a service (Web pages) to a potentially large set of users (both honest and dishonest). As a result the server will be subjected to malicious intrusion attempts during its deployment.
 Therefore systems that provide external services are often deployed on the edge or perimeter of the internal network. Given this location, it is important to maintain secure communications between such a server and the other servers it depends on. For example, assume that the Web server needs to access a database server for content (PHP and MySQL); the connection between these machines must be secure to ensure proper operation.
 A common solution for securing communications is a virtual private network (VPN), which uses encryption to tunnel through an insecure network and provide secrecy. Advanced firewalls can create VPNs to different destinations, including mobile users. The first and most widely used VPN protocol suite is Internet Protocol Security (IPsec), which consists of standards originally developed for IPv6 and adapted to IPv4.
 14. NETWORK FIREWALLS FOR VOICE AND VIDEO APPLICATIONS
 The next generation of network applications is expected to better leverage different forms of media. This is evident in the increased use of Voice over IP (VoIP) in place of traditional landline telephones. Teleconferencing is another application that is seeing a steady increase in use because it provides an easy method for collaborating with others.
" }, { "page_number": 394, "text": "Chapter | 21 Firewalls\n361
 Teleoperation is another example that is seeing recent growth. These applications allow operators to control equipment at another location over the network (for example, telemedicine). Of course these examples assume that the network can provide QoS guarantees, but that is a separate discussion.
 Generally speaking, these applications require special handling by network firewalls. In addition, they normally use more than one connection. For example, the audio, video, and control information of a multimedia application often use separate network connections.
 Multimedia applications also use multiple transport protocols. Control messages can be sent using TCP, which provides a reliable service between the sender and receiver. The media (voice and/or video) is typically sent using UDP. Often the Real-time Transport Protocol (RTP) is used, but this protocol is itself built on UDP. UDP is unreliable but faster than TCP, which is more important for multimedia applications.
 As a result, these connections must be carefully managed by the firewall to ensure the proper operation of the application. This includes maintaining state across multiple connections and ensuring that packets are filtered with minimal delay.
 Packet Filtering H.323
 There are a few multimedia standards for transmitting voice and video over the Internet. Session Initiation Protocol (SIP) and H.323 are two examples commonly found in the Internet. This section briefly describes H.323 to illustrate the support required by network firewalls.
 H.323 is the International Telecommunication Union (ITU) standard for videoconferencing. It is a high-level standard that uses other lower-level standards for the actual setup, transmission, control, and tear-down of a videoconference.
For example, G.711 is used for encod-\ning and decoding speech, and H.245 is used to negotiate \nthe connections. \n During H.323’s operation, one port will be used for \ncall setup using the static port 1720 (easy for firewalls). \nEach datastream will require one dynamically allocated \nTCP port for control and one dynamically allocated \nUDP port for data. As previously described, audio and \nvideo are transmitted separately. \n Therefore an H.323 session will generate at least \neight dynamic connections, which makes packet proces-\nsing at the firewall very difficult. How does a firewall \nknow which ports to open for an H.323 session? This \nis referred as a lack of symmetry between the computer \nlocated in the internal network and the computer located \nin the external network. \n A stateful firewall can inspect the packet payloads \nand determine the dynamic connection port numbers. \nThis information (negotiated port numbers) is placed \nin higher-level protocols, which are difficult to quickly \nparse and can be vendor specific. \n In 2005 the ITU ratified the H.460.17/.18/.19 stand-\nards, which describe how to allow H.323 to traverse a \nfirewall (or a NAT router/firewall, which essentially has \nthe same problem). H.460.17 and H.460.18 deal with \nsignaling, whereas H.460.19 concerns media. The H.460 \nstandards require the deployment of stateful firewalls \nand updated H.323 equipment. This is a solution, but it \nremains a complex problem. \n 15. FIREWALLS AND IMPORTANT \nADMINISTRATIVE SERVICE PROTOCOLS \n There are a large number of administrative network pro-\ntocols that are used to manage computer systems. These \nprotocols are typically not complex to control at the fire-\nwall since dynamic connections are not used and little \nstate information is necessary. The administrator should \nbe aware of these services when designing the security \npolicy, since many can be leveraged for attacks. This \nsection reviews some of these important protocols. \n Routing Protocols \n Routing protocols are used to distribute routing infor-\nmation between routing devices. This information will \nchange over time based on network conditions; therefore \nthis information is critical to ensure that packets will get \nto their destinations. Of course, attackers can also use \nrouting protocols for attacks. For example, maliciously \nsetting routes such that a certain network is not reach-\nable can be considered a denial-of-service (DoS) attack. \nSecuring routers and routing protocols is a continuing \narea of research. The firewall can help prevent these \nattacks (typically by not forwarding such information). \n In considering routing protocols, it is important to \nfirst determine which devices in the internal network \nwill need to receive and submit routing information. \nMore than likely only devices that are directly connected \nto the external network will need to receive and respond \nto external routing changes — for example, the gateway \nrouter(s) for the internal network. This is primarily due \nto the hierarchical nature of routing tables, which does \nnot require an external host to know the routing specifics \n" }, { "page_number": 395, "text": "PART | II Managing Information Security\n362\nof a distant subnetwork. As a result, there is typically no \nneed to forward routing information from the external \nnetwork into the internal network, and vice versa. \n Routing Information Protocol (RIP) is the oldest rout-\ning protocol for the Internet. 
The two versions of RIP \ndiffer primarily by the inclusion of security measures. \nRIPv1 is the original protocol, and RIPv2 is the same but \nsupports classless addresses and includes some security. \nDevices that use RIP will periodically (approximately \nevery 30 seconds) broadcast routing information to neigh-\nboring hosts. The information sent by a host describes the \ndevices they are directly connected to and the cost. RIP is \nnot very scalable so is primarily used for small networks. \nRIP uses UDP to broadcast messages; port 520 is used by \nservers, whereas clients use a port above 1023. \n Another routing protocol is Open Short Path First \n(OSPF), which was developed after RIP. As such OSPF \nis considered an improvement because it converges faster \nand it incorporates authentication. Interestingly, OSPF is \nnot built on the transport layer but instead talks directly \nto IP. It is considered protocol 89 by the IP layer. OSPF \nmessages are broadcast using two special multicast \nIP addresses: 224.0.0.5 (all SPF/link state routers) and \n224.0.0.6 (all designated routers). The use of multicast \naddresses and setting the packet Time to Live (TTL) to \none (which is done by OSPF) typically means a firewall \nwill not pass this routing information. \n Internet Control Message Protocol \n Internet Control Message Protocol (ICMP) is used to \nsend control messages to network devices and hosts. \nRouters and other network devices monitor the opera-\ntion of the network. When an error occurs, these devices \ncan send a message using ICMP. Messages that can be \nsent include destination unreachable, time exceeded, and \necho request. \n Although ICMP was intended to help manage the \nnetwork, unfortunately attackers can use it as well. \nSeveral attacks are based on ICMP messages since they \nwere originally allowed through the firewall. For exam-\nple, simply forging a “ destination unreachable ” ICMP \nmessage can cause problems. \n The program ping is one program that uses ICMP to \ndetermine whether a system is connected to the Internet \n(it uses the ICMP messages Echo Request and Echo \nReply). However, this program can also be used for a \nsmurf attack, which causes a large number of unsolicited \nping replies to be sent toward one computer. As a result \nmost firewall administrators do not allow ping requests \nor replies across the firewall. \n Another program that uses ICMP is traceroute, which \ndetermines the path (list of routers) between a source and \ndestination. Finding the path is done by sending multiple \npackets, each with an increasing TTL number (starting \nat one). When a router encounters a packet that it cannot \nforward due to the TTL, an ICMP message is sent back \nto the source. This reveals the router on the path (assum-\ning that the path remains the same during the process). \nMost administrators do not want to provide this infor-\nmation, since it can show addresses assigned to hosts, \nwhich is useful to attackers. As a result firewalls are \noften configured to only allow traceroute requests orig-\ninating from the internal network or limiting replies to \ntraceroute originating from known external computers. \n ICMP is built on the IP layer, like TCP and UDP. A \nfirewall can filter these messages based on the message \ncode field, which is a number that corresponds to each \ntype of error message. 
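 With iptables, for example, ICMP filtering is expressed with the --icmp-type match. The rules below sketch the stance described in this section — refuse probe traffic from outside while keeping the error messages that normal operation depends on; the external interface name eth0 is an assumption:

    # Refuse unsolicited echo requests (ping) arriving from the external network
    iptables -A INPUT -i eth0 -p icmp --icmp-type echo-request -j DROP
    # Keep the messages that path MTU discovery and diagnostics rely on
    iptables -A INPUT -i eth0 -p icmp --icmp-type fragmentation-needed -j ACCEPT
    iptables -A INPUT -i eth0 -p icmp --icmp-type time-exceeded -j ACCEPT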
Although this section described \nthe problems with allowing ICMP messages through \nthe firewall, an administrator may not want to block all \nICMP packets. For example, Maximum Transfer Unit \n(MTU) messages are important for the transmission of \npackets and probably should be allowed. \n Network Time Protocol \n Network Time Protocol (NTP) is a protocol that allows \nthe synchronization of system clocks (from desktops to \nservers). Having synchronized clocks is not only con-\nvenient but required for many distributed applications. \nTherefore the firewall policy must allow the NTP service \nif the time comes from an external server. \n NTP is a built-on UDP, where port 123 is used for \nNTP server communication and NTP clients use port \n1023 (for example, a desktop). Unfortunately, like many \nlegacy protocols, NTP suffers from security issues. It is \npossible to spoof NTP packets, causing clocks to set to \nvarious times (an issue for certain services that run peri-\nodically). There are several cases of NTP misuse and \nabuse where servers are the victim of DoS attacks. \n As a result, if clock synchronization is needed, it \nmay be better to provide an internal NTP server (mas-\nter clock) that synchronizes the remaining clocks in the \ninternal network. If synchronization is needed by an NTP \nserver in the Internet, consider using a bastion host. \n Central Log File Management \n Almost every operating system maintains a system log \nwhere important information about a system’s state is \n" }, { "page_number": 396, "text": "Chapter | 21 Firewalls\n363\nreported. This log is a valuable resource for managing \nsystem resources and investigating security issues. \n Given that almost every system (especially a server) \ngenerates log messages, having this information at a cen-\ntral location is beneficial. The protocol syslog provides \nthis functionality, whereby messages can be forwarded \nto a syslog server, where they are stored. An attacker \nwill commonly attempt to flood the syslog server with \nfake messages in an effort to cover their steps or to cause \nthe server disk to fill, causing syslog to stop. \n Syslog runs on UDP, where syslog servers listen to \nUDP port 514 and clients (sending log messages) use a \nport above 1023. Note that a syslog server will not send \na message back to the client, but the syslog log server \ncan communicate, normally using port 514. \n Generally allowing syslog communication between \nthe external and internal network is not needed or \nadvised. Syslog communications should be limited to \ninternal computers and servers; otherwise a VPN should \nbe used to prevent abuse from others and to keep the \ninformation in the messages private. \n Dynamic Host Configuration Protocol \n The Dynamic Host Configuration Protocol (DHCP) pro-\nvides computers essential information when connecting \nto an IP network. This is necessary because a compu-\nter (for example, a mobile laptop) does not have an IP \naddress to use. \n The computer needing an IP address will first send \na broadcast request for an IP address. A DHCP server \nwill reply with the IP address, netmask, and gateway \nrouter information the computer should use. The address \nprovided comes from a pool of available IP addresses, \nwhich is managed by the DHCP server. Therefore the \nDHCP provides a method of sharing IP addresses among \na group of hosts that will change over time. 
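 For firewall purposes the important details are the ports involved: clients send requests from UDP port 68 to UDP port 67, and the server answers in the opposite direction. A hedged iptables sketch for a firewall that also runs the site's DHCP server (the internal interface name eth1 is an assumption) might confine the exchange to the internal side:

    # Requests broadcast by internal clients (UDP 68 -> 67)
    iptables -A INPUT -i eth1 -p udp --sport 68 --dport 67 -j ACCEPT
    # Replies from the local DHCP server back to those clients (UDP 67 -> 68)
    iptables -A OUTPUT -o eth1 -p udp --sport 67 --dport 68 -j ACCEPT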
\n The actual exchange of information is more elaborate \nthan described here, but this is enough information for \nour discussion. In general the DHCP server providing \n addresses will be located in the internal network. As a \nresult, this information should not be transmitted across \nthe firewall that separates the internal and external net-\nworks. Why would you want to provide IP addresses to \ncomputers in the external network? \n 16. INTERNAL IP SERVICES PROTECTION \n Domain Name Service (DNS) provides the transla-\ntion between the hostname and the IP address, which is \n necessary to send packets in the Internet. Given the \nnumber of hostnames in the Internet, DNS is built on \na hierarchical structure. The local DNS server cannot \nstore all the possible hostnames and IP addresses, so \nthis server will need to occasionally request a translation \nfrom another DNS server located in the external net-\nwork. As a result it is important to configure the firewall \nto permit this type of lookup. \n In many cases the service provider provides the \naddress of a DNS server that can be used to translate \nexternal hostnames. There is no need to manage a local \nDNS server in this case. However, it is possible to man-\nage a local DNS, which allows the internal network to \nuse local hostnames (these can be published to the exter-\nnal network). Some advanced firewalls can provide \nDNS, which can help hide internal computer hostnames \nand IP addresses. As a result, external computers have a \nlimited view of the internal network. \n Another important service that can be provided by the \nfirewall is Network Address Translation (NAT). NAT is a \npopular method for sharing a smaller set of IP addresses \nacross a larger number of computers. Recall that every \npacket has a source IP, source port, destination IP, and \ndestination port. Assume that a small network only has \none external IP address but has multiple computers that \nneed to access the Internet. Note the external address is a \nroutable address, whereas the internal computers would \nuse a private address (addresses have no meaning out-\nside the internal network). NAT will allow the internal \nmachines to share the single external IP address. 11 \n When a packet arrives from a computer in the internal \nnetwork, its source address is replaced with the external \naddress, as shown in Figure 21.4 . The packet is sent to \nthe destination computer, which returns a packet. The \nreturn packet has the external address (which is routable), \nso it is forwarded to the firewall. The firewall can then \nreplace the external destination address with the correct \ninternal destination address. What if multiple internal \nmachines send a packet to a server in the external net-\nwork? The firewall will replace the source address with \nthe external address, but how will the firewall differenti-\nate the return packets? NAT will also change the source \nport number, so each connection can be separated. \n The NAT process described in the preceding para-\ngraph is source NAT 12 , which works for packets initiated \nin the internal network. There is also destination NAT , \n 11 B. Hickman, D. Newman, S. Tadjudin, and T. Martin, Benchmarking \nMethodology for Firewall Performance , IETF RFC 3511, 2003. \n 12 K. Egevang and P. Francis, The IP Network Address Translator \n(NAT) , IETF RFC 1631, 1994. 
\n" }, { "page_number": 397, "text": "PART | II Managing Information Security\n364\nwhich works in a similar fashion for packets initiated in \nthe external network. In this case the firewall needs to \nknow which machine to forward packets to in the inter-\nnal network. \n 17. FIREWALL REMOTE ACCESS \nCONFIGURATION \n As described in the first section, firewalls are deployed \nto help maintain the privacy of data and authenticate \nthe source. Privacy can be provided using encryption, \nfor which there are several possible algorithms to use. \nThese algorithms can be categorized as either secret key \nor public key. Secret key techniques use the same key \nto encrypt and decrypt information. Examples include \nIDEA, RC4, Twofish, and AES. Though secret key algo-\nrithms are fast, they require the key to be distributed \nbetween the two parties in advance, which is not trivial. \n Public key encryption uses two keys — one to encrypt \n(the public key) and another to decrypt (the private key). \nThe public key can be freely available for others to use \nto encrypt messages for the owner of the private key, \nsince only the private key can decrypt a message. Key \nmanagement sounds easy, but secure key distribution is \ndifficult. How do you know the public key obtained is \nthe correct one? Perhaps it is a man-in-middle attack. \nThe Public Key Infrastructure (PKI), one method of dis-\ntributing public keys, depends on a system of trusted key \nservers. \n Authentication is another important component of \nsecurity; it attempts to confirm a person is who he or she \nclaims to be. This can be done based on what the user \nhas (ID card or security token) or by something a per-\nson knows (for example, a password). A very familiar \nmethod of authentication is requesting a username and \npassword, which is common for VPNs. \n Secrecy and authentication are also important when \nan entity manages multiple separate networks. In this \ncase the administrator would like to interconnect the \nnetworks but must do so using an insecure network (for \nexample, the Internet). \n Tunneling from one firewall to another firewall can \ncreate a secure interconnection. This can be done using \napplication proxies or VPN. Application firewalls imple-\nment a proxy for each application supported. A user first \ncontacts the firewall and authenticates before connect-\ning to the server. The firewall then connects to the des-\ntination firewall, which then connects to the destination \nserver. Three connections are thus involved. \n An alternative is to construct a VPN from one fire-\nwall to another. Now a secure connection exists between \nthe two networks. However, note that the VPN could \nalso be used as an easy connection for an attacker who \nhas successfully broken into one of the networks. \n It is also important to note that tunneling can be \nused to transport packets over a network with a different \ntransport protocol — for example, carrying TCP/IP traffic \nover Frame Relay. \nInternet\n(Public Network)\nPrivate Network\n10.1.1.0/24\n10.1.1.2\n10.1.1.3\nData\ns \u0003 178 . 15 . 140 . 2 : 2020\nd \u0003 152 . 17 . 140 . 2 : 80\nData\ns \u0003 10 . 1 . 1 . 2 : 2020\nd \u0003 152 . 17 . 140 . 2 : 80\nData\nB \u0003 152 . 17 . 140 . 2 : 80\nd \u0003 10 . 1 . 1. 2 : 2020\nData\nB \u0003 152 . 17 . 140 . 2 : 80\nd \u0003 178 . 15 . 140. 2 : 2020\n178.15.140.2\n1\n4\n3\n2\n FIGURE 21.4 Example source Network Address Translation (NAT). 
The connection originates from a computer in the internal network and is \nsent to a computer in the external network. Note the address and port number exchange performed at the firewall. \n" }, { "page_number": 398, "text": "Chapter | 21 Firewalls\n365\n 18. LOAD BALANCING AND FIREWALL \nARRAYS \n As network speeds continue to increase, firewalls \nmust continue to process packets with minimal delay \n(latency). Unfortunately, firewalls that can operate at \nthese extreme data rates are also typically very expensive \nand cannot easily be upgraded to meet future demands. \nLoad-balancing firewalls can provide an answer to this \nimportant problem. 13 \n Load balancing (or parallelization ) provides a scal-\nable firewall solution for high-bandwidth networks \nand/or low-latency applications. This approach consists \nof an array of firewalls that process arriving packets in \nparallel. A simple system would consist of two load bal-\nancers connected to an array of firewalls, where each \nfirewall is identically configured (same firewall policy), \nas depicted in Figure 21.5 . One balancer connects to the \nInternet, then to the array (arriving traffic); the other bal-\nancer connects to the internal network, then to the array \n(departing traffic). Of course, one load balancer can be \nused instead of two separate load balancers. \n When a packet arrives, it is sent to a firewall that cur-\nrently has the lightest load (fewest number of packets \nawaiting processing), hence the term load balancing . As \na result, the amount of traffic each firewall must process \nis roughly 1/ n of the total traffic, where n is the number \nof firewalls in the array. \n As with any new firewall implementation, the integ-\nrity of the policy must be maintained. This means that \ngiven a policy, a traditional single firewall and a load-\nbalancing firewall will accept the same packets and deny \nthe same packets. For static rules, integrity is provided, \nsince the policy is duplicated at every firewall; therefore \nthe set of accepted and denied packets at each firewall is \nalso the same. As will be discussed in the next sections, \nmaintaining integrity for stateful rules is not easy. \n Load Balancing in Real Life \n A simple supermarket analogy can help describe the sys-\ntem and the potential performance increase. Consider a \nmarket consisting of an array of n cashiers. As with the \nfirewall system, each cashier in the market is identical and \nperforms the same duties. When a customer wants to pay \nfor her items, she is directed to the cashier with the short-\nest line. The load balancer is the entity that would direct \nthe customer, but in reality such a person rarely exists. \nA customer must guess which line is actually the best to \njoin, which as we all know is not simple to determine. \n Obviously, as more cashiers are added, the market \ncan check out more customers. This is akin to increasing \nthe bandwidth of the firewall system. Another important \nadvantage of a load-balancing system is robustness. Even \nif a cashier takes a break (or a firewall in the array fails), \nthe system will still function properly, albeit more slowly. \n How to Balance the Load \n An important problem with load-balancing firewalls is \nhow to quickly balance the lines (queues) of packets. We \nare all aware that customers require different amounts of \ntime to check out of a market. This is dependent on the \nnumber of items (which is observable) and their ability \nto pay (not easily observable). 
Similarly, it is difficult \nto determine how much time a packet will require at a \nfirewall (for software-based systems). It will depend on \nthe number of rules, organization of the rules, and which \nrule the packet will match. \n A more important problem with load balancing is \nhow to maintain state. As described in the preceding \nsections, some firewall rules will maintain the state of a \nconnection. For example, a firewall rule may allow traf-\nfic arriving from the Internet only if an internal computer \nrequested it. If this is the case, a new temporary rule will \nbe generated to handle traffic arriving from the Internet. \nIn a parallel system, where should this rule reside? Which \nfirewall? The objective is to ensure that the integrity of \nthe policy is maintained in the load-balancing system. \n To use the market analogy again (and this will be a \nstretch), assume that a mother leaves her credit card with \none cashier, then sends her children into the market to \nbuy certain items. When the children are ready to pay for \n 13 Errin W. Fulp and Ryan J. Farley, “A function-parallel architecture \nfor high-speed fi rewalls,” In Proceedings of the IEEE International \nConference on Communications , 2006. \nFirewall\narray\nInternet\n(External network)\nInternal network\nLoad\nbalancer\nLoad\nbalancer\n FIGURE 21.5 Load-balancing firewall array consisting of a three-fire-\nwall array and a load balancer. \n" }, { "page_number": 399, "text": "PART | II Managing Information Security\n366\ntheir items they must go to the cashier that has the credit \ncard. If the children don’t know which cashier has the \ncredit card the load balancer must not only balance lines \nbut also check a list to make certain the children join the \ncorrect line. \n Maintaining state increases the amount of time the \nload balancer will spend per packet, which increases the \nlatency of the system. An alternative solution is it to rep-\nlicate the stateful rules across every firewall (or the credit \ncard across every cashier). This requires an interconnec-\ntion and state-aware program per firewall. Maintaining \nnetwork connections is also difficult for applications that \ndynamically create new connections. Examples include \nFTP (one connection for control, the other for data), \nmultimedia, and CORBA. Again, a new rule must be \nadded to handle the new traffic. \n Advantages and Disadvantages of Load \nBalancing \n Given these issues, firewall load balancing is still done. \nThere are several advantages and disadvantages to this \napproach. Disadvantages of load balancing include: \n ● Load balancing is not trivial. The load balancer \nseeks to ensure that the lines of packets across the \narray of firewalls remains equal. However, this \nassumes that the balancer can predict how much time \na packet will require. \n ● Maintaining state is difficult. All packets that belong \nto a session will need to traverse the same firewall, or \nstate information must be shared across the firewalls. \nEither solution is difficult to manage. \n There are several advantages to load balancing: \n ● Scalable solution for higher throughput. If higher \nthroughput is needed, adding more firewalls is sim-\nple and cost effective. \n ● Robustness. If a firewall fails in the array, the \nintegrity of the system remains. The only loss is \nthroughput. \n ● Easy policy management. If the rules change, simply \nupdate the policy at each firewall. \n The load balancer can be implemented in software or \nhardware. 
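 As one concrete illustration of the software case, Linux can distribute connections with nothing more than iptables. This is a rough, hedged stand-in rather than the transparent array of Figure 21.5 (the rules rewrite destination addresses, and the firewall addresses 10.0.0.1 through 10.0.0.3 are assumptions), but it shows the two behaviors that matter: spreading new connections and keeping every packet of an established flow on the same path.

    # The nat table evaluates only the first packet of each connection, so only new
    # connections are balanced; connection tracking then keeps the rest of the flow
    # on whichever firewall was chosen.
    iptables -t nat -A PREROUTING -m statistic --mode nth --every 3 --packet 0 -j DNAT --to-destination 10.0.0.1
    iptables -t nat -A PREROUTING -m statistic --mode nth --every 2 --packet 0 -j DNAT --to-destination 10.0.0.2
    iptables -t nat -A PREROUTING -j DNAT --to-destination 10.0.0.3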
Hardware load balancers provide better performance, but they also come at a much higher price.
 19. HIGHLY AVAILABLE FIREWALLS
 As previously discussed, a network firewall is generally considered easier to manage than multiple host-based firewalls. The administrator manages only one firewall and one policy, but this design also introduces a single point of failure. If the network firewall fails, the entire internal network is isolated from the Internet. As a result, there is a real need for a highly available, or robust, firewall system.
 The load-balancing system described in the previous section provides a greater degree of robustness. If a firewall in the array fails, the system is still functional; however, the capacity of the system is reduced, which is considered acceptable under the circumstances. Unfortunately, the design still has a single point of failure — the load distributor. If the load distributor fails, the entire system fails.
 A simple solution to this problem replicates the load distributor. The incoming connection is duplicated to both load distributors, and the distributors are then connected to the firewalls in the array. The distributors are interconnected via a lifeline to detect failure and possibly share information.
 Load Balancer Operation
 The two load balancers described in the previous section can operate in one of two modes: active-backup or active-active. In active-backup mode, one balancer operates as normal, distributing packets to minimize delays and maintaining state information. The second distributor is in backup mode; it monitors the status of the active load balancer and duplicates any necessary state information. Upon failure of the active balancer, the backup device can quickly take over operation.
 In contrast, active-active mode operates both load balancers in parallel. When a packet arrives at the system, it is processed by one of the load balancers, which forwards the packet to a firewall in the array. This may seem to require a load balancer for the load balancers, but that is not necessary: under active-active mode, the load balancers use the lifeline to synchronize and determine which packets each will process. Although active-active mode can increase performance by using both load balancers simultaneously, it is more difficult to implement and more complex to manage.
 Interconnection of Load Balancers and Firewalls
 In our simple example, the additional load balancer requires double the number of ports per firewall (one per load balancer). This design provides greater robustness but may also be cost prohibitive.
 An alternative solution uses active-active mode and divides the array into two equal groups. One group is then connected to one load balancer and the other group
" }, { "page_number": 400, "text": "Chapter | 21 Firewalls\n367
is connected to the other. For example, consider an array of six firewalls. Three firewalls would be connected to one load balancer and the other three firewalls connected to the second load balancer. Although this design only requires one port per firewall (on one side), if a load balancer fails, half the firewalls are nonoperational.
 20. FIREWALL MANAGEMENT
 Once a firewall has been deployed and a policy created, it is important to determine whether it is providing the desired security.
Auditing is the process of verifying the firewall \nand policy and consists of two steps. First, the adminis-\ntrator should determine whether the firewall is secure. If \nan attacker can exploit the firewall, the attacker has a sig-\nnificant advantage. Consider the information that can be \ngained just from knowing the firewall policy. \n The firewall should be in a secure location and have \nthe latest security patches (recall that many firewalls are \nimplemented in software). Also ensure that the firewall \nonly provides the necessary services, such as SSH, if \nremote access to the firewall is needed. Exploiting a fire-\nwall operating system or provided services is the most \ncommon method for breaking into a firewall. Therefore \nthe services and access should be tightly controlled. User \nauthentication with good passwords and secure connec-\ntions should always be used. \n Once the firewall has been secured, the administra-\ntor should review the policy and verify that it provides \nthe security desired. Does it block the illegitimate traffic \nand permit legitimate traffic? This is not a trivial task, \ngiven the first-match criterion and the number of rules \nin the policy. It is easy to create firewall policies with \nanomalies, such as shadowing (a subsequent rule that is \nnever matched because of an earlier rule). Some soft-\nware packages are available to assist this process, but in \ngeneral it is a difficult problem. \n An administrator should periodically audit the fire-\nwall rules and test the policy to verify that the system \nperforms as expected. In addition, the system should \nundergo penetration testing to verify correct implemen-\ntation. This includes seeded and blind penetration test-\ning. Seeded testing includes detailed information about \nthe network and configuration, so target systems and \nservices can be tested. Blind testing is done without any \nknowledge of the system, so it is more complete but also \nmore time consuming. \n Keeping backups of configurations and policies \nshould be done in case of hardware failure or an intru-\nsion. Logging at the firewall should also be performed, \nwhich can help measure performance. In addition, logs \ncan show connections over time, which is useful for \nforensics and verifying whether the security policy is \nsufficient. \n 21. CONCLUSION \n Network firewalls are a key component of providing \na secure environment. These systems are responsible \nfor controlling access between two networks, which is \ndone by applying a security policy to arriving packets. \nThe policy describes which packets should be accepted \nand which should be dropped. The firewall inspects the \npacket header and/or the payload (data portion). \n There are several different types of firewalls, each \nbriefly described in this chapter. Firewalls can be catego-\nrized based on what they inspect (packet filter, stateful, \nor application), their implementation (hardware or soft-\nware), or their location (host or network). Combinations \nof the categories are possible, and each type has specific \nadvantages and disadvantages. \n Placement of the firewall with respect to servers and \ninternal computers is key to the way these systems will \nbe protected. Often servers that are externally available, \nsuch as Web servers, will be located away from other \ninternal computers. This is often accomplished by plac-\ning these servers in a demilitarized zone (DMZ). 
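To make this concrete, the fragment below encodes, purely as illustration, a deliberately tiny first-match rule set of the kind a DMZ design implies: the outside world may reach only the Web server in the DMZ, the DMZ host may reach only the internal database port, and anything not explicitly permitted is dropped. The addresses and ports are invented and do not reflect any particular firewall product.

from ipaddress import ip_address, ip_network

# Hypothetical first-match policy for a simple DMZ design.
# Fields: (source network, destination network, protocol, destination port, action)
RULES = [
    ("0.0.0.0/0",      "203.0.113.10/32", "tcp", 443,  "accept"),  # Internet -> DMZ web server
    ("203.0.113.0/28", "10.0.5.20/32",    "tcp", 5432, "accept"),  # DMZ -> internal database only
    ("10.0.0.0/8",     "0.0.0.0/0",       "tcp", None, "accept"),  # internal hosts may go out
    ("0.0.0.0/0",      "0.0.0.0/0",       None,  None, "drop"),    # default deny
]

def decide(src: str, dst: str, proto: str, dport: int) -> str:
    # First-match semantics: the first rule whose fields all match wins.
    for src_net, dst_net, r_proto, r_port, action in RULES:
        if ip_address(src) not in ip_network(src_net):
            continue
        if ip_address(dst) not in ip_network(dst_net):
            continue
        if r_proto is not None and r_proto != proto:
            continue
        if r_port is not None and r_port != dport:
            continue
        return action
    return "drop"

print(decide("198.51.100.7", "203.0.113.10", "tcp", 443))   # accept: Internet to DMZ web server
print(decide("203.0.113.10", "10.0.5.20",    "tcp", 5432))  # accept: DMZ to internal database
print(decide("203.0.113.10", "10.0.5.21",    "tcp", 22))    # drop: DMZ cannot roam the internal network

Evaluation stops at the first matching rule, which is the same first-match behavior that makes rule ordering, and anomalies such as shadowing, matter in practice.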
A dif-\nferent security policy is applied to these computers so \nthe access between computers in the DMZ and the inter-\nnal network is limited. \n Improving the performance of the firewall can be \nachieved by minimizing the rules in the policy (prima-\nrily for software firewalls). Moving more popular rules \nnear the beginning of the policy can also reduce the \nnumber of rules comparisons that are required. However, \nthe order of certain rules must be maintained (any rules \nthat can match the same packet). \n Parallel firewalls can provide greater performance \nimprovements. These systems consist of a load balancer \nand an array of firewalls, where all the firewalls in the \narray are identical. When a packet arrives at the system, \nit is sent to one of the firewalls in the array. The load bal-\nancer maintains short packet queues, which can provide \ngreater system bandwidth and possibly a lower latency. \n Regardless of the firewall implementation, place-\nment, or design, deployment requires constant vigilance. \nDeveloping the appropriate policy (set of rules) requires \na detailed understanding of the network topology and the \nnecessary services. If either of these items change (and \nthey certainly will), that will require updating the policy. \nFinally, it is important to remember that a firewall is not \na complete security solution but is a key part of a secu-\nrity solution. \n" }, { "page_number": 401, "text": "This page intentionally left blank\n" }, { "page_number": 402, "text": "369\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n Penetration Testing \n Sanjay Bavisi \n EC-Council \n Chapter 22 \n Last year I walked into a restaurant in Rochester, \nNew York, with a business partner; I was wearing an \nEC-Council official polo shirt. The back of the shirt \nwas embroidered with the words “ Licensed Penetration \nTester. ” On reading those words on my shirt, a group of \nyoung executives seated behind me started an intense \ndialogue among themselves. \n They were obviously not amused at my “ behavior, ” \nsince that was a restaurant for decent people! On my way \nout, I walked up to them and asked if they were amazed \nat what that statement meant. They replied “ Absolutely! ” \nWhen I explained to them the meaning of a Licensed \nPenetration Tester, they gave a loud laugh and apolo-\ngized to me. They admitted that they had thought I was \na pervert. \n Each time I am at an airport, I get some stares when \nI put on that shirt. So the question is, what is penetration \ntesting? \n In this chapter, we’ll talk about penetration testing \nand what it is (and isn’t!), how it differs from an actual \n “ hacker attack, ” some of the ways penetration tests are \nconducted, how they’re controlled, and what organiza-\ntions might look for when they’re choosing a company \nto conduct a penetration test for them. \n Because this is a chapter and not an entire book, \nthere are a lot of things that I just don’t have the space to \ntalk about. What you’re about to read is, quite literally, \njust the tip of the iceberg when it comes to penetration \ntesting. Keep that in mind when you think to yourself: \n “ What about . . .? ” The answer to your question (what-\never it might be) is probably a part of our licensed pen-\netration tester certification course! \n 1. WHAT IS PENETRATION TESTING? \n Penetration testing is the exploitation of vulnerabilities \npresent in an organization’s network. 
It helps determine \nwhich vulnerabilities are exploitable and the degree of \ninformation exposure or network control that the organi-\nzation could expect an attacker to achieve after success-\nfully exploiting a vulnerability. No penetration test is \nor ever can be “ just like a hacker would do it, ” due to \nnecessary limitations placed on penetration tests con-\nducted by “ white hats. ” Hackers don’t have to follow the \nsame rules as the “ good guys ” and they could care less \nwhether your systems crash during one of their “ tests. ” \nWe’ll talk more about this later. Right now, before we \ncan talk any more about penetration testing, we need to \ntalk about various types of vulnerabilities and how they \nmight be discovered. \n Before we can exploit a vulnerability in a penetra-\ntion test, we have to discover what vulnerabilities exist \nwithin (and outside of) the organization. A vulnerabil-\nity is a potential weakness in an organization’s security. \nI use the term “ potential ” because not all vulnerabili-\nties are exploitable or worth exploiting. A flaw may \nexist and may even be documented, but perhaps no \none has figured out (yet) how to exploit it. Some vul-\nnerabilities, although exploitable, might not yield \nenough information in return for the time or resources \nnecessary to exploit them. Why break into a bank and \nsteal only a dollar? That doesn’t make much sense, \ndoes it? \n Vulnerabilities can be thought of in two broad cat-\negories: logical and physical. We normally think of \nlogical vulnerabilities as those associated with the organ-\nization’s computers, infrastructure devices, software, or \napplications. Physical vulnerabilities, on the other hand, \nare normally thought of as those having to do with either \nthe actual physical security of the organization (such as \na door that doesn’t always lock properly), the sensitive \ninformation that “ accidentally ” ends up in the dumpster, \nor the vulnerability of the organization’s employees to \nsocial engineering (a vendor asking to use a computer to \nsend a “ quick email ” to the boss). \n" }, { "page_number": 403, "text": "PART | II Managing Information Security\n370\n Logical vulnerabilities can be discovered using \nany number of manual or automated tools and even by \nbrowsing the Internet. For those of you who are familiar \nwith Johnny Long’s Google Hacking books: “ Passwords , \nfor the love of God!!! Google found passwords! ” The \ndiscovery of logical vulnerabilities is usually called \n security scanning, vulnerability scanning , or just scan-\nning . Unfortunately, there are a number of “ security con-\nsultants ” who run a scan, put a fancy report cover on the \noutput of the tool, and pass off these scans as a penetra-\ntion test. \n Physical vulnerabilities can be discovered as part \nof a physical security inspection, a “ midnight raid ” on \nthe organization’s dumpsters, getting information from \nemployees, or via unaccompanied access to a usually \nnonpublic area (I really need to use the bathroom!). \n Vulnerabilities might also exist due to a lack of com-\npany policies or procedures or an employee’s failure to \nfollow the policy or procedure. Regardless of the cause \nof the vulnerability, it might have the potential to com-\npromise the organization’s security. So, of all the vul-\nnerabilities that have been discovered, how do we know \nwhich ones pose the greatest danger to the organization’s \nnetwork? We test them! 
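Even that testing starts from something mundane: knowing which services are listening in the first place. Stripped to its essentials, the scanning mentioned above looks like the sketch below, a toy TCP connect scan against an invented address. It is shown only to illustrate the idea; real engagements use purpose-built scanners and, always, written permission from the system owner.

import socket

TARGET = "192.0.2.50"          # hypothetical, authorized target
PORTS = [21, 22, 25, 80, 110, 143, 443, 445, 3306, 3389, 8080]

def is_open(host: str, port: int, timeout: float = 1.5) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        try:
            return sock.connect_ex((host, port)) == 0   # 0 means the connection succeeded
        except OSError:
            return False

open_ports = [p for p in PORTS if is_open(TARGET, p)]
print("listening services worth a closer look:", open_ports)

The open ports are nothing more than the starting list for vulnerability analysis and, later, for deciding what is actually worth exploiting.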
We test them to see which ones \nwe can exploit and exactly what could happen if a “ real ” \nattacker exploited that vulnerability. \n Because few organizations that I know of have \nenough money, time, or resources to eliminate every vul-\nnerability discovered, they have to prioritize their efforts; \nthis is one of the best reasons for an organization to con-\nduct a penetration test. At the conclusion of the pen-\netration test, they will know which vulnerabilities can \nbe exploited and what can happen if they are exploited. \nThey can then plan to correct the vulnerabilities based \non the amount of critical information exposed or net-\nwork control gained by exploiting the vulnerability. In \nother words, a penetration test helps organizations strike \na balance between security and business functionality. \nSounds like a perfect solution, right? If only it were so! \n There are organizations that do not care about the \n “ true risks ” that their organizations face. Instead, they \nare more interested in being able to tell their sharehold-\ners or their regulating agencies that they’ve conducted \na penetration test and “ passed ” it. If the penetration test \nis structured so that only certain systems are tested, or \nif the test is conducted during a known timeframe, the \ntest results will be favorable to the organization, but the \ntest isn’t a true reflection of its network security posture. \nThis kind of “ boutique testing ” can lead to a false sense \nof security for the organization, its employees, and its \nstakeholders. \n 2. HOW DOES PENETRATION TESTING \nDIFFER FROM AN ACTUAL “ HACK? ” \n Earlier, I mentioned that penetration testing isn’t and \nnever can be “ just like a hacker would do it. ” How \ncome? Except in the case of “ directed ” sabotage or espi-\nonage, it’s not personal between your organization and \nattackers. They don’t care who you are, where you are, \nor what your organization does to earn its living. They \njust want to hack something. The easier it is to attack \nyour network, the more likely it is that you’ll be a tar-\nget. Ask your network administrator to look at the net-\nwork intrusion detection logs (or look at them yourself \nif you’re a network admin). See how many times in a \n24-hour period your organization’s network gets scanned \nfor potential vulnerabilities. \n Once an attacker has decided that you’re his or her \n(yes, there are female hackers — good ones, too!) next \ntarget, they may take weeks or months to perform the \nfirst step of an attack: reconnaissance. As a penetration \ntester or company providing penetration testing serv-\nices, I doubt that you’re going to get hired to spend six \nmonths doing reconnaissance and another couple of \nmonths conducting an attack. So, the first difference \nbetween a penetration test and a real attack is the length \nof time taken to conduct all the activities needed to pro-\nduce a successful outcome. As “ good guys, ” we don’t \nhave the luxury of time that the “ bad guys ” do. So we’re \nhandicapped to begin with. \n In some (not all) cases, once attackers find a suitable \nvulnerability to attack, they will. They don’t care that \nthe vulnerability resides on a mission-critical system, if \nthe vulnerability will crash the system, or that the sys-\ntem might become unusable in the middle of the busiest \ntime of day. If that vulnerability doesn’t “ pan out, ” they \nfind another and try again. 
They keep it up until they \nrun out of vulnerabilities that they can exploit, or they’re \ndiscovered, or they manage to successfully breach the \nnetwork or crash the system. Penetration test teams nor-\nmally don’t have this luxury, either. Usually the test team \nhas X amount of time to find a vulnerability and get in \nor the test is over and the network is declared “ safe and \nsecure. ” If the test team didn’t have enough time to test \nall the vulnerabilities — oh, well. The test is still over, they \nstill didn’t get in, and so our network must be safe! Few \nseem to think about the fact that a “ real ” attacker may, \njust by blind luck, choose one of the unable-to-be-tested-\nbecause-of-time-limitations vulnerabilities and be able to \nwaltz right into the network without being detected. \n Some systems are declared “ off limits ” to testing \nbecause they’re “ too important to have crash ” during a \n" }, { "page_number": 404, "text": "Chapter | 22 Penetration Testing\n371\n test. An organization may specify that testing can only \noccur during certain hours or on certain days because of \na real or perceived impact on business operations. This \nis the second difference between a penetration test and a \nreal attack: Hackers don’t play by any rules. They attack \nwhat they want when they want and how they want. Just \nto be clear: I’m not advocating denial-of-service test-\ning during the busiest time of an organization’s day, or \nunrestricted testing at any time. I’m just trying to make a \npoint that no system is too critical to test. From a hack-\ner’s perspective, there are no “ off-limits ” systems, just \nopportunities for attack. We’ll talk more about differ-\nences between real attacks and penetration tests when we \ntalk about the various types of testing in the next section. \n 3. TYPES OF PENETRATION TESTING \n Some sources classify penetration testing into two \ntypes — internal and external — and then talk about the \n “ variations ” of these types of tests based on the amount \nof information the test team has been given about the \norganization prior to starting the test. Other sources use a \nreverse-classification system, typing the penetration test \nbased on the amount of information available to the test \nteam and then the location from which the test is con-\nducted. I much prefer the latter method, since it removes \nany chance of misunderstanding about what testing is \ngoing to be conducted where. Warning: If you’re plan-\nning to take the CISSP or some of the other network \nsecurity certification examinations, stick with the “ old \nskool ” “ classification by location, then type ” definitions \nfor penetration testing types and variations. \n When a penetration test is conducted against Internet-\nfacing hosts, it is known as external testing . When con-\nducted against hosts inside the organization’s internal \nnetwork, it is known as internal testing . Obviously, a \ncomplete penetration test will encompass testing of both \nexternal and internal hosts. The “ variations ” of penetra-\ntion tests are normally classified based on how much \ninformation the test team has been given about the \norganization. The three most commonly used terms for \npenetration types are white-box, gray-box, and black-box \ntesting . Before we talk about these penetration testing \nvariations, you need to understand that we can conduct \nany of them (white, gray, or black) either externally or \ninternally. 
If we want a complete test, we need to test \nboth externally and internally. Got it? Good! Now we \ncan talk about what is involved with each type of testing. \n We’ll start with white-box testing. The “ official ” \ndefinition of white-box testing usually includes verbiage \nabout providing information so as to be able to assess the \nsecurity of a specific target or assess security against a \nspecific attack. There are several problems with this defi-\nnition in real life. The first is that it sounds as though you \nwould only conduct a white-box test if you were looking \nto verify the security of one specific host or looking to \nverify the security of your network against one specific \nattack. But it would be foolhardy to test for only one vul-\nnerability. Since the “ variations ” we’re talking about are \nsupposed to be centered on the amount of information \navailable to the test team, let’s look at a white-box test \nfrom an information availability perspective. \n Who in your organization knows the most about \nyour network? Probably the network administrator. Any \norganization that has recently terminated employment of \na network administrator or member of the IT Department \nunder less than favorable circumstances has a big prob-\nlem. There is the potential that the organization’s net-\nwork could be attacked by this former employee who \nhas extreme knowledge about the network. In a white-\nbox test, therefore, the test team should be given about \nthe same amount of information that a network admin-\nistrator would have. Probably the team won’t be given \npasswords, but they’ll be given network ranges, topolo-\ngies, and so on. \n Gray-box testing, by “ official ” definition, pro-\nvides “ some ” knowledge to the test team, about the \nsort of thing a normal, unprivileged user might have: \nhostnames, maybe a few IP addresses, the fact that the \norganization allows senior management to “ remote ” \ninto the network, and so on. Common, though not nec-\nessarily public, knowledge about the “ inner workings ” \nof the organization’s network is the level of informa-\ntion provided to the test team. Some sources claim that \nthis testing type (as well as the information disclosed in \na white-box test) “ puts the tester at an advantage ” over \nan attacker because the test team possesses information \nthat an attacker wouldn’t have. But that’s not necessarily \nthe case. Any organization that has terminated a “ normal \nuser ” has the potential for an attack based on that user’s \ninside knowledge of the network. \n Now let’s talk about everyone’s favorite: black-box \npenetration testing. Again, we’ll start with the common \ndefinition, which usually says something like “ provides \nthe test team with little or no information except for pos-\nsibly the company name. ” The test team is required to \nobtain all their attack information from public sources \nsuch as the Internet. There’s usually some sentence \nsomewhere in the description that says how a black-\nbox test is always much better than any other type of \ntest because it most closely mimics the way an attacker \n" }, { "page_number": 405, "text": "PART | II Managing Information Security\n372\n would conduct an attack. The definition might be right \n(and I might even be inclined to agree with it) if the \norganization has never terminated a network admin or \nany other employee with network access. 
I might also \nbe more in agreement with this train of thought if the \nmajority of attacks were conducted by unknown per-\nsons from the far reaches of the globe instead of former \nemployees or currently employed “ insiders ” ! \n Depending on what article or book you happen to \nbe reading at the time, you might also see references to \napplication penetration testing, Web penetration testing, \nshrink-wrap penetration testing, wireless penetration \ntesting, telephony penetration testing, Bluetooth penetra-\ntion testing . . . and the list goes on. I’ve seen every pos-\nsible device in a network listed as a separate penetration \ntest. The bottom line is that if it’s present in the network, \nan organization needs to discover what vulnerabilities \nexist on it and then test those vulnerabilities to discover \nwhat could happen if they’re successfully exploited. \n I guess since Morgan Kaufmann asked me to write \nthis chapter, I can give you my opinion: I don’t like \nblack-box penetration testing. I don’t want any network \nof mine tested “ just like a hacker would do it. ” I want \nmy network tested better than any hacker ever could, \nbecause I don’t want to end up on the front page of The \nNew York Times as the subject of a “ latest breach ” arti-\ncle. My client’s data are much too important to take that \nchance. \n The success of every penetration test rests on the \nexperience of the test team. If some hacker has more \nexperience than the test team I hire, I’m in trouble! So \nhow do I even the odds? Instead of hiring a team to do a \nblack box, in which they’re going to spend hours search-\ning for information and poking around, I give them the \nnecessary information to test the network thoroughly, \nright up front. By doing so, their time is actually spent \ntesting the network, not carrying out a high-tech scaven-\nger hunt. Any good penetration test team is still going to \ndo reconnaissance and tell me what information is avail-\nable about my organization from public sources anyway. \nNo, I’m not going to give up the administrative pass-\nword to my network. But I am going to tell them what \nmy IP ranges are, whether or not I have wireless, and \nwhether I allow remote access into the network, among \nother things. \n In addition to the types and variations of penetra-\ntion testing, we also need to talk about announced and \nunannounced testing. Which of these two methods \nwill be used depends on whether your intent is to test \nthe network itself or the network’s security staff. In an \nannounced test, the penetration testing team works in \n “ full cooperation ” with the IT staff and the IT staff has \n “ full knowledge ” about the test, such as what will be \ntested and when. In an unannounced test, only specific \nmembers of the tested organization (usually the higher \nlevels of management) are aware that the testing will \ntake place. Even they may only know a “ window ” of \ntime for testing, not the exact times or dates. \n If you follow the published guidelines, an unan-\nnounced test is used when testing an organization’s inci-\ndent response capability is called for. Announced tests \nare used when the organization simply wants to test net-\nwork devices. Is this really how it happens in the real \nworld? Sometimes it isn’t. \n Most organizations that conduct annual testing do \nso at about the same time every year, especially if it’s \na government organization. So there’s really no such \nthing as an “ unannounced ” test. 
Everyone knows that \nsometime between X and Y dates, they’re going to be \ntested. In some organizations this means that during that \ntimeframe there suddenly appears to be an increased \nawareness of network security. Machines get patched \nquicker. Logs are reviewed daily. Abnormal activities are \nreported immediately. After the testing window is over, \nhowever, it’s back to the same old routine until the next \ntesting window, next year. \n What about announced testing, you ask? Think about \nit: If you’re a network administrator and you know a test \nis coming, you’re going to make sure that everything \nis as good as you can make it. Once again, there’s that \nincreased emphasis on security — until the testing win-\ndow is over, that is. \n Let’s take a minute to recap what we’ve talked about \nso far. We’ve learned that a penetration test is used to \nexploit discovered vulnerabilities and to help determine \nwhat an attacker could do if they successfully exploited \na vulnerability. We learned that not all vulnerabilities \nare actually a concern, because there might not be a way \nto exploit them or what we’d get by exploiting them \nwouldn’t justify the time or effort we spent in doing so. \nWe learned that there are different types and variations \nof penetration tests, that they can be announced or unan-\nnounced, and that none of them are actually “ just like a \nhacker would do it. ” Probably the most important thing \nwe’ve learned so far is that if we really and truly want \nto protect our network from a real-life attack, we have to \noffset the biggest advantage that a hacker has: time. We \ndiscovered that we can do this by giving the testing team \nsufficient information to thoroughly test the network \ninstead of surfing the Internet on our dime. \n What’s next? Let’s talk about how a penetration test \nmight be conducted. \n" }, { "page_number": 406, "text": "Chapter | 22 Penetration Testing\n373\n 4. PHASES OF PENETRATION TESTING \n There are three phases in a penetration test, and they \nmimic the phases that an attacker would use to conduct \na real attack. These phases are the pre-attack phase, the \nattack phase, and the post-attack phase, as shown in \n Figure 22.1 . \n The activities that take place in each phase (as far \nas the penetration testing team is concerned) depend on \nhow the rules of engagement have specified that the pen-\netration test be conducted. To give you a more complete \npicture, we talk about these phases from the perspective \nof a hacker and from that of a penetration team conduct-\ning the test under “ black-box ” conditions. \n The Pre-Attack Phase \n The pre-attack phase (see Figure 22.2 ) consists of the \npenetration team’s or hacker’s attempts to investigate or \nexplore the potential target. This reconnaissance effort is \nnormally categorized into two types: active reconnais-\nsance and passive reconnaissance. \n Beginning with passive reconnaissance, which does \nnot “ touch ” the network and is therefore undetectable by \nthe target organization, the hacker or penetration tester \nwill gather as much information as possible about the \ntarget company. Once all available sources for passive \nreconnaissance have been exhausted, the test team or \nattacker may move into active reconnaissance. \n During active reconnaissance, the attacker may actu-\nally “ touch ” the network, thereby increasing the chance \nthat they will be detected or alert the target that someone \nis “ rattling the doorknobs. 
” Some of the information gath-\nered during reconnaissance can be used to produce a pro-\nvisional map of the network infrastructure for planning a \nmore coordinated attack strategy later. Ultimately it boils \ndown to information gathering in all its many forms. \nHackers will often spend more time on pre-attack or \nreconnaissance activities than on the actual attack itself. \n The Attack Phase \n This stage involves the actual compromise of the target. \nThe hacker or test team may exploit a logical or physi-\ncal vulnerability discovered during the pre-attack phase \nor use other methods such as a weak security policy to \ngain access to a system. The important point here is to \nunderstand that although there could be several possible \nvulnerabilities, the hacker needs only one to be success-\nful to compromise the network. \n By comparison, a penetration test team will be inter-\nested in finding and exploiting as many vulnerabilities \nas possible because neither the organization nor the \ntest team will know which vulnerability a hacker will \nchoose to exploit first (see Figure 22.3 ). Once inside, \nthe attacker may attempt to escalate his or her privileges, \ninstall one or more applications to sustain their access, \nfurther exploit the compromised system, and/or attempt \nto extend their control to other systems within the net-\nwork. When they’ve finished having their way with the \nsystem or network, they will attempt to eliminate all evi-\ndence of their presence in a process some call “ covering \ntheir tracks. ” \n The Post-Attack Phase \n The post-attack phase is unique to the penetration \ntest team. It revolves around returning any modified \nsystem(s) to the pretest state. With the exception of \n covering their tracks, a real attacker couldn’t care less \nabout returning a compromised system to its original \nstate. The longer the system remains compromised, the \nlonger they can legitimately claim credit for “ pwning ” \n(owning) the system. \nPre-Attack Phase\nPost-Attack Phase\nAttack Phase\n FIGURE 22.1 The three phases in a penetration test. \nPre-Attack\nPhase\nPassive\nReconnaissance\nActive\nReconnaissance\n FIGURE 22.2 The pre-attack phase. \n" }, { "page_number": 407, "text": "PART | II Managing Information Security\n374\n Obviously, in a real penetration test, the following \nlist would include reversal of each and every change \nmade to the network to restore it to its pre-attack state. \nSome of the activities that the test team may have to \naccomplish are shown here: \n ● Removal of any files, tools, exploits, or other test-\ncreated objects uploaded to the system during testing \n ● Removal or reversal of any changes to the registry \nmade during system testing \n ● Reversal of any access control list (ACL) changes to \nfile(s) or folder(s) or other system or user object(s) \n ● Restoration of the system, network devices, and net-\nwork infrastructure to the state the network was in \nprior to the beginning of the test \n The key element for the penetration test team to be \nable to restore the network or system to its pre-attack \nstate is documentation. The penetration testing team \ndocuments every step of every action taken during the \ntest, for two reasons. The obvious one is so that they can \nreverse their steps to “ cleanse ” the system or network. \nThe second reason is to ensure repeatability of the test. \nWhy is repeatability an issue? 
\n An important part of the penetration test team’s job \nis not only to find and exploit vulnerabilities but also \nto recommend appropriate mitigation strategies for dis-\ncovered vulnerabilities. This is especially true for those \nvulnerabilities that were successfully exploited. After \nthe tested organization implements the recommended \ncorrections, it should repeat the penetration test team’s \nactions to ensure that the vulnerability has indeed been \neliminated and that the applied mitigation has not had \n “ unintended consequences ” by creating a new vulner-\nability. The only way to do that is to recreate the original \ntest that found the vulnerability in the first place, make \nsure it’s gone, and then make sure there are no new vul-\nnerabilities as a result of fixing the original problem. \n As you might imagine, there are lots of possible ways \nfor Murphy to stick his nose into the process we just \ntalked about. How do penetration test teams and tested \norganizations try to keep Murphy at bay? Rules! \n 5. DEFINING WHAT’S EXPECTED \n Someone once said: “ You can’t win the game if you don’t \nknow the rules! ” That statement makes good sense for a \npenetration test team as well. Every penetration test must \nhave a clearly defined set of rules by which the penetra-\ntion test “ game ” is played. These rules are put into place \nto help protect both the tested organization and the pene-\ntration test team from errors of omission and commission \n(under normal circumstances). According to the National \nInstitute of Standards and Technology (NIST), the “ rule \nbook ” for penetration tests is often called the “ Rules of \nEngagement. ” Rules of Engagement define things like \nwhich IP addresses or hosts are and are not allowed to \nbe tested, which techniques are and are not allowed to be \nused, when testing is permitted or prohibited, points of \ncontact for both the test team and the tested organization, \nIP addresses of machines from which testing is conducted, \nand measures to prevent escalation of an incident response \nto law enforcement, just to name a few. \n There isn’t a standard format for the Rules of \nEngagement, so it is not the only document that can be \nused to control a penetration test. Based on the complex-\nity of the tested organization and the scope of the penetra-\ntion test, the penetration test team may also use detailed \ntest plan(s) for either or both logical and physical test-\ning. In addition to these documents, both the client and \nthe penetration test company conducting the operation \nwill require some type of formal contact that spells out \neither explicitly, or incorporates by reference, such items \nas how discovered sensitive material will be handled, an \nindemnification statement, nondisclosure statement, fees, \nAttack\nPhase\nPenetrate\nPerimeter\nAcquire\nTarget\nEscalate\nPrivileges\nExecute,\nImplant,\nRetract\n FIGURE 22.3 The attack phase. \n" }, { "page_number": 408, "text": "Chapter | 22 Penetration Testing\n375\n project schedule, reporting, and responsibilities. These \nare but a few of the many items that penetration testing \ncontrol documents cover, but they should be sufficient for \nyou to understand that all aspects of the penetration test \nneed to be described somewhere in a document. \n We now know what’s expected during the penetration \ntest. Armed with this information, how does the penetra-\ntion test team plan to deliver the goods? It starts with a \nmethodology. \n 6. 
THE NEED FOR A METHODOLOGY \n When you leave for vacation and you’ve never been \nto your destination before, you’re likely make a list of \nthings you need or want to do before, during, or after the \ntrip. You might also take a map along so you know how \nto get there and not get lost or sidetracked along the way. \nPenetration test teams also use a map of sorts. It’s called \ntheir methodology . \n A methodology is simply a way to ensure that a par-\nticular activity is conducted in a standard manner, with \ndocumented and repeatable results. It’s a planning tool \nto help ensure that all mandatory aspects of an activity \nare performed. \n Just as a map will show you various ways to get to \nyour destination, a good penetration testing methodology \ndoes not restrict the test team to a single way of com-\npromising the network. While on the road to your dream \nvacation, you might find your planned route closed or \nunder construction and you might have to make a detour. \nYou might want to do some sightseeing, or maybe visit \nlong-lost relatives along the way. \n Similarly, in a penetration test, your primary attack \nstrategy might not work, forcing you to find a way \naround a particular network device, firewall, or intrusion \nprevention system. While exploiting one vulnerability, \nyou may discover another one that leads you to a differ-\nent host or a different subnet. A well-written methodol-\nogy allows the test team the leeway necessary to explore \nthese “ targets of opportunity ” while still ultimately guid-\ning them to the stated goals of the test. \n Most penetration test companies will have developed \na standard methodology that covers all aspects of a pen-\netration test. This baseline methodology document is \nthe starting point for planning a particular test. Once the \ncontrol documentation has been finalized, the penetration \ntest team will know exactly what they can and cannot \ntest. They will then modify that baseline methodology \nbased on the scope statement in the Rules of Engagement \nfor the penetration test that they are going to conduct. \n Different clients are subject to different regulatory \nrequirements such as HIPAA, Sarbanes-Oxley, Gramm-\nLeach-Bliley, or others, so the penetration test team’s \nmethodology must also be flexible enough to cover these \nand other government or private industry regulations. \n In a minute, we’ll talk about the sources of penetration \ntesting methodologies, but for now, just understand that a \nmethodology is not a “ nice to have, ” it’s a “ must have. ” \nWithout a methodology to be used as the basis to plan and \nexecute a penetration test, there is no reliability. The team \nwill be lost in the network, never knowing if they’ve ful-\nfilled the requirements of the test or not until they’re writ-\ning the report. By then, it’s too late — for them and for the \norganization. \n 7. PENETRATION TESTING \nMETHODOLOGIES \n Back to our map example for a minute. Unless you’re \na cartographer, you’re probably not going to make \nyour own map to get to your dream vacation destina-\ntion. You’ll rely on a map that someone else has drawn, \ntested, and published. The same holds true for penetra-\ntion testing methodologies. Before we talk about how a \nmethodology, any methodology, is used in a penetration \ntest, let’s discuss penetration testing methodologies in \ngeneral. 
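Before we do, one small practical aside. A methodology and its control documents only help if the team actually follows them in the middle of the night, so many teams encode the permitted address ranges, exclusions, and testing window from the Rules of Engagement as data that every tool consults before it touches a host. The ranges and dates below are invented; this is a sketch of the habit, not a prescribed format.

from datetime import datetime
from ipaddress import ip_address, ip_network

# Hypothetical scope taken from the Rules of Engagement.
IN_SCOPE = [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/25")]
EXCLUDED = [ip_address("203.0.113.15")]          # e.g., the production billing server
WINDOW   = (datetime(2009, 3, 2, 22, 0), datetime(2009, 3, 6, 6, 0))

def may_test(target: str, when: datetime) -> bool:
    addr = ip_address(target)
    if addr in EXCLUDED:
        return False
    if not any(addr in net for net in IN_SCOPE):
        return False
    return WINDOW[0] <= when <= WINDOW[1]

print(may_test("203.0.113.40", datetime(2009, 3, 3, 1, 30)))   # True: in scope and in the window
print(may_test("203.0.113.15", datetime(2009, 3, 3, 1, 30)))   # False: explicitly excluded
print(may_test("192.0.2.9",    datetime(2009, 3, 3, 1, 30)))   # False: not in any permitted range

None of this replaces the methodology itself; it simply keeps the work inside the boundaries that the methodology and the Rules of Engagement have already drawn.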
There are probably as many different method-\nologies as there are companies conducting penetration \ntests, but all of them fit into one of two broad categories: \nopen source or proprietary. \n Open-source methodologies are just that: available \nfor use by anyone. Probably the best-known open-source \nmethodology, and de facto standard, is the Open Source \nSecurity Testing Methodology Manual (OSSTMM), the \nbrainchild of Pete Herzog. You can get the latest copy \nof this document at www.isecom.org . Another valuable \nopen-source methodology is the Open Web Application \nSecurity Project (OWASP), geared to securing Web \napplications, available at www.owasp.org . \n Proprietary methodologies have been developed by \nparticular entities offering network security services \nor certifications. The specific details of the processes \nthat produce the output of the methodology are usu-\nally kept private. Companies wanting to use these pro-\nprietary methodologies must usually undergo specific \ntraining in their use and abide by quality standards set \nby the methodology proponent. Some examples of pro-\nprietary methodologies include IBM, ISS, Foundstone, \nand our own EC Council Licensed Penetrator Tester \nmethodology. \n" }, { "page_number": 409, "text": "PART | II Managing Information Security\n376\n 8. METHODOLOGY IN ACTION \n A comprehensive penetration test is a systematic analy-\nsis of all security controls in place at the tested organiza-\ntion. The penetration test team will look at not only the \nlogical and physical vulnerabilities that are present but \nalso at the tested organization’s policies and procedures \nto assess whether or not the controls in place adequately \nprotect the organization’s information infrastructure. \n Let’s examine the use of a penetration testing meth-\nodology in more detail to demonstrate how it’s used to \nconduct a penetration test. Of course, as the President of \nthe EC-Council, I’m going to take the liberty of using \nour LPT methodology as the example. \n EC-Council LPT Methodology \n Figure 22.4 is a block representation of some of the \nmajor areas of the LPT methodology as taught in the \nEC-Council’s Licensed Penetration Tester certification \ncourse. The first two rows of the diagram (except for the \nWireless Network Penetration Testing block) represent a \nfairly normal sequence of events in the conduct of a pene-\ntration test. The test team will normally start by gathering \ninformation, then proceed with vulnerability discovery \nand analysis, followed by penetration testing from the net-\nwork perimeter, and graduating to the internal network. \n After that, beginning with wireless testing, what the \ntest team actually does will depend on what applica-\ntions or services are present in the network and what is \nallowed to be tested under the Rules of Engagement. I’ve \nchosen not to show every specific step in the process as \npart of the diagram. After all, if I told you everything, \nthere wouldn’t be any reason for you to get certified as a \nLicensed Penetration Tester, would there? \n The methodology (and course) assumes that the pen-\netration test team has been given authorization to con-\nduct a complete test of the target network, including \nusing denial-of-service tactics so that we can acquaint \nour LPT candidates with the breadth of testing they may \nbe called on to perform. 
In real life, a test team may sel-\ndom actually perform DoS testing, but members of the \npenetration test team must still be proficient in conduct-\ning this type of test, to make recommendations in their \nreports as to the possible consequences of an attacker \nconducting a DoS attack on the company’s network \ninfrastructure. Here I give you a quick overview of each \nof the components. \n Information Gathering \n The main purpose of information gathering is to understand \nmore about the target company. As we’ve already talked \nabout, there are a number of ways to gather information \nInformation\nGathering\nWireless\nNetwork\nPenetration\nTesting\nDenial of Service\nPenetration\nTesting\nVoIP\nPenetration\nTesting\nVPN\nPenetration\nTesting\nEND\nSTART\nDatabase\nPenetration\nTesting\nPhysical\nSecurity\nPenetration\nTesting\nApplication\nPenetration\nTesting\nPassword\nCracking\nPenetration\nTesting\nSocial\nEngineering\nPenetration\nTesting\nStolen Laptop/\nPDA/Cell Phone\nPenetration\nTesting\nIDS\nPenetration\nTesting\nFirewall\nPenetration\nTesting\nRouter\nPenetration\nTesting\nVulnerability\nAnalysis\nExternal\nPenetration\nTesting\nInternal\nNetwork\nPenetration\nTesting\n FIGURE 22.4 Block representation of some of the major areas of the LPT methodology. \n" }, { "page_number": 410, "text": "Chapter | 22 Penetration Testing\n377\n about the company from public domain sources such as the \nInternet, newspapers, and third-party information sources. \n Vulnerability Analysis \n Before you can attack, you have to find the weak points. A \nvulnerability analysis is the process of identifying logical \nweaknesses in computers and networks as well as physi-\ncal weaknesses and weaknesses in policies, procedures, \nand practices relating to the network and the organization. \n External Penetration Testing \n External testing is normally conducted before internal \ntesting. It exploits discovered vulnerabilities that are \naccessible from the Internet to help determine the degree \nof information exposure or network control that could be \nachieved by the successful exploitation of a particular \nvulnerability from outside the network. \n Internal Network Penetration Testing \n Internal testing is normally conducted after external test-\ning. It exploits discovered vulnerabilities that are acces-\nsible from inside the organization to help determine the \ndegree of information exposure or network control that \ncould be achieved by the successful exploitation of a \nparticular vulnerability from inside the network. \n Router Penetration Testing \n Depending on where they are located in the network \ninfrastructure, routers may forward data to points inside \nor outside the target organization’s network. Take down \na router; take down all hosts connected to that router. \nBecause of their importance, routers that connect the tar-\nget organization to the Internet may be tested twice: once \nfrom the Internet and again from inside the network. \n Firewall Penetration Testing \n Firewall(s) are another critical network infrastructure \ncomponent that may be tested multiple times, depending \non where they reside in the infrastructure. Firewalls that \nare exposed to the Internet are a primary line of defense \nfor the tested organization and so will usually be tested \nfrom the Internet and from within the DMZ for both \ningress and egress vulnerabilities and proper rule sets. \nInternal firewalls are often used to segregate portions of \nthe internal network from each other. 
Those firewalls are \nalso tested from both sides and for ingress and egress fil-\ntering to ensure that only applicable traffic can be passed. \n IDS Penetration Testing \n As networks have grown more complex and the methods \nto attack them have multiplied, more and more organi-\nzations have come to rely on intrusion detection (and \nprevention) systems (IDS/IPS) to give them warning or \nprevent an intrusion from occurring. The test team will \nbe extremely interested in testing these devices for any \nvulnerabilities that will allow an attacker to circumvent \nsetting of the IPS/IDS alarms. \n Wireless Network Penetration Testing \n If the target company uses wireless (and who doesn’t \nthese days), the test team will focus on the availability \nof “ outside ” wireless networks that can be accessed by \nemployees of the target company (effectively circum-\nventing the company’s firewalls), the “ reach ” of the com-\npany’s own wireless signal outside the physical confines \nof the company’s buildings, and the type and strength of \nencryption employed by the wireless network. \n Denial-of-Service Penetration Testing \n If the test team is lucky enough to land a penetration \ntest that includes DoS testing, they will focus on crash-\ning the company’s Web sites and flooding the sites or \nthe internal network with enough traffic to bring normal \nbusiness processes to a standstill or a crawl. They may \nalso attempt to cause a DoS by locking out user accounts \ninstead of trying to crack the passwords. \n Password-Cracking Penetration Testing \n Need we say more? \n Social Engineering Penetration Testing \n The test team may use both computer- and human-based \ntechniques to try to obtain not only sensitive and/or non-\npublic information directly from employees but also to \ngain unescorted access to areas of the company that are \nnormally off-limits to the public. Once alone in an off-\nlimits area, the social engineer may then try to obtain \nadditional sensitive or nonpublic information about the \ncompany, its data, or its customers. \n Stolen Laptop, PDA, and Cell Phone \nPenetration Testing \n Some organizations take great pains to secure the equip-\nment that is located within the physical confines of their \nbuildings but fail to have adequate policies and procedures \n" }, { "page_number": 411, "text": "PART | II Managing Information Security\n378\n in place to maintain that security when mobile equip-\nment leaves the premises. The test team attempts to tem-\nporarily “ liberate ” mobile equipment and then conducts \ntesting to gain access to the data stored on those devices. \nThey will most often attempt to target either or both \nmembers of the IT Department and the senior members \nof an organization in the hopes that their mobile devices \nwill contain the most useful data. \n Application Penetration Testing \n The test team will perform meticulous testing of an \napplication to check for code-related or “ back-end ” vul-\nnerabilities that might allow access to the application \nitself, the underlying operating system, or the data that \nthe application can access. \n Physical Security Penetration Testing \n The test team may attempt to gain access to the organi-\nzational facilities before, during, or after business hours \nusing techniques meant to defeat physical access con-\ntrol systems or alarms. 
They may also conduct an overt \n “ walk-thorough ” accompanied by a member of the tested \norganization to provide the tested company with an \n “ objective perspective ” of the physical security controls \nin place. Either as a part of physical security testing or as \na part of social engineering, the team may rifle through \nthe organization’s refuse to discover what discarded \ninformation could be used by an attacker to compromise \nthe organization and to observe employee reaction to an \nunknown individual going through the trash. \n Database Penetration Testing \n The test team may attempt to directly access data con-\ntained in the database using account password-cracking \ntechniques or indirectly access data by manipulating \ntriggers and stored procedures that are executed by the \ndatabase engine. \n Voice-Over-IP Penetration Testing \n The test team may attempt to gain access to the VoIP \nnetwork for the purpose of recording conversations or to \nperform a DoS to the company’s voice communications \nnetwork. In some cases, if the organization has not fol-\nlowed the established “ best practices ” for VoIP, the team \nmay attempt to use the VoIP network as a jumping-off \npoint to conduct further compromise of the organiza-\ntion’s network backbone. \n VPN Penetration Testing \n A number of companies allow at least some of their \nemployees to work remotely, either from home or while \nthey are “ on the road. ” In either case, a VPN represents a \ntrusted connection to the internal network. The test team \nwill attempt to gain access to the VPN by either com-\npromising the remote endpoint or gaining access to the \nVPN tunnel so that they have a “ blessed ” connection to \nthe internal company network. \n In addition to testing the applicable items in the \nblocks on the previous diagram, the penetration test \nteam must also be familiar with and able to test for com-\npliance with the regulatory requirements to which the \ntested organization is subject. Each standard has specific \nareas that must be tested, the process and procedures of \nwhich may not be part of a standard penetration test. \n I’m sure that as you read the previous paragraphs, \nthere was a nagging thought in the back of your mind: \nWhat are the risks involved? Let’s talk about the risks. \n 9. PENETRATION TESTING RISKS \n The difference between a real attack and a penetration \ntest is the penetration tester’s intent, authority to conduct \nthe test, and lack of malice. Because penetration testers \nmay use the same tools and procedures as a real attacker, \nit should be obvious that penetration testing can have \nserious repercussions if it’s not performed correctly. \n Even if your target company ceased all operations for the \ntime the penetration test was being conducted, there is still a \ndanger of data loss, corruption, or system crashes that might \nrequire a reinstall from “ bare metal. ” Few, if any, compa-\nnies can afford to stop functioning while a penetration test \nis being performed. Therefore it is incumbent on both the \ntarget organization and the penetration test team to do eve-\nrything in their power to prevent an interruption of normal \nbusiness processes during penetration testing operations. \n Target companies should be urged to back up all \ntheir critical data before testing begins. They should \nalways have IT personnel present to immediately begin \nrestoration in the unfortunate event that a system crashes \nor otherwise becomes unavailable. 
The test team must \nbe prepared to lend all possible assistance to the target \ncompany in helping to restore any system that is affected \nby penetration testing activities. \n 10. LIABILITY ISSUES \n Although we just discussed some of the risks involved \nwith conducting a penetration test, the issue of liability \n" }, { "page_number": 412, "text": "Chapter | 22 Penetration Testing\n379\n deserves its own special section. A botched penetration test \ncan mean serious liability for the penetration test company \nthat conducted the testing. The penetration test company \nshould ensure that the documentation for the penetration \ntest includes a liability waiver. \n The waiver must be signed by an authorized repre-\nsentative of the target company and state that the pen-\netration testing firm cannot be held responsible for the \nconsequences of items such as: \n ● Damage to systems \n ● Unintentional denial-of-service conditions \n ● Data corruption \n ● System crashes or unavailability \n ● Loss of business income \n 11. LEGAL CONSEQUENCES \n The legal consequences of a penetration test gone wrong \ncan be devastating to both the target company and the \npenetration testers performing the test. The company may \nbecome the target of lawsuits by customers. The penetra-\ntion testers may become the target of lawsuits by the target \ncompany. The only winners in this situation are the lawyers. \nIt is imperative that proper written permission is obtained \nfrom the target company before any testing is conducted. \n Legal remedies are normally contained in a penetration \ntesting contract that is drawn up in addition to the testing \ncontrol documentation. Both the penetration test com-\npany and the target company should seek legal counsel \nto review the agreement and to protect their interests. The \nauthorization to perform testing must come from a senior \nmember of the test company, and that senior member must \nbe someone who has the authority to authorize such test-\ning, not just the network administrator or LAN manager. \n Authorized representatives of both the penetration test \ncompany and the target company must sign the penetration \ntesting contract to indicate that they agree with its contents \nand the contents of all documentation that may be included \nor included by reference, such as the Rules of Engagement, \nthe test plan, and other test control documentation. \n 12. “ GET OUT OF JAIL FREE ” CARD \n We just talked about how the target company and the pen-\netration test company protect themselves. What about that \nindividual team member crawling around in the dump-\nster at 2:00 a.m., or the unfortunate team member who’s \nmanaged to get into the company president’s office and \nlog onto his or her computer? What protection do those \nindividuals have when that 600-pound gorilla of a security \nguard clamps his hand on their shoulder and asks what \nthey’re doing? \n A “ Get Out of Jail Free ” card might just work won-\nders in these and other cases. Though not really a card, \nit’s usually requested by the penetration test team as \n “ extra insurance ” during testing. It is presented if they’re \ndetained or apprehended while in the performance of \ntheir duties, as proof that their actions are sanctioned by \nthe officers of the company. The card may actually be a \nletter on the tested company’s letterhead and signed by \nthe senior officer authorizing the test. 
It states the spe-\ncific tasks that can be performed under the protection of \nthe letter and specifically names the bearer. \n It contains language that the bearer is conducting activ-\nities and testing under the auspices of a contract and that \nno violation of policy or crime is or has been committed. \nIt includes a 24-hour contact number to verify the validity \nof the letter. As you can imagine, these “ Get Out of Jail \nFree ” cards are very sensitive and are usually distributed \nto team members immediately before testing begins, col-\nlected, and returned to the target company immediately \nafter the end of any testing requiring their use. \n There is a tremendous amount of work involved in \nconducting a thorough and comprehensive penetration \ntest. What we’ve just discussed is but a 40,000-foot flyo-\nver of what actually happens during the process. But \nwe’re not done yet! The one area that we haven’t dis-\ncussed is the personnel performing these tests. \n 13. PENETRATION TESTING \nCONSULTANTS \n The quality of the penetration test performed for a client \nis directly dependent on the quality of the consultants \nperforming the work, singularly and in the aggregate. \nThere are hundreds if not thousands of self-proclaimed \n “ security services providers ” out there, both companies \nand individuals. If I’m in need of a penetration test, how \ncan I possibly know whether the firm or individual I’m \ngoing to select is really and truly qualified to test my \nnetwork comprehensively, accurately, and safely? What \nif I end up hiring a consultancy that employs hackers? \nWhat if the consultant(s) aren’t hackers, but they just \ndon’t know what they’re doing? \n In these, the last pages of this chapter, I want to talk \nabout the people who perform penetration testing serv-\nices. First, you get another dose of my personal opinion. \nThen we’ll talk about security services providers, those \nwho provide penetration testing services. We’ll talk \nabout some of the questions that you might want to ask \n" }, { "page_number": 413, "text": "PART | II Managing Information Security\n380\n about their operations and their employees. Here’s a hint \nfor you: If the company is evasive in answering the ques-\ntions or outright refuses to answer them — run! \n What determines whether or not a penetration tester \nis “ experienced ” ? There are few benchmarks to test the \nknowledge of a penetration tester. You can’t ask her for \na score: “ Yes, I’ve performed 27 penetration tests, been \nsuccessful in compromising 23 of those networks, and \nthe overall degree of difficulty of each one of those tests \nwas 7 on a scale of 1 to 10. ” \n There really isn’t a Better Business Bureau for pen-\netration testers. You can’t go to a Web site and see that \nXSecurity has had three complaints filed against them \nfor crashing networks they’ve been testing or that “ John \nSmith ” has tested nine networks that were hacked within \nthe following 24 hours. (Whoa! What an idea . . . ! \nNah, wouldn’t work.) Few companies would want to \nadmit that they chose the wrong person or company to \ntest their networks, and that kind of information would \nsurely be used by attackers in the “ intelligence-gathering \nphase ” as a source of information for future attacks on \nany company that chose to report. \n Well, if we can’t do that, what can we do? \n We pretty much have to rely on “ word of mouth ” \nthat such-and-such a company does a good job. 
We have \nto pretty much rely on the fact that the tester has been \ncertified to a basic standard of knowledge by a reputa-\nble certification body. We pretty much have to rely on \nthe skill set of the individual penetration tester and that \n “ the whole is more than the sum of its parts ” thing called \nsynergy, which is created when a group of penetration \ntesters works together as a team. \n I’m not going to insult you by telling you how to go \nask for recommendations. I will tell you that there are \nsecurity certification providers who are better than oth-\ners and who hold their candidates to a higher standard \nof knowledge and professionalism than others. Since it’s \nhard to measure the “ synergy ” angle, let’s take a closer \nlook at what skill sets should be present on the penetra-\ntion team that you hire. \n 14. REQUIRED SKILL SETS \n Your penetration test “ dream team ” should be well \nversed in areas such as these: \n ● Networking concepts \n ● Hardware devices such as routers, firewalls, and \nIDS/IPS \n ● Hacking techniques (ethical hacking, of course!) \n ● Databases \n ● Open-source technologies \n ● Operating systems \n ● Wireless protocols \n ● Applications \n ● Protocols \n ● Many others \n That’s a rather long list to demonstrate a simple con-\ncept: Your penetration team should be able to show proof \nthat they have knowledge about all the hardware, soft-\nware, services, and protocols in use within your network. \n Okay. They have technical skills. Is that enough? \n 15. ACCOMPLISHMENTS \n Are the members of the test team “ bookworms ” or have they \nhad practical experience in their areas of expertise? Have \nthey contributed to the security community? The list that fol-\nlows should give you some indication of questions you can \nask to determine the “ experiences ” of your test team: \n ● Have they conducted research and development in \nthe security arena? \n ● Have they published research papers or articles in \ntechnical journals? \n ● Have they presented at seminars, either locally or \ninternationally? \n ● What certifications do they hold? Where are those \ncertifications from? \n ● Do they maintain membership/affiliation/\naccreditation in organizations such as the EC-\nCouncil, ISC2, ISACA, and others? \n ● Have they written or contributed to security-related \nbooks and articles? \n How about some simple questions to ask of the com-\npany that will perform your test? \n 16. HIRING A PENETRATION TESTER \n Here are some of the questions you might consider ask-\ning prospective security service providers or things to \nthink about when hiring a test team: \n ● Is providing security services the primary mission of \nthe security service provider, or is this just an “ addi-\ntional revenue source ” for another company? \n ● Does the company offer a comprehensive suite of \nservices tailored to your specific requirements, or do \nthey just offer service packages? \n ● Does the supplier have a methodology? Does \ntheir methodology follow a recognized authority \nin security such as OSSTMM, OWASP, or LPT? \n" }, { "page_number": 414, "text": "Chapter | 22 Penetration Testing\n381\n ● Does the supplier hire former hackers? Do they \nperform background checks on their employees? \n ● Can they distinguish (and articulate) between \ninfrastructure and application testing? \n ● How many consultants does the supplier have who \nperform penetration testing? How long have those \nconsultants been practicing? \n ● What will the final report look like? 
Does it meet \nyour needs? Is the supplier willing to modify the \nformat to suit your needs (within reason)? \n ● Is the report just a list of what’s wrong, or does it \ncontain mitigation strategies that will be tailored to \nyour particular situation? \n ● Is the supplier a recognized contributor to the \nsecurity community? \n ● Do they have references available to attest to the \nquality of work already performed? \n That ought to get a company started down the road to \nhiring a good penetration test team. Now let’s talk about \nwhy a company should hire you, either as an individual \nor as a member of a penetration testing team. \n 17. WHY SHOULD A COMPANY HIRE YOU? \n When a prospective client needs a penetration test, they \nmay publish a request for proposal (RFP) or just make \nan announcement that they are accepting solicitations for \nsecurity services. When all the bids come in, they will \nonly consider the most qualified candidates. How to you \nmake sure that your or your company’s bid doesn’t end \nup in the “ circular file? ” \n Qualifications \n Highlight your or your company’s qualifications to per-\nform the desired services. Don’t rely on “ alphabet soup ” \nafter your name or your team’s names to highlight quali-\nfications. Take the time to explain the meaning of CISSP, \nLPT, or MCSE. \n Work Experience \n The company will provide some information about itself. \nAlign your response by highlighting work (without nam-\ning specific clients!) in the same or related fields and of \nrelated size. \n Cutting-Edge Technical Skills \n It doesn’t bode well when you list one of your primary \nskills as “ MCSE in Windows NT 3.51 and NT 4.0. ” \nMaintain your technical proficiency and showcase your \nmost recent and most highly regarded certification \naccomplishments, such as CCNA, CEH, CHFI, CCNP, \nMCSE, LPT, CISA, and CISM. \n Communication Skills \n Whether your communicate with the prospective client \nthrough written or verbal methods, made sure you are \nwell written or well spoken. Spell-check your written \ndocuments, and then look them over again. There are \nsome errors that spell checkers just won’t find. “ Dude, \nI’m gonna hack yer network! ” isn’t going to cut it in a \nsecond-round presentation. \n Attitude \n Let’s face it: Most of us in the security arena have a bit \nof an ego and sometimes come off as a bit arrogant. \nThough that may hold some sway with your peers, it \nwon’t help your case with a prospective client. Polite \nand professional at all times is the rule. \n Team Skills \n There is no synergy if there is no team. You must be a \ngood team player who can deal with subordinates, supe-\nriors, and clients professionally, even in the most critical \nmoments of an engagement. \n Okay. You’ve done all this. What else do you need to \nknow to get hired? What about the concerns of the com-\npany that will hire you? \n Company Concerns \n You can have a sterling record and the best qualifica-\ntions around and still not get hired. Here are some of \nthe “ influencing factors ” companies may consider when \nlooking to hire a penetration testing team: \n ● Companies usually want to work in collaboration \nwith reputable and well-established firms such as \nFoundstone, ISS, EC-Council, and others. \n ● Companies may want to verify the tools that will be run \nduring a test and the equipment on which the tools run. \n ● Companies will want references for the individuals \non the team as well as recommendations about the \ncompany itself. 
\n ● Companies demand security-related certifications \nsuch as CISSP, CEH, and TICSA to confirm the \nauthenticity of the testing company. \n" }, { "page_number": 415, "text": "PART | II Managing Information Security\n382\n ● Companies usually have an aversion to hiring those \nwho are known or suspected hackers. \n ● Companies may require security clearances. \n ● Companies may inquire about how and where their \ndata will be stored while in the testing company’s \npossession. \n Okay, you get the idea, right? \n 18. ALL’S WELL THAT ENDS WELL \n Anybody got an aspirin? I’m sure you probably need one \nafter reading all the information I’ve tried to throw at you \nin this chapter. I’ve only barely scratched the surface of \nwhat a penetration test is, what it’s meant to accomplish, \nhow it’s done, and how you report the findings to the cli-\nent, but I’m out of space to tell you more. \n Let me take these last couple of inches on the page to \nsummarize. If you’ve got a network, the question is not \n “ if ” you’ll be attacked, but “ when. ” If you’re going to \nmake your network as safe as possible, you need to find \nthe weaknesses and then test them to see which ones are \nreally, truly the things that need to be fixed “ yesterday. ” \nIf you only conduct a penetration test once a year, a real \nhacker has 364 “ unbirthdays ” in which to attack and \ncompromise your network. Don’t give them the oppor-\ntunity. Get someone on your staff certified as an ethical \nhacker and a Licensed Penetration Tester so that they \ncan perform ethical hacks and penetrations on a regular \nbasis to help protect your network. \n" }, { "page_number": 416, "text": "383\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n What Is Vulnerability Assessment? \n Almantas Kakareka \n Terremark Worldwide, Inc. \n Chapter 23 \n In computer security, the term vulnerability is applied to a \nweakness in a system that allows an attacker to violate the \nintegrity of that system. Vulnerabilities may result from \nweak passwords, software bugs, a computer virus or other \nmalware (malicious software), a script code injection, or \nan SQL injection, just to name a few. \n A security risk is classified as vulnerability if it is \nrecognized as a possible means of attack. A security risk \nwith one or more known instances of a working or fully \nimplemented attack is classified as an exploit . Constructs \nin programming languages that are difficult to use prop-\nerly can be large sources of vulnerabilities. \n Vulnerabilities always existed, but when the Internet \nwas in its early stage they were not as often used and \nexploited. The media did not report news of hackers who \nwere getting put in jail for hacking into servers and steal-\ning vital information. \n Vulnerability assessment may be performed on many \nobjects, not only computer systems/networks. For exam-\nple, a physical building can be assessed so it will be clear \nwhat parts of the building have what kind of flaw. If the \nattacker can bypass the security guard at the front door \nand get into the building via a back door, it is definitely a \nvulnerability. Actually, going through the back door and \nusing that vulnerability is called an exploit . The physi-\ncal security is one of the most important aspects to be \ntaken into account. If the attackers have physical access \nto the server, the server is not yours anymore! 
Just stat-\ning, “ Your system or network is vulnerable ” doesn’t pro-\nvide any useful information. Vulnerability assessment \nwithout a comprehensive report is pretty much useless. \nA vulnerability assessment report should include: \n ● Identification of vulnerabilities \n ● Quantity of vulnerabilities \n It is enough to find one critical vulnerability, which \nmeans the whole network is at risk, as shown in Figure 23.1 . \n Vulnerabilities should be sorted by severity and then \nby servers/services. Critical vulnerabilities should be at \nthe top of the report and should be listed in descending \norder, that is, critical, then high, medium, and low. 1 \n 1. REPORTING \n Reporting capability is of growing importance to admin-\nistrators in a documentation-oriented business climate \nwhere you must not only be able to do your job, you must \nalso provide written proof of how you’ve done it. In fact, \nrespondents to Sunbelt’s survey 2 indicate that flexible and \nprioritizing reporting is their number-one favorite feature. \n A scan might return hundreds or thousands of results, \nbut the data is useless unless it is organized in a way that \ncan be understood. That means that ideally you will be \nable to sort and cross-reference the data, export it to other \nprograms and formats (such as CSV, HTML, XML, MHT, \nMDB, Excel, Word, and/or Lotus), view it in different \nways, and easily compare it to the results of earlier scans. \n Comprehensive, flexible, and customizable reporting \nis used within your department to provide a guideline of \ntechnical steps you need to take, but that’s not all. Good \nreports also give you the ammunition you need to justify to \nmanagement the costs of implementing security measures. \n 2. THE “ IT WON’T HAPPEN TO US ” \nFACTOR \n Practical matters aside, CEOs, CIOs, and administrators \nare all human beings and thus subject to normal human \ntendencies — including the tendency to assume that bad \nthings happen to “ other people, ” not to us. Organizational \ndecision makers assume that their companies aren’t \nlikely targets for hackers ( “ Why would an attacker want \n 1 http://en.wikipedia.org/wiki/Vulnerability_assessment . \n 2 www.sunbeltsoftware.com/documents/snsi_whitepaper.pdf . \n" }, { "page_number": 417, "text": "PART | II Managing Information Security\n384\nto break into the network of Widgets, Inc., when they \ncould go after the Department of Defense or Microsoft \nor someone else who’s much more interesting? ” ). 2 \n 3. WHY VULNERABILITY ASSESSMENT? \n Organizations have a tremendous opportunity to use infor-\nmation technologies to increase their productivity. Securing \ninformation and communications systems will be a neces-\nsary factor in taking advantage of all this increased con-\nnectivity, speed, and information. However, no security \nmeasure will guarantee a risk-free environment in which to \noperate. In fact, many organizations need to provide easier \nuser access to portions of their information systems, thereby \nincreasing potential exposure. Administrative error, for \nexample, is a primary cause of vulnerabilities that can be \nexploited by a novice hacker, whether an outsider or insider \nin the organization. Routine use of vulnerability assessment \ntools along with immediate response to identified problems \nwill alleviate this risk. It follows, therefore, that routine vul-\nnerability assessment should be a standard element of every \norganization’s security policy. 
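 Routine assessment pays off only if the findings come back in a usable order. As a small illustration of the severity-first reporting described earlier, the following Python sketch sorts a handful of findings and exports them to CSV; the findings, field names, and severity labels are placeholders invented for this example, not the output format of any particular scanner.

import csv

# Placeholder findings; in practice this list would be fed from your scanner's export.
findings = [
    {"host": "10.0.0.12", "service": "smtp", "issue": "Open mail relay", "severity": "critical"},
    {"host": "10.0.0.7",  "service": "http", "issue": "Outdated TLS configuration", "severity": "medium"},
    {"host": "10.0.0.3",  "service": "ssh",  "issue": "Weak ciphers enabled", "severity": "low"},
    {"host": "10.0.0.9",  "service": "smb",  "issue": "Missing security patch", "severity": "high"},
]

# Critical first, then high, medium, low: the ordering recommended above.
RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}
findings.sort(key=lambda f: (RANK[f["severity"]], f["host"]))

# Export to CSV so the results can be sorted, filtered, and compared across scans.
with open("vuln_report.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["severity", "host", "service", "issue"])
    writer.writeheader()
    writer.writerows(findings)

 A real report would also carry the mitigation guidance and cross-references discussed earlier, but the severity ordering is the part most worth automating first.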
Vulnerability assessment is \nused to find unknown problems in the systems. The main \npurpose of vulnerability assessment is to find out what sys-\ntems have flaws and take action to mitigate the risk. Some \nindustry standards such as DSS PCI require organizations \nto perform vulnerability assessments on their networks. \nThe sidebar “ DSS PCI Compliance ” gives a brief look.\n DSS PCI Compliance \n PCI DSS stands for Payment Card Industry Data Security \nStandard. This standard was developed by leading credit-\ncard companies to help merchants be secure and follow \ncommon security criteria to protect sensitive customers ’ \ncredit-card data. Before that every credit card company \nhad a similar standard to protect customer data on the \nmerchant side. Any company that does transactions \nvia credit cards needs to be PCI compliant. One of the \nrequirements to be PCI compliant is to regularly test secu-\nrity systems and processes. This can be achieved via vul-\nnerability assessment. Small companies that don’t process \na lot of transactions are allowed to do self-assessment via \nquestionnaire. Big companies that process a lot of trans-\nactions are required to be audited by third parties. 3 \n 4. PENETRATION TESTING VERSUS \nVULNERABILITY ASSESSMENT \n There seems to be a certain amount of confusion within \nthe security industry about the difference between \npenetration testing and vulnerability assessment. They \nare often classified as the same thing but in fact they are \nnot. Penetration testing sounds a lot more exciting, but \nmost people actually want a vulnerability assessment \nand not a penetration test, so many projects are labeled \nas penetration tests when in fact they are 100% vulner-\nability assessments. \n A penetration test mainly consists of a vulnerability \nassessment, but it goes one step further. A penetration \ntest is a method for evaluating the security of a com-\nputer system or network by simulating an attack by a \nmalicious hacker. The process involves an active analy-\nsis of the system for any weaknesses, technical flaws, \nor vulnerabilities. This analysis is carried out from the \nposition of a potential attacker and will involve active \nexploitation of security vulnerabilities. Any security \nissues that are found will be presented to the system \nowner, together with an assessment of their impact \nand often with a proposal for mitigation or a technical \nsolution. \n A vulnerability assessment is what most compa-\nnies generally do, since the systems they are testing \nare live production systems and can’t afford to be dis-\nrupted by active exploits that might crash the system. \nVulnerability assessment is the process of identifying \nand quantifying vulnerabilities in a system. The sys-\ntem being studied could be a physical facility such as a \nnuclear power plant, a computer system, or a larger sys-\ntem (for example, the communications infrastructure or \nwater infrastructure of a region). Vulnerability assess-\nment has many things in common with risk assessment. \nAssessments are typically performed according to the \nfollowing steps: \n 1. Cataloging assets and capabilities (resources) in a \nsystem \n 2. Assigning quantifiable value and importance to the \nresources \n 3. Identifying the vulnerabilities or potential threats to \neach resource \n 4. 
Mitigating or eliminating the most serious vulner-\nabilities for the most valuable resources \n This is generally what a security company is con-\ntracted to do, from a technical perspective — not to actu-\nally penetrate the systems but to assess and document \nthe possible vulnerabilities and to recommend mitigation \n FIGURE 23.1 One critical vulnerability affects the entire network. \n 3 www.pcisecuritystandards.org . \n" }, { "page_number": 418, "text": "Chapter | 23 What Is Vulnerability Assessment?\n385\nmeasures and improvements. Vulnerability detection, \nmitigation, notification, and remediation are linked, as \nshown in Figure 23.2 . 4 \n 5. VULNERABILITY ASSESSMENT GOAL \n The theoretical goal of network scanning is elevated \nsecurity on all systems or establishing a networkwide \nminimal operation standard. Figure 23.3 shows how use-\nfulness is related to ubiquity. \n ● HIPS: Host-Based Intrusion Prevention System \n ● NIDS: Network-Based Intrusion Detection System \n ● AV: Antivirus \n ● NIPS: Network-Based Intrusion Prevention System \n 6. MAPPING THE NETWORK \n Before we start scanning the network we have to find \nout what machines are alive on it. Most of the scanners \nhave a built-in network mapping tool, usually the Nmap \nnetwork mapping tool running behind the scenes. \nThe Nmap Security Scanner is a free and open-source \nutility used by millions of people for network discov-\nery, administration, inventory, and security auditing. \nNmap uses raw IP packets in novel ways to determine \nwhat hosts are available on a network, what services \n(application name and version) those hosts are offer-\ning, what operating systems they are running, what type \nof packet filters or firewalls are in use, and more. \nNmap was named “ Information Security Product of \nthe Year ” by Linux Journal and Info World . It was also \nused by hackers in the movies Matrix Reloaded, Die \nHard 4 , and Bourne Ultimatum . Nmap runs on all major \ncomputer operating systems, plus the Amiga. Nmap \nhas a traditional command-line interface, as shown in \n Figure 23.4 . \n Zenmap is the official Nmap Security Scanner GUI \n(see Figure 23.5 ). It is a multiplatform (Linux, Windows, \nMac OS X, BSD, etc.), free and open-source application \nthat aims to make Nmap easy for beginners to use while \nproviding advanced features for experienced Nmap \nusers. Frequently used scans can be saved as profiles \nto make them easy to run repeatedly. A command crea-\ntor allows interactive creation of Nmap command lines. \nScan results can be saved and viewed later. Saved scan \nresults can be compared with one another to see how \nthey differ. The results of recent scans are stored in a \nsearchable database. \n Gordon Lyon (better known by his nickname, \nFyodor) released Nmap in 1997 and continues to coor-\ndinate its development. He also maintains the Insecure.\nOrg, Nmap.Org, SecLists.Org, and SecTools.Org secu-\nrity resource sites and has written seminal papers on \nOS detection and stealth port scanning. He is a found-\ning member of the Honeynet project and coauthored the \nbooks Know Your Enemy: Honeynets and Stealing the \nNetwork: How to Own a Continent . Gordon is President \nof Computer Professionals for Social Responsibility \n(CPSR), which has promoted free speech, security, and \nprivacy since 1981. 5 , 6 \n Some systems might be disconnected from the net-\nwork. Obviously, if the system is not connected to any \nnetwork at all it will have a lower priority for scanning. 
\nHowever, it shouldn’t be left in the dark and not be scanned \nat all, because there might be other nonnetwork related \nflaws, for example, a Firewire exploit that can be used \nto unlock the Windows XP SP2 system. Exploits work \nlike this: An attacker approaches a locked Windows XP \nDetection\nIsolation\nRemediation\nNotification\n FIGURE 23.2 Vulnerability mitigation cycle. \nEmergent\nDominant\nDormant\nEsoteric\nAV\nNIDS\nNIPS\nHIPS\nFirewalls\nUbiquity\nUsefulness\n FIGURE 23.3 Usefulness/ubiquity relationship. \n 4 www.darknet.org.uk/2006/04/penetration-testing-vs-vulnerability-\nassessment/ . \n 5 http://en.wikipedia.org/wiki/Gordon_Lyon . \n 6 www.nmap.org . \n" }, { "page_number": 419, "text": "PART | II Managing Information Security\n386\nSP2 station, plugs a Firewire cable into it, and uses special \ncommands to unlock the locked machine. This technique \nis possible because Firewire has direct access to RAM. \nThe system will accept any password and unlock the \ncomputer. 7 , 8 \n 7. SELECTING THE RIGHT SCANNERS \n Scanners alone don’t solve the problem; using scan-\nners well helps solve part of the problem. Start with one \nscanner but consider more than one. It is a good practice \nto use more than one scanner. This way you can com-\npare results from a couple of them. Some scanners are \nmore focused on particular services. A typical scanner \narchitecture is shown in Figure 23.6 . \n For example, Nessus is an outstanding general-\npurpose scanner, but Web application-oriented scanners \nsuch as HP Web Inspect or Hailstorm will do a much \nbetter job of scanning a Web server. In an ideal situ-\nation, scanners would not be needed because everyone \nwould maintain patches and tested hosts, routers, gate-\nways, workstations, and servers. However, the real world \nis different; we are humans and we tend to forget to \ninstall updates, patch systems, and/or configure systems \nproperly. Malicious code will always find a way into \nyour network! If a system is connected to the network, \nthat means there is a possibility that this system will \nbe infected at some time in the future. The chances \nmight be higher or lower depending on the system’s \nmaintenance level. The system will never be secure \n100%. There is no such thing as 100% security; if \nwell maintained, it might be 99.9999999999% secure, \nbut never 100%. There is a joke that says, if you want \nto make a computer secure, you have to disconnect it \nfrom the network and power outlet and then put it into \na safe and lock it. This system will be almost 100% \n FIGURE 23.4 Nmap command-line interface. \n FIGURE 23.5 Zenmap graphical user interface. \n 7 http://en.wikipedia.org/wiki/FireWire . \n 8 http://storm.net.nz/projects/16 . \n" }, { "page_number": 420, "text": "Chapter | 23 What Is Vulnerability Assessment?\n387\nsecure (although not useful), because social engineering \ncons may call your employees and ask them to remove \nthat system from the safe and plug it back into the \nnetwork. 9 , 10 , 11 \n 8. CENTRAL SCANS VERSUS LOCAL \nSCANS \n The question arises: Should we scan locally or cen-\ntrally? Should we scan the whole network at once, or \nshould we scan the network based on subdomains and \nvirtual LANs? Table 23.1 shows pros and cons of each \nmethod. \n With localized scanning with central scanning verifi-\ncation, central scanning becomes a verification audit. The \nquestion again arises, should we scan locally or centrally? \nThe answer is both. 
Central scans give overall visibility \ninto the network. Local scans may have higher visibility \ninto the local network. Centrally driven scans serve as the \nbaseline. Locally driven scans are key to vulnerability \nreduction. Scanning tools should support both method-\nologies. Scan managers should be empowered to police \ntheir own area and enforce policy. So what will hackers \ntarget? Script kiddies will target any easily exploitable \nsystem; dedicated hackers will target some particular net-\nwork/organization (see sidebar, “ Who Is the Target? ” ).\nVulnerability Database\nUser Configuration\nConsole\nScanning Engine\nCurrent Active Scan\nKnowledge Base\nResults Repository\nand Report\nGenerating\nTarget 1\nTarget 2\nTarget 3\nTarget 4\nTarget 5\n FIGURE 23.6 Typical scanner architecture. \n 9 www.scmagazineus.com/HP-WebInspect-77/Review/2365/ . \n 10 www.cenzic.com/products_services/products_overview.php. \n 11 http://en.wikipedia.org/wiki/Social_engineering_(computer_security) . \n TABLE 23.1 Pros and Cons of Central Scans and \nLocal Scans \n \n Centrally Controlled and \nAccessed Scanning \n Decentralized \nScanning \n Pros \n Easy to maintain \n Scan managers can \nscan at will \n Cons \n Slow; most scans must be \nqueued \n Patching of the \nscanner is often \noverlooked \n" }, { "page_number": 421, "text": "PART | II Managing Information Security\n388\n Who Is The Target? \n “ We are not a target. ” How many times have you heard \nthis statement? Many people think that they don’t have \nanything to hide, they don’t have secrets, and thus \nnobody will hack them. Hackers are not only after \nsecrets but after resources as well. They may want to use \nyour machine for hosting files, use it as a source to attack \nother systems, or just try some new exploits against it. \n If you don’t have any juicy information, you might not \nbe a target for a skilled hacker, but you will always be a \ntarget for script kiddies. In hacker culture terms, script kid-\ndie describes an inexperienced hacker who is using avail-\nable tools, usually with a GUI, to do any malicious activity. \nScript kiddies lack technical expertise to write or create any \ntools by themselves. They try to infect or deface as many \nsystems as they can with the least possible effort. If they \ncan’t hack your system/site in a couple of minutes, usu-\nally they move to an easier target. It’s rather different with \nskilled hackers, who seek financial or other benefits from \nhacking the system. They spend a lot of time just exploring \nthe system and collecting as much information as possible \nbefore trying to hack it. The proper way of hacking is data \nmining and writing scripts that will automate the whole \nprocess, thus making it fast and hard to respond to. \n 9. DEFENSE IN DEPTH STRATEGY \n Defense in depth is an information assurance (IA) strategy \nin which multiple layers of defense are placed throughout an \nIT system. Defense in depth addresses security vulnerabili-\nties in personnel, technology, and operations for the duration \nof the system’s life cycle. The idea behind this approach is \nto defend a system against any particular attack using sev-\neral varying methods. It is a layering tactic, conceived by \nthe National Security Agency (NSA) as a comprehensive \napproach to information and electronic security. Defense in \ndepth was originally a military strategy that seeks to delay, \nrather than prevent, the advance of an attacker by yielding \nspace in order to buy time. 
The placement of protection \nmechanisms, procedures, and policies is intended to increase \nthe dependability of an IT system where multiple layers of \ndefense prevent espionage and direct attacks against critical \nsystems. In terms of computer network defense, defense-in-\ndepth measures should not only prevent security breaches, \nthey should give an organization time to detect and respond \nto an attack, thereby reducing and mitigating the impact of a \nbreach. Using more than one of the following layers consti-\ntutes defense in depth: \n ● Physical security (e.g., deadbolt locks) \n ● Authentication and password security \n ● Antivirus software (host based and network based) \n ● Firewalls (hardware or software) \n ● Demilitarized zones (DMZs) \n ● Intrusion detection systems (IDSs) \n ● Packet filters (deep packet inspection appliances and \nstateful firewalls) \n ● Routers and switches \n ● Proxy servers \n ● Virtual private networks (VPNs) \n ● Logging and auditing \n ● Biometrics \n ● Timed access control \n ● Proprietary software/hardware not available to the \npublic 12 \n 10. VULNERABILITY ASSESSMENT TOOLS \n There are many vulnerability assessment tools. The top \n10 tools according to www.sectools.org are listed here. \nEach tool is described by one or more attributes: \n ● Generally costs money; a free limited/demo/trial ver-\nsion may be available \n ● Works natively on Linux \n ● Works natively on OpenBSD, FreeBSD, Solaris, and/\nor other Unix-like systems \n ● Works natively on Apple Mac OS X \n ● Works natively on Microsoft Windows \n ● Features a command-line interface \n ● Offers a GUI (point-and-click) interface \n ● Source code available for inspection \n Nessus \n Nessus is a premier vulnerability assessment tool. There \nis a 2.x version that is open source and a 3.x version that \nis closed source. Unix-like implementations have CLI \nand Windows implementations have GUI only. Nessus \nwas a popular free and open-source vulnerability scanner \nuntil it closed the source code in 2005 and removed the \nfree “ registered feed ” version in 2008. A limited “ home \nfeed ” is still available, though it is only licensed for \nhome network use. Some people avoid paying by vio-\nlating the “ home feed ” license or by avoiding feeds \nentirely and using just the plug-ins included with each \nrelease. But for most users, the cost has increased from \nfree to $ 1200/year. Despite this, Nessus is still the best \nUnix vulnerability scanner available and among the best \n 12 www.nsa.gov/snac/support/defenseindepth.pdf . \n" }, { "page_number": 422, "text": "Chapter | 23 What Is Vulnerability Assessment?\n389\nto run on Windows. Nessus is constantly updated, with \nmore than 20,000 plug-ins. Key features include remote \nand local (authenticated) security checks, a client/server \narchitecture with a GTK graphical interface, and an \nembedded scripting language for writing your own plug-\nins or understanding the existing ones. \n GFI LANguard \n A commercial network security scanner for Windows, \nGFI LANguard scans IP networks to detect what \nmachines are running. Then it tries to discern the host \nOS and what applications are running. It also tries to \ncollect Windows machines ’ service pack level, missing \nsecurity patches, wireless access points, USB devices, \nopen shares, open ports, services/applications active \non the computer, key registry entries, weak passwords, \nusers and groups, and more. 
Scan results are saved to an \nHTML report, which can be customized and queried. It \nalso includes a patch manager that detects and installs \nmissing patches. A free trial version is available, though \nit only works for up to 30 days. \n Retina \n Commercial vulnerability assessment scanner by eEye. \nLike Nessus, Retina’s function is to scan all the hosts on \na network and report on any vulnerabilities found. It was \nwritten by eEye, well known for security research. \n Core Impact \n An automated, comprehensive penetration testing prod-\nuct, Core Impact isn’t cheap (be prepared to spend tens \nof thousands of dollars), but it is widely considered to be \nthe most powerful exploitation tool available. It sports a \nlarge, regularly updated database of professional exploits \nand can do neat tricks, such as exploiting one machine \nand then establishing an encrypted tunnel through that \nmachine to reach and exploit other boxes. If you can’t \nafford Impact, take a look at the cheaper Canvas or the \nexcellent and free Metasploit Framework. Your best bet \nis to use all three. \n ISS Internet Scanner \n Application-level \nvulnerability \nassessment \nInternet \nScanner started off in 1992 as a tiny open-source scanner \nby Christopher Klaus. Now he has grown ISS into a bil-\nlion-dollar company with myriad security products. \n X-Scan \n A general scanner for scanning network vulnerabilities, \nX-Scan is a multithreaded, plug-in-supported vulnerabil-\nity scanner. X-Scan includes many features, including \nfull NASL support, detecting service types, remote OS \ntype/version detection, weak user/password pairs, and \nmore. You may be able to find newer versions available \nat the X-Scan site if you can deal with most of the page \nbeing written in Chinese. \n SARA \n Security Auditor’s Research Assistant SARA is a vulnera-\nbility assessment tool that was derived from the infamous \nSATAN scanner. Updates are released twice a month and \nthe company tries to leverage other software created by \nthe open-source community (such as Nmap and Samba). \n QualysGuard \n A Web-based vulnerability scanner delivered as a serv-\nice over the Web, QualysGuard eliminates the burden of \ndeploying, maintaining, and updating vulnerability man-\nagement software or implementing ad hoc security appli-\ncations. Clients securely access QualysGuard through an \neasy-to-use Web interface. QualysGuard features 5000 \u0002 \nunique vulnerability checks, an inference-based scanning \nengine, and automated daily updates to the QualysGuard \nvulnerability knowledge base. \n SAINT \n Security Administrator’s Integrated Network Tool \n(SAINT) is another commercial vulnerability assess-\nment tool (like Nessus, ISS Internet Scanner, or Retina). \nIt runs on Unix and used to be free and open source but \nis now a commercial product. \n MBSA \n Microsoft Baseline Security Analyzer (MBSA) is an easy-\nto-use tool designed for the IT professional that helps \nsmall and medium-sized businesses determine their secu-\nrity state in accordance with Microsoft security recommen-\ndations, and offers specific remediation guidance. Built on \nthe Windows Update Agent and Microsoft Update infra-\nstructure, MBSA ensures consistency with other Microsoft \nmanagement products, including Microsoft Update (MU), \nWindows Server Update Services (WSUS), Systems \n" }, { "page_number": 423, "text": "PART | II Managing Information Security\n390\nManagement Server (SMS), and Microsoft Operations \nManager (MOM). 
Apparently MBSA, on average, scans \nover three million computers each week. 13 \n 11. SCANNER PERFORMANCE \n A vulnerability scanner can use a lot of network band-\nwidth, so you want the scanning process to complete as \nquickly as possible. Of course, the more vulnerabilities \nin the database and the more comprehensive the scan, the \nlonger it will take, so this can be a tradeoff. One way to \nincrease performance is through the use of multiple scan-\nners on the enterprise network, which can report back to \none system that aggregates the results. 12 \n 12. SCAN VERIFICATION \n The best practice is to use few scanners during your vul-\nnerability assessment, then use more than one scanning \ntool to find more vulnerabilities. Scan your networks with \ndifferent scanners from different vendors and compare \nthe results. Also consider penetration testing, that is, hire \nwhite/gray-hat hackers to hack your own systems. 14 , 15 \n 13. SCANNING CORNERSTONES \n All orphaned systems should be treated as hostile. \nSomething in your organization that is not maintained or \ntouched poses the largest threat. For example, say that you \nhave a Web server and you inspect every byte of DHTML \nand make sure it has no flaws, but you totally forget to \nmaintain the SMTP service with open relay that it is also \nrunning. Attackers might not be able to deface or harm \nyour Web page, but they will be using the SMTP server \nto send out spam emails via your server. As a result, your \ncompany’s IP ranges will be put into spammer lists such \nas spamhaus and spamcop. 16 , 17 \n 14. NETWORK SCANNING \nCOUNTERMEASURES \n A company wants to scan its own networks, but at the \nsame time the company should take countermeasures to \nprotect itself from being scanned by hackers. Here is a \nchecklist of countermeasures to use when you’re consid-\nering technical modifications to networks and filtering \ndevices to reduce the effectiveness of network scanning \nand probing undertaken by attackers: \n ● Filter inbound Internet Control Message Protocol \n(ICMP) message types at border routers and fire-\nwalls. This forces attackers to use full-blown TCP \nport scans against all your IP addresses to map your \nnetwork correctly. \n ● Filter all outbound ICMP type 3 unreachable messages \nat border routers and firewalls to prevent UDP port \nscanning and firewalking from being effective. \n ● Consider configuring Internet firewalls so that they \ncan identify port scans and throttle the connections \naccordingly. You can configure commercial firewall \nappliances (such as those from Check Point, \nNetScreen, and WatchGuard) to prevent fast port \nscans and SYN floods being launched against your \nnetworks. On the open-source side, many tools such as \nport sentry can identify port scans and drop all packets \nfrom the source IP address for a given period of time. \n ● Assess the way that your network firewall and IDS \ndevices handle fragmented IP packets by using \nfragtest and fragroute when performing scanning and \nprobing exercises. Some devices crash or fail under \nconditions in which high volumes of fragmented \npackets are being processed. \n ● Ensure that your routing and filtering mechanisms \n(both firewalls and routers) can’t be bypassed using \nspecific source ports or source-routing techniques. \n ● If you house publicly accessible FTP services, ensure \nthat your firewalls aren’t vulnerable to stateful \ncircumvention attacks relating to malformed PORT \nand PASV commands. 
\n ● If a commercial firewall is in use, ensure the \nfollowing: \n \n ● The latest service pack is installed. \n \n ● Antispoofing rules have been correctly defined \nso that the device doesn’t accept packets with \nprivate spoofed source addresses on its external \ninterfaces. \n \n ● Fastmode services aren’t used in Check Point \nFirewall-1 environments. \n ● Investigate using inbound proxy servers in your \nenvironment if you require a high level of security. \nA proxy server will not forward fragmented or \nmalformed packets, so it isn’t possible to launch FIN \nscanning or other stealth methods. \n ● Be aware of your own network configuration and its \npublicly accessible ports by launching TCP and UDP \n 13 www.sectools.org . \n 14 http://en.wikipedia.org/wiki/White_hat . \n 15 http://en.wikipedia.org/wiki/Grey_hat . \n 16 www.spamhaus.org \n 17 www.spamcop.net . \n" }, { "page_number": 424, "text": "Chapter | 23 What Is Vulnerability Assessment?\n391\nport scans along with ICMP probes against your own \nIP address space. It is surprising how many large \ncompanies still don’t properly undertake even simple \nport-scanning exercises. 18 \n 15. VULNERABILITY DISCLOSURE DATE \n The time of disclosure of vulnerability is defined differ-\nently in the security community and industry. It is most \ncommonly referred to as “ a kind of public disclosure of \nsecurity information by a certain party. ” Usually vul-\nnerability information is discussed on a mailing list or \npublished on a security Web site and results in a security \nadvisory afterward. \n The time of disclosure is the first date that security \nvulnerability is described on a channel where the dis-\nclosed information on the vulnerability has to fulfill the \nfollowing requirements: \n ● The information is freely available to the public. \n ● The vulnerability information is published by a \ntrusted and independent channel/source. \n ● The vulnerability has undergone analysis by experts \nsuch that risk rating information is included upon \ndisclosure. \n The method of disclosing vulnerabilities is a topic of \ndebate in the computer security community. Some advo-\ncate immediate full disclosure of information about vul-\nnerabilities once they are discovered. Others argue for \nlimiting disclosure to the users placed at greatest risk and \nonly releasing full details after a delay, if ever. Such delays \nmay allow those notified to fix the problem by developing \nand applying patches, but they can also increase the risk to \nthose not privy to full details. This debate has a long his-\ntory in security; see full disclosure and security through \nobscurity. More recently a new form of commercial vul-\nnerability disclosure has taken shape, as some commercial \nsecurity companies offer money for exclusive disclosures \nof zero-day vulnerabilities. Those offers provide a legiti-\nmate market for the purchase and sale of vulnerability \ninformation from the security community. \n From the security perspective, a free and public dis-\nclosure is successful only if the affected parties get the \nrelevant information prior to potential hackers; if they did \nnot, the hackers could take immediate advantage of the \nrevealed exploit. With security through obscurity, the same \nrule applies but this time rests on the hackers finding the \nvulnerability themselves, as opposed to being given the \ninformation from another source. 
The disadvantage here is \nthat fewer people have full knowledge of the vulnerability \nand can aid in finding similar or related scenarios. \n It should be unbiased to enable a fair dissemination \nof security-critical information. Most often a channel is \nconsidered trusted when it is a widely accepted source \nof security information in the industry (such as CERT, \nSecurityFocus, Secunia, and FrSIRT). Analysis and risk \nrating ensure the quality of the disclosed information. \nThe analysis must include enough details to allow a con-\ncerned user of the software to assess his individual risk \nor take immediate action to protect his assets. \n Find Security Holes Before They Become \nProblems \n Vulnerabilities can be classified into two major categories: \n ● Those related to errors made by programmers in \nwriting the code for the software \n ● Those related to misconfigurations of the software’s \nsettings that leave systems less secure than they \ncould be (improperly secured accounts, running of \nunneeded services, etc.) \n Vulnerability scanners can identify both types. \nVulnerability assessment tools have been around for many \nyears. They’ve been used by network administrators and \nmisused by hackers to discover exploitable vulnerabili-\nties in systems and networks of all kinds. One of the early \nwell-known Unix scanners, System Administrator Tool \nfor Analyzing Networks (SATAN), later morphed into \nSAINT (Security Administrator’s Integrated Network \nTool). These names illustrate the disparate dual nature of \nthe purposes to which such tools can be put. \n In the hands of a would-be intruder, vulnerability scan-\nners become a means of finding victims and determining \nthose victims ’ weak points, like an undercover intelligence \noperative who infiltrates the opposition’s supposedly secure \nlocation and gathers information that can be used to launch \na full-scale attack. However, in the hands of those who are \ncharged with protecting their networks, these scanners are \na vital proactive defense mechanism that allows you to \nsee your systems through the eyes of the enemy and take \nsteps to lock the doors, board up the windows, and plug up \nseldom used passageways through which the “ bad guys ” \ncould enter, before they get a chance. \n In fact, the first scanners were designed as hacking \ntools, but this is a case in which the bad guys ’ weapons \nhave been appropriated and used to defend against them. \nBy “ fighting fire with fire, ” administrators gain a much-\nneeded advantage. For the first time, they are able to battle \n 18 www.trustmatta.com/downloads/pdf/Matta_IP_Network_Scanning.pdf . \n" }, { "page_number": 425, "text": "PART | II Managing Information Security\n392\nintruders proactively. 12 Once the vulnerabilities are found, \nwe have to remove them (see sidebar, “ Identifying and \nRemoving Vulnerabilities ” ).\n Identifying and Removing Vulnerabilities \n Many software tools can aid in the discovery (and some-\ntimes removal) of vulnerabilities in a computer system. \nThough these tools can provide an auditor with a good \noverview of possible vulnerabilities present, they can-\nnot replace human judgment. Relying solely on scanners \nwill yield false positives and a limited-scope view of the \nproblems present in the system. \n Vulnerabilities have been found in every major oper-\nating system including Windows, Mac OS, various forms \nof Unix and Linux, OpenVMS, and others. 
The only \nway to reduce the chance of a vulnerability being used \nagainst a system is through constant vigilance, includ-\ning careful system maintenance (e.g., applying software \npatches), best practices in deployment (e.g., the use of \nfirewalls and access controls), and auditing during devel-\nopment and throughout the deployment life cycle. \n 16. PROACTIVE SECURITY VERSUS \nREACTIVE SECURITY \n There are two basic methods of dealing with security \nbreaches: \n ● The reactive method is passive; when a breach \noccurs, you respond to it, doing damage control at \nthe same time you track down how the intruder or \nattacker got in and cut off that means of access so it \nwon’t happen again. \n ● The proactive method is active; instead of waiting for \nthe hackers to show you where you’re vulnerable, you \nput on your own hacker hat in relation to your own \nnetwork and set out to find the vulnerabilities your-\nself, before anyone else discovers and exploits them. \n The best security strategy employs both reactive and \nproactive mechanisms. Intrusion detection systems (IDSs), \nfor example, are reactive in that they detect suspicious net-\nwork activity so that you can respond to it appropriately. \n Vulnerability assessment scanning is a proactive tool \nthat gives you the power to anticipate vulnerabilities and \nkeep out attackers instead of spending much more time \nand money responding to attack after attack. The goal of \nproactive security is to prevent attacks before they hap-\npen, thus decreasing the load on reactive mechanisms. \nBeing proactive is more cost effective and usually easier; \nthe difference can be illustrated by contrasting the time \nand cost required to clean up after vandals break into \nyour home or office with the effort and money required \nto simply install better locks that will keep them out. \n Despite the initial outlay for vulnerability assessment \nscanners and the time spent administering them, poten-\ntial return on investment is very high in the form of time \nand money saved when attacks are prevented. 12 \n 17. VULNERABILITY CAUSES \n The following are vulnerability causes: \n ● Password management flaws \n ● Fundamental operating system design flaws \n ● Software bugs \n ● Unchecked user input \n Password Management Flaws \n The computer user uses weak passwords that could be dis-\ncovered by brute force. The computer user stores the pass-\nword on the computer where a program can access it. Users \nreuse passwords between many programs and Web sites. \n Fundamental Operating System Design \nFlaws \n The operating system designer chooses to enforce subop-\ntimal policies on user/program management. For example, \noperating systems with policies such as default permit grant \nevery program and every user full access to the entire com-\nputer. This operating system flaw allows viruses and mal-\nware to execute commands on behalf of the administrator. \n Software Bugs \n The programmer leaves an exploitable bug in a software \nprogram. The software bug may allow an attacker to mis-\nuse an application through (for example) bypassing access \ncontrol checks or executing commands on the system \nhosting the application. Also the programmer’s failure to \ncheck the size of data buffers, which can then be over-\nflowed, can cause corruption of the stack or heap areas of \nmemory (including causing the computer to execute code \nprovided by the attacker). \n Unchecked User Input \n The program assumes that all user input is safe. 
Programs \nthat do not check user input can allow unintended direct \n" }, { "page_number": 426, "text": "Chapter | 23 What Is Vulnerability Assessment?\n393\nexecution of commands or SQL statements (known as \nBuffer overflows, SQL injection, or other nonvalidated \ninputs). The biggest impact on the organization would be \nif vulnerabilities are found in core devices on the network \n(routers, firewalls, etc.), as shown in Figure 23.7 . 19 \n 18. DIY VULNERABILITY ASSESSMENT \n If you perform credit-card transactions online, you’re \nmost likely PCI compliant or working on getting there. \nIn either case, it is much better to resolve compliancy \nissues on an ongoing basis rather than stare at a truck-\nload of problems as the auditor walks into your office. \nThough writing and reviewing policies and procedures \nis a big part of reaching your goal, being aware of the \nvulnerabilities in your environment and understanding \nhow to remediate them are just as important. For most \nsmall businesses, vulnerability assessments sound like a \nlot of work and time that you just don’t have. What if \nyou could have a complete understanding of all vulner-\nabilities in your network and a fairly basic resolution \nfor each, outlined in a single report within a couple of \nhours? Sound good? What if I also told you the tool \nthat can make this happen is currently free and doesn’t \nrequire an IT genius to run it? Sounding better? \n It isn’t very pretty and it’s not always right, but it can \ngive you some valuable insight into your environment. \nTenable’s Nessus vulnerability scanner is one of the most \nwidely used tools in professional vulnerability assess-\nments today. In its default configuration, all you need to \ndo is provide the tool with a range of IP addresses and \nclick Go. It will then compare its database of known vul-\nnerabilities against the responses it receives from your \nnetwork devices, gathering as much information as pos-\nsible without killing your network or servers, usually. It \ndoes have some very dangerous plug-ins that are disa-\nbled by default, and you can throttle down the amount \nof bandwidth it uses to keep the network noise levels to \na minimum. The best part about Nessus is that it’s, very \nwell documented, and used by over 75,000 organizations \nworldwide, so you know you’re dealing with trustworthy \nproduct. I urge you to take a look Tenable’s enterprise \nofferings as well. You might just be surprised at how \neasy it is to perform a basic do-it-yourself vulnerability \nassessment! Related links: \n ● Tenable’s Nessus: www.nessus.org \n ● Tenable Network Security: www.tenablesecurity.com \n 19. CONCLUSION \n Network- and host-based vulnerability assessment tools \nare extremely useful in determining what vulnerabilities \nmight exist on a particular network. However, these tools \nare not useful if the vulnerability knowledge base is not \nkept current. Also, these tools can only take a snapshot \nof the systems at a particular point in time. Systems \nadministrators will continually update code on the target \nsystems and will continuously add/delete services and \nconfigure the system. All found vulnerabilities should be \npromptly patched (especially critical ones). 
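 As a closing illustration of the do-it-yourself idea above (point a tool at a range of addresses, then review what comes back), here is a rough Python sketch that drives Nmap rather than Nessus. The address range and the helper name are assumptions made for the example, the Nmap binary must be installed, and a sweep like this is something to rerun on a schedule, since any scan is only a snapshot in time.

import subprocess
import xml.etree.ElementTree as ET

def quick_sweep(cidr="192.168.1.0/24"):
    """Run a basic Nmap service/version scan and summarize open ports per host."""
    xml_out = subprocess.run(
        ["nmap", "-sV", "-oX", "-", cidr],  # -sV: service/version detection; -oX -: XML to stdout
        capture_output=True, text=True, check=True,
    ).stdout
    for host in ET.fromstring(xml_out).iter("host"):
        addr = host.find("address").get("addr")
        open_ports = [
            f'{p.get("portid")}/{p.get("protocol")}'
            for p in host.iter("port")
            if p.find("state").get("state") == "open"
        ]
        print(addr, "open:", ", ".join(open_ports) or "none")

if __name__ == "__main__":
    quick_sweep()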
\nRouters / Firewalls\nNetwork\nWindows / Linux / OS X\nOperating System\nOpen Source / Commercial\nApplications\nOracle / MySQL / DB2\nDatabase\nApache / Microsoft IIS\nWeb Server\nOpen Source / Commercial\nThird-Party Web Applications\nTechnical Vulnerabilities\nWeb Applications\nBusiness Logic Flaws\nCustom\n FIGURE 23.7 Vulnerabilities with the biggest impact. \n 19 www.webappsec.org . \n" }, { "page_number": 427, "text": "This page intentionally left blank\n" }, { "page_number": 428, "text": " Encryption Technology \n Part III \n CHAPTER 24 Data Encryption \n Dr. Bhushan Kapoor and Dr. Pramod Pandya \n CHAPTER 25 Satellite Encryption \n Daniel S. Soper \n CHAPTER 26 Public Key Infrastructure \n Terence Spies \n CHAPTER 27 Instant-Messaging Security \n Samuel J. J. Curry \n" }, { "page_number": 429, "text": "This page intentionally left blank\n" }, { "page_number": 430, "text": "397\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Data Encryption \n Dr. Bhushan Kapoor \n California State University \n Dr. Pramod Pandya \n California State University \n Chapter 24 \n This chapter is about security and the role played by \ncryptographic technology in data security. Securing data \nwhile it is in storage or in transition from an unauthor-\nized access is a critical function of information technol-\nogy. All forms of ecommerce activities such as online \ncredit card processing, purchasing stocks, and banking \ndata processing would, if compromised, lead to busi-\nnesses losing billions of dollars in revenues, as well as \ncustomer confidence lost in ecommerce. \n The Internet evolved over the years as a means \nfor users to access information and exchange emails. \nLater, once the bandwidth became available, businesses \nexploited the Internet’s popularity to reach customers \nonline. In the past few years it has been reported that \norganizations that store and maintain customers ’ private \nand confidential records were compromised on many \noccasions by hackers breaking into the data networks and \nstealing the records from storage media. More recently \nwe have come across headline-grabbing security breaches \nregarding laptops with sensitive data being lost or stolen, \nand most recently the Feds have encrypted around 1 mil-\nlion laptops with encryption software loaded to secure \ndata such as names and Social Security numbers. \n Data security is not limited to wired networks but is \nequally critical for wireless communications such as in \nWi-Fi and cellular. A very recent case was highlighted \nwhen the Indian government requested to Research In \nMotion (RIM) to share the encryption algorithm used \nin the BlackBerry cellular device. Of course, RIM \nrefused to share the encryption algorithm. This should \ndemonstrate that encryption is an important technology \nin all forms of communication. It is hard to accept that \nsecured systems could ever remain secured, since they \nare designed by us and therefore must be breakable by \none of us, given enough time. Every human-engineered \nsystem must have a flaw, and it is only a matter of time \nbefore someone finds it, thus demanding new innova-\ntions by exploring applications from algebraic structures \nsuch as groups and rings, elliptic curves, and quantum \nphysics. \n Over the past 20 years we have seen classical cryptog-\nraphy evolve to quantum cryptography, a branch of quan-\ntum information theory. 
Quantum cryptography is based \non the framework of quantum physics, and it is meant to \nsolve the problem of key distribution, which is an essen-\ntial component of cryptography that enables us to secure \ndata. The key allows the data to be coded so that to \ndecode it, one would need to know the key that was used \nto code it. This coding of the given data using a key is \nknown as encryption , and decoding of the encrypted data, \nthe reverse step-by-step process, is known as decryption . \nAt this stage we point out that the encryption algo-\nrithm comes in two flavors: symmetric and asymmetric, \nof which we will get into the details later on. Securing \ndata requires a three-pronged approach: detection, pre-\nvention, and response. Data normally resides on storage \nmedia that are accessible over a network. This network \nis designed with a perimeter around it, such that a single \naccess point provides a route for inbound and outbound \ntraffic through a router supplemented with a firewall. \n Data encryption prevents data from being exposed to \nunauthorized access and makes it unusable. Detection \nenables us to monitor the activities of network users and \nprovides a means to differentiate levels of activities and \noffers a possible clue to network violations. Response is \nequally important, since a network violation must not be \nallowed to be repeated. Thus the three-pronged approach \nis evolutionary, and therefore systems analysis and \ndesign principles must be taken into account when we \ndesign a secured data network. \n" }, { "page_number": 431, "text": "PART | III Encryption Technology\n398\n 1. NEED FOR CRYPTOGRAPHY \n Data communication normally takes place over an unse-\ncured channel, as is the case when the Internet provides \nthe pathways for the flow of data. In such a case the cryp-\ntographic protocols would enable secured communications \nby addressing the following. \n Authentication \n Alice sends a message to Bob. How can Bob verify that \nthe message originated from Alice and not from Eve pre-\ntending to be Alice? Authentication is critical if Bob is to \nbelieve the message — for example, if the bank is trying \nto verify your Social Security or account number. \n Confidentiality \n Alice sends a message to Bob. How can Bob be sure that \nthe message was not read by Eve? For example, personal \ncommunications need to be maintained as confidential. \n Integrity \n Alice sends a message to Bob. How does Bob verify \nthat Eve did not intercept the message and change its \ncontents? \n Nonrepudiation \n Alice could send a message to Bob and later deny that \nshe ever sent a message to Bob. In such a case, how could \nBob ever determine who actually sent him the message? \n 2. MATHEMATICAL PRELUDE \nTO CRYPTOGRAPHY \n We will continue to describe Alice and Bob as two par-\nties exchanging messages and Eve as the eavesdropper. \nAlice sends either a character string or a binary string \nthat constitutes her message to Bob. In mathematical \nterms we have the domain of the message. The message \nin question needs to be secured from the eavesdropper \nEve — hence it needs to be encrypted. \n Mapping or Function \n The encryption of the message can be defined as map-\nping the message from the domain to its range such that \nthe inverse mapping should recover the original message. \nThis mapping is a mathematical construct known as the \n function . 
\n So we have a domain, and the range of the func-\ntion is defined such that the elements of the domain will \nalways map to the range of the function, never outside \nit. If f represents the function, and the message m \u0002 the \ndomain, then: \n f(m)\nM\nthe range\n\u0003\n∈\n \n This function can represent, for example, swapping \n(shifting by k places) the characters positions in the mes-\nsage as defined by the function: \n f(m, k)\nM\nthe range\n\u0003\n∈\n \n The inverse of this function f must recover the origi-\nnal message, in which case the function is invertible and \none-to-one defined. If we were to apply two functions \nsuch as f followed by g , the composite function (g \f f) \nmust be defined and furthermore invertible and one-to-\none to recover the original message: \n (g f)(m)\ng ( f(m) )\n°\n\u0003\n \n We will later see that this function is an algorithm \nthat tells the user in a finite number of ways to disguise \n(encrypt) the given message. The inverse function, if \nit does exist, would enable us to recover the original \nmessage, which is known as the decryption. \n Probability \n Information security is the goal of the secured data \nencryption; hence if the encrypted data is truly ran-\ndomly distributed in the message space (range), to the \nhacker the encrypted message is equally likely to be in \nany one of the states (encrypted). This would amount \nto maximum entropy, so one could reasonably ask as to \nthe likelihood of a hacker breaking the encrypted mes-\nsage, that is, what is the probability of an insecure event \ntaking place? This is conceptually similar to a system \nbeing in statistical equilibrium, when it could be equally \nlikely to be in any one of the states. This could lay the \nfoundations of cryptoanalysis in terms of how secure the \nencryption algorithm is, and can it be broken in polyno-\nmial time? \n Complexity \n Computational complexity deals with problems that \ncould be solved in polynomial time, for a given input. \n" }, { "page_number": 432, "text": "Chapter | 24 Data Encryption\n399\nIf a given encryption algorithm is known to be difficult \nto solve and may have a number of solutions, the hacker \nwould have a surmountable task to solve it. Therefore, \nsecured encryption can be examined within the scope of \ncomputational complexity to determine whether a solu-\ntion exists in polynomial time. There is a class of prob-\nlems that have solutions in polynomial time for a given \ninput, designated as P . By contrast, NP is the set of all \nproblems that have solutions in polynomial time but \nthe correctness of the problem cannot be ascertained. \nTherefore, NP is a larger set containing the set P . This is \nuseful, for it leads us to NP -completeness, which reduces \nthe solvability of problems in class P to class NP . \n Consider a simple example — a set S \u0003 { 4, 7, 12, 1, \n10 } of five numbers. We want any three numbers to add \nto 23. Each of the numbers is either selected once only \nor not selected. The target is 23. Is there an algorithm for \nthe target 23? If there is one, do we have more than one \nsolution? Let’s explore whether we can add three num-\nbers to reach a target of 25. Is there a solution for a tar-\nget of 25? Does a solution exist, and can we investigate \nin polynomial time? We could extend this concept of \ncomputational complexity to crack encryption algorithm \nthat is public, but the key used to encrypt and decrypt the \nmessage is kept private. 
So, in essence the cryptoanalysis \ndeals with discovering the key. \n 3. CLASSICAL CRYPTOGRAPHY \n The conceptual foundation of cryptography was laid out \naround 3000 years ago in India and China. The earlier \nwork in cryptology was centered on messages that were \nexpressed using alphanumeric symbols; hence encryp-\ntion involved simple algorithms such as shifting charac-\nters within the string of the message in a defined manner, \nwhich is now known as shift cipher. We will also intro-\nduce the necessary mathematics of cryptography: integer \nand modular arithmetic, linear congruence, Euclidean \nand Extended Euclidean algorithms, Fermat’s theorem, \nand elliptic curves. We will specify useful notations in \ncontext. \n Take the set of integers: \n Z\n{.............,\n3, \n2, \n1, 0, 1, 2, 3, ................}\n\u0003\n\t\n\t\n\t\n \n For any integers a and n , we say that n divides a \nif the remainder is zero after the division, or else we \nwrite: \n a\nq\nn\nr\nq: quotient, r: remainder\n\u0003\n\u0002\n•\n \n The Euclidean Algorithm \n Given two positive integers, a and b , find the greatest \ncommon divisors of a and b . Let d be the greatest com-\nmon divisors ( gcd ) of a and b , then, \n d\ngcd(a,b)\n\u0003\n \n Use the following example: \n gcd(36,10) \n \u0003 \ngcd(10,6) \n \u0003 \ngcd(6,4) \n \u0003 gcd(4,2) \n \u0003 gcd(2,0) \u0003 2 \n Hence: \n gcd(\n, \n)\n36 10\n2\n\u0003 \n The Extended Euclidean Algorithm \n Let a and b be two positive integers, then \n d\ngcd(a, b)\nax\nby\n\u0003\n\u0003\n\u0002\n \n Use the following example: \n \ngcd(\n, \n)\ngcd(\n, \n)\ngcd(\n, \n)\ngcd(\n, \n)\ngcd(\n540 168\n168 36\n36 24\n24 12\n12\n\u0003\n\u0003\n\u0003\n\u0003\n,00\n12\n540\n3 168\n36\n36\n540\n3 168\n168\n4 36\n24\n24\n168\n4 36\n36\n) \u0003\n\u0003\n\u0002\n\u0003\n\t\n\u0003\n\u0002\n\u0003\n\t\n(\n)\n(\n)\n(\n)\n(\n)\n\u0003\n\u0002\n\u0003\n\t\n\u0003\n\t\n\t\n\u0002\n\u0003\n\t\n\u0002\n1 24\n12\n12\n36\n1 24\n12\n540\n3 168\n168\n4 36\n540\n4 168\n4\n(\n)\n(\n)\n(\n)\n(\n)\n(\n)\n(336\n540\n4 168\n4 540\n12 168\n5 540\n16 168\n)\n(\n)\n(\n)\n(\n)\n(\n)\n(\n)\n\u0003\n\t\n\u0002\n\t\n\u0003\n\t\n \n Therefore: \n x\n and y\n\u0003\n\u0003 \t\n5\n16 \n Hence: \n 12\n5 540\n16 168\n\u0003\n\t\n( )\n(\n)\n \n Modular Arithmetic \n For a given integer a , positive integer m , and the \nremainder r , \n r\na (mod m)\n\u0003\n \n Consider examples: \n \n2\n27\n5\n10\n18\n14\n\u0003\n\u0003 \t\n mod \n mod \n \n" }, { "page_number": 433, "text": "PART | III Encryption Technology\n400\n { divide \t 18 by 14 leaves \t 4 as a remainder, then add \n14 to \t 4 so that ( \t 4 \u0002 14) \u0003 10 so the remainder is \nnonnegative } \n A set of residues is a set consisting of remainders \nobtained by dividing positive integers by a chosen posi-\ntive number m (modulus). \n Zm \u0003\n\u0003\n\t\na mod m)\n \n \n m\n(\n{ , ,\n,\n,.......,\n}\n0 1 2 3\n1 \n Take m \u0003 7, then \n Z7\n0 1 2 3 4 5 6\n\u0003 { , ,\n,\n,\n,\n,\n}\n \n \n \n \n \n \n Congruence \n In arithmetic we normally use the relational operator, \nequal ( \u0003 ), to express that the pair of numbers are equal \nto each other, which is a binary operation. In cryptogra-\nphy we use congruence to express that the residue is the \nsame for a set of integers divided by a positive integer. \nThis essentially groups the positive integers into equiva-\nlence classes. 
Let’s look at some examples: \n 2\n2\n10 2\n12\n10 2\n22\n10\n≡\n≡\n≡\n mod \n mod \n mod \n;\n;\n \n Hence we say that the set { 2, 12, 22 } are congruent \nmod 10. \n Residue Class \n A residue class is a set of integers congruent mod m , \nwhere m is a positive integer. \n Take m \u0003 7: \n \n[ ]\n,\n,\n,\n,\n,\n,\n[ ]\n0\n21\n14\n7 0 7 14 21\n1\n\u0003\n\t\n\t\n\t\n\u0003\n{..........,\n,\n \n \n \n \n ..........}\n{............,\n,\n \n \n \n ..........}\n{...\n\t\n\t\n\t\n\u0003\n20\n13\n6 1 8 15 22\n2\n,\n, ,\n,\n,\n,\n[ ]\n........,\n,\n,\n \n \n \n \n.}\n{......\n\t\n\t\n\t\n\u0003\n19\n12\n5 2 9 16 23\n3\n,\n,\n,\n,\n,..........\n[ ]\n......,\n,\n \n \n \n \n \n}\n{..........\n\t\n\t\n\t\n\u0003\n18\n11\n4 3 10 17 24\n4\n,\n,\n,\n,\n,\n, .......\n[ ]\n..,\n,\n \n \n \n \n}\n{.............,\n\t\n\t\n\t\n\u0003\n17\n10\n3 4 11 18 25\n5\n,\n,\n,\n,\n,\n,........\n[ ]\n\t\n\t\n\t\n\u0003\n\t\n16\n9\n2 5 12 19 26\n6\n15\n,\n,\n,\n,\n,\n,.........\n[ ]\n,\n \n \n \n \n}\n{.............,\n,,\n,\n,\n,\n,\n\t\n\t\n8\n1 6 13 20 27\n,\n \n \n \n \n,..........} \n Some more useful operations defined in Z m : \n \n(a\nb) mod m\n{(a mod m)\n(b mod m)} mod m\n(a\nb) mod m\n{(a mod\n\u0002\n\u0003\n\u0002\n\t\n\u0003\n m)\n(b mod m)} mod m\n(a * b ) mod m\n{(a mod m) * (b mod m)\n\t\n\u0003\n} mod m \n 10\n10 mod\nn\nn\n(mod x)\n \n \n mod m\n\u0003 〈\n〉\nx\n \n Inverses \n In everyday arithmetic, it is quite simple to find the \ninverse of a given integer if the binary operation is either \nadditive or multiplicative, but such is not the case with \nmodular arithmetic. \n We will begin with the additive inverse of two num-\nbers a, b \u0002 Z m \n (a\nb)\n (mod m)\n\u0002\n≡0\n \n That is, the additive inverse of a is b \u0003 ( m \t a ). \n Given \n a\n, and m\n\u0003\n\u0003\n4\n10 \n then: \n b\nm\na\n\u0003\n\t\n\u0003\n\t\n\u0003\n10\n4\n6 \n Verify: \n 4\n6\n0\n10\n\u0002\n≡\n (mod \n) \n Similarly, the multiplicative inverse of two integers \na, b \u0002 Z m if \n a * b\n (mod m)\n≡1\n \n a has a multiplicative inverse b \u0002 Z m if and only if \n gcd(m, a) \u0003 1 \n in which case ( m, a ) are relative prime. \n We remind the reader that a prime number is any \nnumber greater than 1 that is divisible (with a remain-\nder 0) only by itself and 1. For example, { 2, 3, 5, 7, 11, \n13, . . . } are prime numbers, and we quote the following \ntheorem for the reader. \n Fundamental Theorem of Arithmetic \n Each positive number is either a prime number or a com-\nposite number, in which case it can be expressed as a \nproduct of prime numbers. \n Let’s consider a set of integers mod 10 to find the \nmultiplicative inverse of the numbers in the set: \n \nZ10\n0 1 2 3 4 5 6 7 8 9\n1\n1\n10\n1\n3\n7\n10\n1\n9\n9\n\u0003\n\u0003\n\u0003\n{ , , , , , , , , , }\n*\n*\n*\n(\n) mod \n(\n) mod \n(\n) mod 10\n1\n\u0003\n \n then there are only three pairs (1,1); (3,7); and (9,9): \n Z10\n1 3 7 9\n*\n{ ,\n,\n,\n}\n\u0003\n \n \n \n \n The numbers { 0, 2, 4, 5, 6, 8 } have no multiplicative \ninverse. \n" }, { "page_number": 434, "text": "Chapter | 24 Data Encryption\n401\n Consider a set: \n Z6\n0 1 2 3 4 5\n\u0003 { , ,\n,\n,\n,\n}\n \n \n \n \n \n Then, \n Z6\n1 5\n* \u0003 { , } \n You will note that Z n * is a subset of Z n with unique mul-\ntiplicative inverse. \n Each member of Z n has a unique additive inverse, \nwhereas each member of Z n* has a unique multiplicative \ninverse. 
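The gcd and inverse calculations above are easy to mechanize. The following short sketch (our illustration, not part of the original text; the function names are ours) implements the Extended Euclidean algorithm and uses it to find multiplicative inverses in Z_m. It reproduces the worked value 12 = (5)(540) + (-16)(168) and the inverse pairs found for Z_10.

```python
def extended_gcd(a, b):
    """Return (g, x, y) such that g = gcd(a, b) and a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    """Multiplicative inverse of a in Z_m, or None when gcd(a, m) != 1."""
    g, x, _ = extended_gcd(a % m, m)
    return x % m if g == 1 else None

print(extended_gcd(540, 168))   # (12, 5, -16): 12 = (5)(540) + (-16)(168)
print(mod_inverse(3, 10))       # 7, since 3 * 7 = 21 is congruent to 1 (mod 10)
print(mod_inverse(9, 10))       # 9, since 9 * 9 = 81 is congruent to 1 (mod 10)
print(mod_inverse(4, 10))       # None: gcd(4, 10) = 2, so no inverse exists
```

The same routine is what one would use later to recover the decryption key of an affine cipher, since decryption requires the multiplicative inverse of the multiplier modulo m.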
\n Congruence Relation Defined \n The a is congruent to b (mod m ) if m divides ( a \t b ), \nthat is, the remainder is zero. \n a\nb mod m\n≡\n \n Examples: 87 \u0006 27 mod 4, 67 \u0006 1 mod 6. \n Next we quote three theorems: \n Theorem 1: Suppose that a \u0006 c mod m and b \u0006 \u0003 d \nmod m, then \n \na\nb\nc\nd (mod m)\na\nb\nc\nd (mod m)\n\u0002\n\u0002\n≡\n≡\n*\n*\n \n Theorem 2: Suppose a*b \u0006 a*c (mod m) \n \nand gcd(a, m)\nthen b\nc (mod m)\n\u0003 1\n≡\n \n Theorem 3: Suppose a*b \u0006 a*c (mod m) \n \nand d\ngcd(a, m)\nthen b\nc (mod m/d)\n\u0003\n≡\n \n Example to illustrate the use of the theorems just \nstated: \n 6\n36\n10\n≡\n (mod \n) \n then \n 3\n2\n3\n12\n10\n\u0007\n\u0007\n≡\n (mod \n) \n since \n gcd( ,\n)\n3 10\n1\n\u0003\n \n therefore, \n 2\n12\n10\n≡\n (mod \n) \n also \n 2\n6 10\n\u0003 gcd( ,\n) \n therefore, \n 1\n6\n5\n≡\n (mod ) \n Given, \n 14\n12\n18\nx\n mod \n≡\n(\n) \n find x . \n Since \n gcd\n \n(\n,\n)\n14 18\n2\n\u0003\n \n therefore, \n 7\n6\n9\nx\n (mod )\n≡\n \n you will observe that, \n gcd\n \n( ,\n)\n7 9\n1\n\u0003\n \n therefore, \n x\n mod \n≡6 7\n9\n1\n(\n)\n\t\n \n and the multiplicative inverse of 7 \t 1 is 4, therefore, \n x\n (mod 9)\n≡( *\n)\n6\n4\n6\n\u0003\n \n Substitution Cipher \n Shift ciphers , also known as additive ciphers , are an \nexample of a monoalphabetic character cipher in which \neach character is mapped to another character, and a \nrepeated character maps to the same character irrespective \nof its position in the string. We give a simple example of \nan additive cipher, where the key is 3, and the algorithm is \n “ add. ” We restrict the mapping to { 0, 1, ………, 7 } (see \n Table 24.1 ) — that is, we use mod 8. This is an example of \nfinite domain and the range for mapping, so the inverse of \nthe function can be determined easily from the ciphertext. \n Observations: \n ● The domain of the function is x \u0003 { 0,1,2,3,4,5,6,7 } . \n ● The range of the function is y \u0003 { 0,1,2,3,4,5,6,7 } . \n ● The function is 1 to 1. \n ● The function is invertible. \n ● The inverse function is x \u0003 (y \t 3) mod 8. \n TABLE 24.1 Table of values for y \u0003 ( x \u0002 3) mod 8, \ngiven values of x \u0003 { 0,1, ... 7 } \n x \n 0 \n 1 \n 2 \n 3 \n 4 \n 5 \n 6 \n 7 \n y \n 3 \n 4 \n 5 \n 6 \n 7 \n 0 \n 1 \n 2 \n" }, { "page_number": 435, "text": "PART | III Encryption Technology\n402\n The affine cipher has two operations, addition and \nmultiplication, with two keys. Once again the arithmetic \nis mod m , where m is a chosen positive integer. \n y\nkx\nb mod m\n\u0003\n\u0002\n(\n)\n \n where k and b are chosen from integers { 0, 1, 2, 3, ………., \n( m \t 1) } , and x is the symbol to be encrypted. \n The decryption is given as: \n x\ny\nb\n mod m\n\u0003\n\t\n\t\n[(\n) *\n]\nk\n1\n \n where \n k\t1 \n is the multiplicative inverse of k in Zn* \n ( \t b) is the additive inverse in Z n \n Consider, \n y\nx\n) mod \n\u0003 ( *\n5\n3\n8\n+\n \n Then, \n x\ny\n mod \n\u0003\n\t\n(\n)\n3 5\n8 \n In this case, the multiplicative inverse of 5 happens to be 5. \n Monoalphabetic substitution ciphers are easily bro-\nken, since the key size is small (see Table 24.2 ). \n \nZ\nZ\n8\n \n \n \n \n \n \n \n \n\u0003\n\u0003\n{ , ,\n,\n,\n,\n,\n,\n}\n{ ,\n,\n}\n*\n0 1 2 3 4 5 6 7\n1 3 5\n8\n \n Transposition Cipher \n A transposition cipher changes the location of the char-\nacter by a given set of rules known as permutation . 
A \ncyclic group defines the permutation with a single key to \nencrypt, and the same key is used to decrypt the ciphered \nmessage. Table 24.3 provides an illustration. \n 4. MODERN SYMMETRIC CIPHERS \n Computers internally represent printable data in binary \nformat as strings of zeros and ones. Therefore any data \nis represented as a large block of zeros and ones. The \nprocessing speed of a computer is used to encrypt the \nblock of zeros and ones. Securing all the data in one \ngo would not be practical, nor would it secure the data; \nhence the scheme to treat data in chunks of blocks, lead-\ning to the concept of block ciphers. \n The most common value of a block is 64, 128, 256, \nor 512 bits. You will observe that these values are powers \nof 2, since computers process data in binary representa-\ntion using modular arithmetic with modulus 2. We need \nan algorithm and a key to encrypt the blocks of binary \ndata such that the ciphered data is confusing and diffus-\ning to the hacker. The algorithm is made public, whereas \nthe key is kept secret from unauthorized users so that \nhackers could establish the robustness of the cipher by \nattempting to break the encrypted message. The logic of \nthe block cipher is as follows: \n ● Each bit of ciphertext should depend on all bits of \nthe key and all bits of the plaintext. \n ● There should be no evidence of statistical relation-\nship between the plaintext and the ciphertext. \n In essence, this is the goal of an encryption algorithm: \nConfuse the message so that there is no apparent rela-\ntionship between the ciphertext and the plaintext. This is \nachieved by the substitution rule (S-boxes) and the key. \n If changing one bit in the plaintext has a minimal \neffect on the encrypted text, it might be possible for \nthe hacker to work backward from the encrypted text \nto the plaintext by changing the bits. Therefore a mini-\nmal change in the plaintext should lead to a maximum \nchange in the ciphertext, resulting in spreading, which is \nknown as diffusion . Permutation or P-boxes implement \nthe diffusion. \n The symmetric cipher consists of an algorithm and \na key. The algorithm is made public, whereas the key \nis kept secret and is known only to the parties that are \nexchanging messages. Of course, this does create a huge \nproblem, since every pair that is going to exchange mes-\nsages will need a secret key, growing indefinitely in \nnumber as the number of pairs increases. We also would \nneed a mechanism by which to manage the secret keys. \nWe will address these issues later on. \n The symmetric algorithm would consist of finite \nrounds of S-boxes and P-boxes. Once the plaintext \nis encrypted using the algorithm and the key, it would \nneed to be decrypted using the same algorithm and key. \n TABLE 24.2 Monoalphabetic Substitution Cipher \n x \n 0 \n 1 \n 2 \n 3 \n 4 \n 5 \n 6 \n 7 \n y \n 3 \n 0 \n 5 \n 2 \n 7 \n 4 \n 1 \n 6 \n TABLE 24.3 Transposition Cipher \n 1 \n 2 \n 3 \n 4 \n 5 \n 3 \n 1 \n 4 \n 5 \n 2 \n" }, { "page_number": 436, "text": "Chapter | 24 Data Encryption\n403\nThe decryption algorithm and the key would need to \nwork backward in some sense to revert the encrypted \nmessage to its original message. 
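Before looking at those components in detail, a deliberately toy sketch (ours; it is not a real block cipher) may make the "same algorithm, same key" requirement concrete. Here a single XOR with a repeating key stands in for the rounds; because XOR is its own inverse, one routine both encrypts and decrypts.

```python
from itertools import cycle

def toy_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric 'cipher': XOR every byte with a repeating key.
    Because XOR is its own inverse, the same routine encrypts and decrypts."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext  = b"ATTACK AT DAWN"
key        = bytes([0x3A, 0x91, 0x5C])      # an arbitrary illustrative key
ciphertext = toy_cipher(plaintext, key)

assert toy_cipher(ciphertext, key) == plaintext   # same key, same algorithm, reversed
```

A real block cipher replaces the single XOR with many rounds of substitutions and permutations, but the structural requirement the assertion checks, an invertible key-driven mapping, is exactly what the following discussion develops.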
\n So you begin to see that the algorithm must con-\nsist of a finite number of combinations of S-boxes \nand P-boxes; encryption is mapping from message \nspace (domain) to another message space (range), \nthat is, mapping should be a closed operation, a “ nec-\nessary ” condition on the encryption algorithm. This \nimplies that message strings get mapped to message \nstrings, and of course these message strings belong \nto a set of messages. We are not concerned with the \nsemantics of the message; we leave this to the mes-\nsage sender and receiver. The S-boxes and P-boxes \nwould define a set of operations on the messages or \nbits that represent the string of messages. Therefore we \nrequire that this set of operations should also be able to \nundo the encryption, that is, mapping must be invertible \nin the mathematical sense. Hence the set of operations \nmust have definite relationships among them, result-\ning in some structural and logical connection. In math-\nematics an example of this is an algebraic structure such \nas group, ring, and field, which we explore in the next \nsection. \n S-Box \n The reader should note that an S-box can have a 3-bit \ninput binary string, and its output may be a 2-bit. The \nS-box may use a key or be keyless. Let S(x) be the linear \nfunction computed by the following function [1]: \n \nS\n \n \n)\n(\n[(\n)mod ]\n[(\n)mod\nx x x\nx\nx\nx\nx\nx\nx\nx\nx\nx\nx\n1\n2\n3\n1\n2\n3\n1\n2\n3\n1\n3\n1\n2\n1\n2\n1\n\u0003\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n+\n⋅\n⋅\n⋅\n2]\n \n Such a function is referred to as an S-box . For a given \n4-bit block of plaintext x 1 x 2 x 3 x 4 and the 3-bit key, k 1 k 2 k 3 , let \n \nE(\n)\nS\nx x x x\nk k k\nx x\nx x\nx x x\nk k k\n1 2 3 4\n1 2 3\n1 2\n3 4\n2 1 2\n1 2 3\n,\n(\n(\n))\n\u0003\n⊕\n⊕\n \n where \u0002 represents exclusive OR \n Given ciphertext, y 1 y 2 y 3 y 4 computed with E and the \nkey, k 1 k 2 k 3 , compute \n D(\n, \n(\nS(\ny y y y\nk k k\ny y\ny y y\nk k k\ny y\n1 2 3 4\n1 2 3\n1 2\n4 3 4\n1 2 3\n3 4\n\u0003\n⊕\n⊕\n))\n \n S-boxes are classified as linear if the number of \noutput bits is the same as the number of input bits, and \nthey’re nonlinear if the number of output bits is different \nfrom the number of input bits. Furthermore, S-boxes can \nbe invertible or noninvertible. \n P-Boxes \n A P-box (permutation box) will permute the bits per \nspecification. There are three different types of P-boxes, \nas shown in Tables 24.4, 24.5, and 24.6 . \n In the compression P-box, inputs 2 and 4 are blocked. \n The expansion P-box maps elements 1, 2, and 3 only. \n Let’s consider a permutation group with the mapping \ndefined, as shown in Table 24.7. 
\n TABLE 24.4 Straight P-box \n 1 \n 2 \n 3 \n 4 \n 5 \n 4 \n 1 \n 5 \n 3 \n 2 \n TABLE 24.5 Compression P-box \n 1 \n 2 \n 3 \n 4 \n 5 \n 1 \n \n 2 \n \n 3 \n TABLE 24.6 Expansion P-box \n 1 \n 3 \n 3 \n 1 \n 2 \n 1 \n 2 \n 3 \n 4 \n 5 \n TABLE 24.7 The permutation group \n \n 1 \n 2 \n 3 \n 4 \n 5 \n 6 \n 7 \n 8 \n a \n 2 \n 6 \n 3 \n 1 \n 4 \n 8 \n 5 \n 7 \n \n 1 \n 1 \n 1 \n 1 \n 0 \n 0 \n 1 \n 0 \n \n 1 \n 0 \n 1 \n 1 \n 1 \n 0 \n 0 \n 1 \n a 2 \n 6 \n 8 \n 3 \n 2 \n 1 \n 7 \n 4 \n 5 \n \n 0 \n 0 \n 1 \n 1 \n 1 \n 1 \n 1 \n 0 \n a 3 \n 8 \n 7 \n 3 \n 6 \n 2 \n 5 \n 1 \n 4 \n \n 0 \n 1 \n 1 \n 0 \n 1 \n 0 \n 1 \n 1 \n a 4 \n 7 \n 5 \n 3 \n 8 \n 6 \n 4 \n 2 \n 1 \n \n 1 \n 0 \n 1 \n 0 \n 0 \n 1 \n 1 \n 1 \n a 5 \n 5 \n 4 \n 3 \n 7 \n 8 \n 1 \n 6 \n 2 \n \n 0 \n 1 \n 1 \n 1 \n 0 \n 1 \n 0 \n 1 \n a 6 \n 4 \n 1 \n 3 \n 5 \n 7 \n 2 \n 8 \n 6 \n \n 1 \n 1 \n 1 \n 0 \n 1 \n 1 \n 0 \n 0 \n a 7 \u0003 e \n 1 \n 2 \n 3 \n 4 \n 5 \n 6 \n 7 \n 8 \n \n 1 \n 1 \n 1 \n 1 \n 0 \n 0 \n 1 \n 0 \n" }, { "page_number": 437, "text": "PART | III Encryption Technology\n404\nThe goal of encryption is to confuse and diffuse the \nhacker to make it almost impossible for the hacker to \nbreak the encrypted message. Therefore, encryption \nmust consist of finite number substitutions and transpo-\nsitions. The algebraic structure classical group facilitates \nthe coding of encryption algorithms. \n Next we give some relevant definitions and examples \nbefore we proceed to introduce the essential concept of a \nGalois field, which is central to formulation of a Rijndael \nalgorithm used in the Advanced Encryption Standard. \n Definition Group \n A definition group ( G , • ) is a finite set G together with \nan operation • satisfying the following conditions [2]: \n ● Closure: ∀ a, b \u0002 G, then (a • b) \u0002 G \n ● Associativity: ∀ a, b, c \u0002 G, then a • (b • c) \u0003 (a • b) • c \n ● Existence of identity: ∃ a unique element e \u0002 G such \nthat ∀ a \u0002 G: a • e \u0003 e • a \n ● ∀ a \u0002 G: ∃ a \t 1 \u0002 G: a \t 1 a \u0003 a \t 1 • a \u0003 e \n Definitions of Finite and Infinite Groups \n(Order of a Group) \n A group G is said to be finite if the number of elements \nin the set G is finite; otherwise the group is infinite. \n Definition Abelian Group \n A group G is abelian if for all a, b \u0002 G , a • b \u0003 b • a \n The reader should note that in a group, the elements \nin the set do not have to be numbers or objects; they can \nbe mappings, functions, or rules. \n Examples of a Group \n The set of integers Z is a group under addition ( \u0002 ), that \nis, ( Z, \u0002 ) is a group with identity e \u0003 0, and inverse of \nan element a is ( \t a ). This is an additive abelian group, \nbut infinite. \n Nonzero elements of Q (rationals), R (reals), and C \n(complex) form a group under multiplication, with the \nidentity element e \u0003 1, and a \t 1 being the multiplicative \ninverse. \n For any n \n 1, the set of integers modulo n forms a \nfinite additive group of n elements. \n G \u0003 \u0004 Z n , \u0002 \u0005 is an abelian group. \n The set of Z n * with multiplication operator , G \u0003 \u0004 \n Z n * , x \u0005 is also an abelian group. \n This group is a cyclic group with elements: \n G\ne, a, \n, \n, \n, \n, \n\u0003 (\n)\na\na\na\na\na\n2\n3\n4\n5\n6\n \n The identity mapping is given by a 7 \u0003 e. The inverse \nelement is a \t 1 . \n Table 24.7 shows a permutation of an 8-bit string \n(11110010). 
\n Product Ciphers \n Modern block ciphers are divided into two categories. \nThe first category of the cipher uses both invertible and \nnoninvertible components. A Feistel cipher belongs \nto the first category, and DES is a good example of \na Feistel cipher. This cipher uses the combination of \nS-boxes and P-boxes with compression and expansion \n(noninvertible). \n The second category of cipher only uses invert-\nible components, and Advanced Encryption Standard \n(AES) is an example of a non-Feistel cipher. AES uses \nS-boxes with an equal number of inputs and outputs and \na straight P-box that is invertible. \n Alternation of substitutions and transpositions of \nappropriate forms when applied to a block of plaintext \ncan have the effect of obscuring statistical relationships \nbetween the plaintext and the ciphertext and between the \nkey and the ciphertext (diffusion and confusion). \n 5. ALGEBRAIC STRUCTURE \n Modern encryption algorithms such as DES, AES, \nRSA, and ElGamal, to name a few, are based on alge-\nbraic structures such as group theory and field theory as \nwell as number theory. We will begin with a set S , with \na finite number of elements and a binary operation (*) \ndefined between any two elements of the set: \n ∗: S\nS\nS\n\u0007\n→\n \n that is, if a and b \u0002 S , then a * b \u0002 S . This is important \nbecause it implies that the set is closed under the binary \noperation. We have seen that the message space is finite, \nand we want to make sure that any algebraic operation \non the message space satisfies the closure property. \nHence, we want to treat the message space as a finite set \nof elements. We remind the reader that messages that \nget encrypted must be finally decrypted by the received \nparty, so the encryption algorithm must run in polynomial \ntime; furthermore, the algorithm must have the property \nthat it be reversible, to recover the original message. \n" }, { "page_number": 438, "text": "Chapter | 24 Data Encryption\n405\n The set Z n * , is a subset of Z n and includes only integers \nin Z n that have a unique multiplicative inverse. \n \nZ\nZ\n13\n13\n0 1 2 3 4 5 6 7 8 9 10 11 12\n1 2 3 4 5 6 7 8 9\n\u0003\n\u0003\n{ , , , , , , , , , ,\n,\n,\n}\n{ , , , , , , , , ,\n*\n10 11 12\n,\n,\n} \n Definition: Subgroup \n A subgroup of a group G is a non empty subset H of G , \nwhich itself is a group under the same operations as that \nof G . We denote that H is a subgroup of G as H \u0003 G , \nand H \u0004 G is a proper subgroup of G if the set H \u0004 \nG [2]: \n Examples of subgroups: \n Under addition, Z \u0003 Q \u0003 R \u0003 C. \n H \u0003 \u0004 Z 10 , \u0002 \u0005 is a proper subgroup of G \u0003 \u0004 \nZ 12 , \u0002 \u0005 \n Definition: Cyclic Group \n A group G is said to be cyclic if there exists an element \n a \u0002 G such that for any b \u0002 G, and i \n 0, b \u0003 a i . \nElement a is called a generator of G. \n The group G \u0003 \u0004 Z 10* , x \u0005 is a cyclic group with \ngenerators g \u0003 3 and g \u0003 7. \n Z10\n1 3 7 9\n*\n{ , , , }\n\u0003\n \n The group G \u0003 \u0004 Z 6 , \u0002 \u0005 is a cyclic group with \ngenerators g \u0003 1 and g \u0003 5. \n Z6\n0 1 2 3 4 5\n\u0003 { , ,\n,\n,\n,\n}\n \n \n \n \n \n Rings \n Let R be a non-empty set with two binary operations \naddition ( \u0002 ) and multiplication (*). Then R is called a \n ring if the following axioms are met: \n ● Under addition, R is an abelian group with zero as \nthe additive identity. 
\n ● Under multiplication, R satisfies the closure, the \nassociativity, and the identity axiom; 1 is the \nmultiplicative identity, and that 1 \u0004 0. \n ● For every a and b that belongs to R, a • b \u0003 b • a . \n ● For every a, b , and c that belongs to R , then a • (b \u0002 c) \n \u0003 a • b \u0002 a • c. \n Examples \n Z, Q, R , and C are all rings under addition and multipli-\ncation. For any n \u0005 0, Z n is a ring under addition and \nmultiplication modulo n with 0 as identity under addition, \n1 under multiplication. \n Definition: Field \n If the nonzero elements of a ring form a group under \nmultiplication, the ring is called a field . \n Examples \n Q, R, and C are all fields under addition and multiplication, \nwith 0 and 1 as identity under addition and multiplication. \n Note: Z under integer addition and multiplication is \nnot a field because any nonzero element does not have a \nmultiplicative inverse in Z. \n Finite Fields GF(2 n ) \n Construction of finite fields and computations in finite \nfields are based on polynomial computations. Finite \nfields play a significant role in cryptography and cryp-\ntographic protocols such as the Diffie and Hellman key \nexchange protocol, ElGamal cryptosystems, and AES. \n For a prime number p , the quotient Z/p (or F p ) is a \nfinite field with p number of elements. For any positive \ninteger q , GF(q) \u0003 F q . We define A to be algebraic struc-\nture such as a ring, group, or field. \n Definition: A polynomial over A is an expression of \nthe form \n \nf(x) \u0003\n\u0003\na x\ni\nn\ni\nn\n0∑\n \n where n is a nonnegative integer, the coefficient a i \u0002 A , \n0 \b i \b n , and x \u0005 A [2]. \n Definition: A polynomial f \u0002 A[x] is said to be irre-\nducible in A[x] if f has a positive degree and f \u0003 gh for \nsome g, h \u0002 A[x] implies that either g or h is a constant \npolynomial [2]. \n The reader should be aware that a given polynomial \ncan be reducible over one structure but irreducible over \nanother. \n Definition: Let f, g, q, and r \u0002 A[x] with g \u0004 0. Then \nwe say that r is a remainder of f divided by g: \n r\nf(mod g)\n≡\n \n The set of remainders of all the polynomials in \nA[x](mod g) denoted as A[x] g. \n Theorem: Let F be a field and f be a nonzero poly-\nnomial in F[x] . Then F[x] f is a ring and is a field if f is \nirreducible over F. \n" }, { "page_number": 439, "text": "PART | III Encryption Technology\n406\n Theorem: Let F be a field of p elements, and f be an \nirreducible polynomial over F . Then the number of ele-\nments in the field F[x] f is p n [2]. \n For every prime p and every positive integer n there \nexists a finite field of p n number of elements .\n For any prime number p , Z p is a finite field under \naddition and multiplication modulo p with 0 and 1 as the \nidentity under addition and multiplication. \n Z p is an additive ring and the nonzero elements of Z p , \ndenoted by Z p * , forms a multiplicative group. \n Galois field, GF ( p n ) is a finite field with number of \nelements p n , where p is a prime number and n is a posi-\ntive integer. \n Example: Integer representation of a finite field \n(Rijndael) element. \n Polynomial f(x) \u0003 x 8 \u0002 x 4 \u0002 x 3 \u0002 x \u0002 1 is irreduc-\nible over F 2. \n The set of all polynomials(mod f ) over F 2 forms a \nfield of 2 8 elements; they are all polynomials over F 2 of \ndegree less than 8. 
So any element in the field F 2 [x] f \n b x\nb x\nb x\nb x\nb x\nb x\nb x\nb\n7\n7\n6\n6\n5\n5\n4\n4\n3\n3\n2\n2\n1\n1\n0\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n \n where b 7 , b 6 , b 5 , b 4 , b 3 , b 2 , b 1 , b 0 \u0002 F 2 thus any element in \nthis field can represent an 8-bit binary number. \n We often use F28 field with 256 elements because \nthere exists an isomorphism between Rijndael and F28 . \n Data inside a computer is organized in bytes (8 bits) \nand is processed using Boolean logic, that is, bits are \nmanipulated using binary operations addition and mul-\ntiplication. These binary operations are implemented \nusing the logical operator XOR, or in the language of \nfinite fields, GF (2). Since the extended ASCII defines \n8 bits per byte, an 8-bit byte has a natural representa-\ntion using a polynomial of degree 8. Polynomial addi-\ntion would be mod 2, and multiplication would be mod \npolynomial degree 8. Of course this polynomial degree \n8 would have to be irreducible. Hence the Galois field \n GF (2 8 ) would be the most natural tool to implement the \nencryption algorithm. Furthermore, this would provide a \nclose algebraic formulation. \n Consider polynomials over GF (2) with p \u0003 2 and \n n \u0003 1. \n 1\n1\n1\n1\n1\n2\n2\n3\n, x, x\n, x\nx\n, x\n, x\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n \n For polynomials with negative coefficients, \t 1 is the \nsame as \u0002 1 in GF (2). Obviously, the number of such \npolynomials is infinite. Algebraic operations of addition \nand multiplication in which the coefficients are added \nand multiplied according to the rules that apply to GF (2) \nare sets of polynomials that form a ring. \n Modular Polynomial Arithmetic Over GF (2) \n The Galois field GF (2 3 ): Construct this field with eight \nelements that can be represented by polynomials of \nthe form \n ax\nbx\nc where a, b, c\nGF(2)\n \n2\n0 1\n\u0002\n\u0002\n\u0003\n∈\n{ , } \n Two choices for a, b, c give 2 \u0007 2 \u0007 2 \u0003 8 polynomials \nof the form \n ax\nbx\nc\nGF [x]\n2\n2\n\u0002\n\u0002\n∈\n \n What is our choice of the irreducible polynomials for \nthis field? \n \n(\n) (\n), (\n),\n(\n) (\n)\nx\nx\nx\n, x\nx\n x\nx\nx \nx\nx\n, x\nx\n3\n2\n3\n2\n3\n2\n3\n3\n2\n1\n1\n1\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n \n These two polynomials have no factors: (x 3 \u0002 x 2 \u0002 1), \n(x 3 \u0002 x \u0002 1) \n So we choose polynomial (x 3 \u0002 x \u0002 1). Hence all \npolynomial arithmetic multiplication and division is \ncarried out with respect to (x 3 \u0002 x \u0002 1). \n The eight polynomials that belong to GF(2 3 ): \n {\n}\n0 1\n1\n1\n1\n2\n2\n2\n2\n, , x, x , \nx, \nx , x\nx , \nx\nx\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n \n You will observe that GF(8) \u0003 { 0,1,2,3,4,5,6,7 } is \nnot a field, since every element (excluding zero) does \nnot have a multiplicative inverse such as { 2, 4, 6) (mod \n8) [2]. \n Using a Generator to Represent the \nElements of GF(2 n ) \n It is particularly convenient to represent the elements of \na Galois field with the help of a generator element. If α \nis a generator element, then every element of GF(2 n ), \nexcept for the 0 element, can be written as some power \nof α . A generator is obtained from the irreducible poly-\nnomial that was used to construct the finite field. If f( α ) \nis the irreducible polynomial used, then α is that element \nthat satisfies the equation f( α ) \u0003 0. 
You do not actually \nsolve this equation for its roots, since an irreducible\n polynomial cannot have actual roots in the field GF(2). \n Consider the case of GF(2 3 ), defined with the irre-\nducible polynomial x 3 \u0002 x \u0002 1. The generator α is that \nelement that satisfies α 3 \u0002 α \u0002 1 \u0003 0. Suppose α is a \n" }, { "page_number": 440, "text": "Chapter | 24 Data Encryption\n407\nroot in GF(2 3 ) of the polynomial p(x) \u0003 1 \u0002 x \u0002 x 3 , that \nis, p( α ) \u0003 0, then \n \nα\nα\nα\nα\nα α\nα\nα\nα\nα α\nα\nα α\nα\nα\nα\n3\n4\n2\n5\n4\n2\n3\n2\n2\n1\n2\n1\n1\n\u0003 \t \t\n\u0003\n\u0002\n\u0003\n\u0002\n\u0003\n\u0002\n\u0003\n\u0003\n\u0002\n\u0003\n\u0002\n\u0003\n (mod )\n.\n(\n)\n(\n)\n(\n\u0002\n\u0002\n\u0003\n\u0003\n\u0002\n\u0002\n\u0003\n\u0002\n\u0003\n\u0002\n\u0003\n\u0002\n\u0003\nα\nα\nα α\nα α\nα\nα\nα\nα\nα\nα\n1\n1\n1\n1\n2\n1\n1\n6\n5\n2\n2\n7\n2\n)\n(\n)\n)\n)\n(\n)\n.\n.\n(\n(\n.\n \n All powers of α generate nonzero elements of GF 8 . \nThe polynomials of GF(2 3 ) represent bit strings, as \nshown in Table 24.8 . \n We now consider all polynomials defined over GF (2), \nmodulo the irreducible polynomial x 3 \u0002 x \u0002 1. When an \nalgebraic operation (polynomial multiplication) results in a \npolynomial whose degree equals or exceeds that of the irre-\nducible polynomial, we will take for our result the remain-\nder modulo the irreducible polynomial. For example, \n \n(\n) * (\n)\n(\n)\n(\n)\n(\n)\n(\nx\nx\nx\n mod x\nx\nx\nx\nx\nx\nx\n mod x\nx\n2\n2\n3\n4\n3\n2\n2\n3\n1\n1\n1\n1\n\u0002\n\u0002\n\u0002\n\u0003\n\u0002\n\u0002\n\u0002\n\u0002\n+\n+\n+\n+\n\u0002\n\u0003\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n\u0003 \t\n\u0002\n\u0003\n\u0002\n1\n1\n1\n4\n3\n3\n2\n2\n)\n(\n)\n(\n)\nx\nx\nx\n mod x\nx\nx\nx\nx\nx\n \n Recall that 1 \u0002 1 \u0003 0 in GF(2). With multiplications \nmodulo (x 3 \u0002 x \u0002 1), we have only the following eight \npolynomials in the set of polynomials over GF(2): \n { , ,\n}\n0 1\n1\n1\n1\n2\n2\n2\n2\n x, x\n, x , x\n, x\nx, x\nx\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n \n We refer to this set as GF(2 3 ), where the power of \n2 is the degree of the modulus polynomial. The eight \nelements of Z 8 are to be integers modulo 8. Similarly, \nGF(2 3 ) maps all the polynomials over GF(2) to the eight \npolynomials shown. But you will note the crucial differ-\nence between GF(2 3 ) and 2 3 : GF(2 3 ) is a field, whereas \nZ 8 is not [2]. \n GF(2 3 ) Is a Finite Field \n We know that GF(2 3 ) is an abelian group because the \noperation of polynomial addition satisfies all the require-\nments of a group operator and because polynomial addi-\ntion is commutative. GF(2 3 ) is also a commutative ring \nbecause polynomial multiplication is a distributive over \npolynomial addition. GF(2 3 ) is a finite field because it is \na finite set and because it contains a unique multiplica-\ntive inverse for every nonzero element. \n GF(2 n ) is a finite field for every n . To find all the pol-\nynomials in GF(2 n ), we need an irreducible polynomial \nof degree n . AES arithmetic is based on GF(2 8 ). It uses \nthe following irreducible polynomial: \n f x\nx\nx\nx\nx\n1\n( ) \u0003\n\u0002\n\u0002\n\u0002\n\u0002\n8\n4\n3\n \n The finite field GF(2 8 ) used by AES obviously con-\ntains 256 distinct polynomials over GF(2). In general, \nGF(p n ) is a finite field for any prime p . The elements of \nGF(p n ) are polynomials over GF(p) (which is the same \nas the set of residues Z p ). 
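All of this field arithmetic reduces to XORs and shifts on bit strings, which the following sketch illustrates (our code, not part of the original text). The same routine handles GF(2^3) built from x^3 + x + 1 and GF(2^8) built from the AES polynomial x^8 + x^4 + x^3 + x + 1; the reader can verify the small GF(2^3) product by hand against the reduction carried out above.

```python
def gf_mul(a, b, mod_poly, degree):
    """Multiply two field elements (bit strings viewed as polynomials over GF(2)),
    reducing by the irreducible polynomial mod_poly of the given degree."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & (1 << degree):      # degree reached that of mod_poly: reduce
            a ^= mod_poly
    return result

# GF(2^3) built from x^3 + x + 1 (0b1011):
# (x^2 + x + 1)(x^2 + 1) reduces to x^2 + x, as in the worked reduction above.
print(bin(gf_mul(0b111, 0b101, 0b1011, 3)))    # 0b110

# GF(2^8) built from the AES polynomial x^8 + x^4 + x^3 + x + 1 (0x11B):
print(hex(gf_mul(0x57, 0x83, 0x11B, 8)))       # 0xc1
```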
\n Next we show how the multiplicative inverse of a \npolynomial is calculated using the Extended Euclidean \nalgorithm: \n \nMultiplicative inverse of x\nx\n \nin F [x]/ x\nx\n is x\n(\n)\n(\n)\n(\n2\n2\n4\n1\n1\n\u0002\n\u0002\n\u0002\n\u0002\n2 \u0002 x)\n \n (\n) (\n)\n)\nx\nx x\nx\n mod(x\nx\n2\n2\n4\n1\n1\n1\n\u0002\n\u0002\n\u0002\n\u0003\n\u0002\n\u0002\n \n \nMultiplicative inverse of x\nx\n \nin F [x]/ x\nx\nx\nx\n(\n)\n(\n)\n6\n2\n8\n4\n3\n1\n1\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n \nis x\nx\nx\nx\n(\n)\n6\n5\n2\n1\n\u0002\n\u0002\n\u0002\n\u0002\n \n \n(\n) (\n)\n)[ , ]\nx\nx\n x\nx\nx\nx\n \nmod (x\nx\nx\nx\n4\n6\n6\n5\n2\n8\n3\n1\n1\n1\n1 2 3\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n\u0003\n\u0002\n\u0002\n\u0002\n\u0002\n \n 6. THE INTERNAL FUNCTIONS \nOF RIJNDAEL IN AES IMPLEMENTATION \n Rijndael is a block cipher. The messages are broken \ninto blocks of a predetermined length, and each block \nis encrypted independently of the others. Rijndael \n TABLE 24.8 The polynomials of GF(2 3 ) \n Polynomial \n Bit String \n 0 \n 000 \n 1 \n 001 \n x \n 010 \n x \u0002 1 \n 011 \n x 2 \n 100 \n x 2 \u0002 1 \n 101 \n x 2 \u0002 x \n 110 \n x 2 \u0002 x \u0002 1 \n 111 \n" }, { "page_number": 441, "text": "PART | III Encryption Technology\n408\noperates on blocks that are 128-bits in length. There \nare actually three variants of the Rijndael cipher, each \nof which uses a different key length. The permissible \nkey lengths are 128, 192, and 256 bits. The details of \nRijndael may be found in Bennett and Gilles (1984), but \nwe give an overview here [2, 3]. \n Mathematical Preliminaries \n Within a block, the fundamental unit operated on is a \nbyte, that is, 8 bits. Bytes can be interpreted in two dif-\nferent ways. A byte is given in terms of its bits as b 7 b 6 \nb 5 b 4 b 3 b 2 b 1 b 0 . We may think of each bit as an element \nin GF(2), the finite field of two elements (mod 2). First, \nwe may think of a byte as a vector, b 7 b 6 b 5 b 4 b 3 b 2 b 1 b 0 \nin GF(2 8 ). Second, we may think of a byte as an element \nof GF(2 8 ), in the following way: Consider the polyno-\nmial ring GF(2)[X]. We may mod out by any polynomial \nto produce a factor ring. If this polynomial is irreducible \nand of degree n, the resulting factor ring is isomorphic to \nGF(2 n ). In Rijndael, we mod out by the irreducible poly-\nnomial X8 \u0002 X4 \u0002 X3 \u0002 X \u0002 1 and so obtain a repre-\nsentation for GF(2 8 ). The Rijndael algorithm deals with \nfive units of data in the encryption scheme: \n ● Bit: A binary digit with a value of 0 or 1 \n ● Byte: A group of 8 bits \n ● Word: A group of 32 bits \n ● Block: A block in AES is defined to be 128, 192 or \n256 bits \n ● State: The data block is known as a state , and it is \nmade up of a 4 \u0007 4 matrix of 16 bytes (128 bits) \n State \n For our discussion purposes, we will consider a data \nblock of 128 bits with a ky size of 128 bits. The state is \n128 bits long. We think of the state as divided into 16 \nbytes, a ij where 0 \b i, j \b 3. We think of these 16 bytes \nas an array, or matrix, with 4 rows and 4 columns, such \nthat a 00 is the first byte, b 0 and so on (see Figure 24.1 ). \n AES uses several rounds (10, 12, or 14) of trans-\nformations, beginning with a 128-bit block. A round is \nmade up of four parts: S-box, permutation, mixing, and \nsubkey addition. We discuss each part here [2, 3]. \n The S-Box (SubByte) \n S-boxes, or substitution boxes, are common in block \nciphers. 
These are 1-to-1 and onto functions, and there-\nfore an inverse exists. Furthermore, these maps are \nnonlinear to make them immune to linear and differential \ncryptoanalysis. The S-box is the same in every round, and \nit acts independently on each byte. Each byte belongs to \nGF(2 8 ) domain with 256 elements. For a given byte we \ncompute the inverse of that byte in the GF(2 8 ) field. This \nsends a byte x to x \t 1 if x is nonzero and sends it to 0 if it \nis zero. This defines a nonlinear transformation, as shown \nin Table 24.9 . \n Next we apply an affine (over GF(2)) transformation. \nThink of the byte x as a vector in GF(2 8 ). Consider the \ninvertible matrix A, as shown in Figure 24.2 . \n The structure of matrix A is relatively simple, succes-\nsively shifting the prior row by 1. If we define the vector \nv \u0002 GF(2 8 ) to be (1, 1, 0, 0, 0, 1, 1, 0), then the second \nhalf of the S-box sends byte x to byte y through the aff-\nine transformation defined as: \n y\nA \n x\nb\n1\n\u0003\n\t\n⋅\n⊕ \n Since the matrix A has an inverse, it is possible to \nrecover x using the following procedure known as the \nInvSubByte: \n x\n[\n\u0003\n\t\n\t\nA\ny\nb\n1\n1\n(\n)]\n⊕\n \n We will demonstrate the action of an S-box by choos-\ning an uppercase letter S , for which the hexadecimal rep-\nresentation is 53 16 and binary representation is shown in \n Tables 24.10 and 24.11 . \n The letter S has a polynomial representation: \n (\n)\nx\nx\nx\n6\n4\n1\n\u0002\n\u0002\n\u0002\n \n The multiplicative inverse of (x 6 \u0002 x 4 \u0002 x \u0002 1) is \n(x 7 \u0002 x 6 \u0002 x 3 \u0002 x), which is derived using the Extended \nEuclidean algorithm . \n Next we multiply the multiplicative inverse x \t 1 with \nan invertible matrix A (see Figure 24.3 ) and add a col-\numn vector (b) and get the resulting column vector y \n(see Table 24.12 ). This corresponds to SubByte transfor-\nmation and it is nonlinear [2]. \n y\nA\nx\nb\n\u0003\n\u0002\n\t\n*\n1\n \n The column vector y represents a character ED 16 in \nhexadecimal representation. \n⎤\n⎥\n⎥\n⎥\n⎥\n⎥\n⎦\n⎡\n⎢\n⎢\n⎢\n⎢\n⎢\n⎣\na00\u0003 b0a01\u0003 b4a02\u0003 b8a03 \u0003 b12\na10\u0003 b1a11\u0003 b5a12\u0003 b9a13 \u0003 b13\na20\u0003 b2a21\u0003 b6a22\u0003 b10a23\u0003 b14\na30\u0003 b3a31\u0003 b7a32\u0003 b11a33\u0003 b15\n FIGURE 24.1 State. \n" }, { "page_number": 442, "text": "Chapter | 24 Data Encryption\n409\n The reader should note that this transformation using \nthe GF(2 8 ) field is a pretty tedious computation, so \ninstead we use an AES S-box lookup table (a 17 \u0007 17 \nmatrix expressed in hexadecimal) to replace the char-\nacter with a replacement character . This corresponds \nto the SubByte transformation, and corresponding to \nthe SubByte table there is an InvSubByte table that is \nthe inverse of the SubByte table. The InvSubByte can \nbe found in the references or is readily available on the \nInternet. \n We \nwill \nwork \nwith \nthe \nfollowing \nstring: \nQUANTUMCRYPTOGOD, which is 16 bytes long, to \nillustrate AES (see Table 24.13 ). The state represents \nour string as a 4 \u0007 4 matrix in the given arrangement \nusing a hexadecimal representation of each byte (see \n Figure 24.4 ). \n We apply SubByte transformation (see Figure 24.5 ) \nusing the lookup table, which replaces each byte as \ndefined in Table 24.13 \n The next two rounds of ShiftRows and Mixing in the \nencryption lead to a diffusion process. The ShiftRow is a \npermutation. 
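The two steps just described, inversion in GF(2^8) followed by the affine transformation, can be combined into a few lines of code. The sketch below is ours (a slow but transparent rendering; practical implementations simply use the lookup table of Table 24.9), and the affine step is written in the equivalent rotate-and-XOR form with the constant 63 in hexadecimal. It reproduces the 53 to ED example worked above.

```python
def gf_mul(a, b):
    """Byte multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        high = a & 0x80
        a = (a << 1) & 0xFF
        if high:
            a ^= 0x1B
        b >>= 1
    return result

def gf_inverse(x):
    """Multiplicative inverse in GF(2^8); 0 is mapped to 0 by convention."""
    return next((b for b in range(1, 256) if gf_mul(x, b) == 1), 0) if x else 0

def rotl8(v, n):
    return ((v << n) | (v >> (8 - n))) & 0xFF

def sub_byte(x):
    """SubByte: invert in GF(2^8), then apply the affine transformation."""
    s = gf_inverse(x)
    return s ^ rotl8(s, 1) ^ rotl8(s, 2) ^ rotl8(s, 3) ^ rotl8(s, 4) ^ 0x63

print(hex(sub_byte(0x53)))   # 0xed: the letter 'S' example worked above
print(hex(sub_byte(0x00)))   # 0x63: row 0, column 0 of the SubByte table
```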
\n ShiftRows \n In the first step, we take the state and apply the follow-\ning logic. The first row is kept as is. The second row is \nshifted left by one byte. The third row is shifted left by \ntwo bytes, and the last row is shifted left by three bytes. \nThe resulting state is shown in Figure 24.6 . \n InvShiftRows in decryption shift bytes toward the \nright, similar to ShiftRows. \n Mixing \n The second step, the MixColumns transformation, mixes \nthe columns. We interpret the bytes of each column as \n TABLE 24.9 SubByte Transformation \n \n 0 \n 1 \n 2 \n 3 \n 4 \n 5 \n 6 \n 7 \n 8 \n 9 \n A \n B \n C \n D \n E \n F \n 0 \n 63 \n 7C \n 77 \n 7B \n F2 \n 6B \n 6F \n C5 \n 30 \n 01 \n 67 \n 2B \n FE \n D7 \n AB \n 76 \n 1 \n CA \n 82 \n C9 \n 7D \n FA \n 59 \n 47 \n F0 \n AD \n D4 \n A2 \n AF \n 9C \n A4 \n 72 \n C0 \n 2 \n B7 \n FD \n 93 \n 26 \n 36 \n 3F \n F7 \n CC \n 34 \n A5 \n E5 \n F1 \n 71 \n D8 \n 31 \n 15 \n 3 \n 04 \n C7 \n 23 \n C3 \n 18 \n 96 \n 05 \n 9A \n 07 \n 12 \n 80 \n E2 \n EB \n 27 \n B2 \n 75 \n 4 \n 09 \n 83 \n 2C \n 1A \n 1B \n 6E \n 5A \n A0 \n 52 \n 3B \n D6 \n B3 \n 29 \n E3 \n 2F \n 84 \n 5 \n 53 \n D1 \n 00 \n ED \n 20 \n FC \n B1 \n 5B \n 6A \n CB \n BE \n 39 \n 4A \n 4C \n 58 \n CF \n 6 \n D0 \n EF \n AA \n FB \n 43 \n 4D \n 33 \n 85 \n 45 \n F9 \n 02 \n 7F \n 50 \n 3C \n 9F \n A8 \n 7 \n 51 \n A3 \n 40 \n 8F \n 92 \n 9D \n 38 \n F5 \n BC \n B6 \n DA \n 21 \n 10 \n FF \n F3 \n D2 \n 8 \n CD \n 0C \n 13 \n EC \n 5F \n 97 \n 44 \n 17 \n C4 \n A7 \n 7E \n 3D \n 64 \n 5D \n 19 \n 73 \n 9 \n 60 \n 81 \n 4F \n DC \n 22 \n 2A \n 90 \n 88 \n 46 \n EE \n B8 \n 14 \n DE \n 5E \n 0B \n DB \n A \n E0 \n 32 \n 3A \n 0A \n 49 \n 06 \n 24 \n 5C \n C2 \n D3 \n AC \n 62 \n 91 \n 95 \n E4 \n 79 \n B \n E7 \n CB \n 37 \n 6D \n 8D \n D5 \n 4E \n A9 \n 6C \n 56 \n F4 \n EA \n 65 \n 7A \n AE \n 08 \n C \n BA \n 78 \n 25 \n 2E \n 1C \n A6 \n B4 \n C6 \n E8 \n DD \n 74 \n 1F \n 4B \n BD \n 8B \n 8A \n D \n 70 \n 3E \n B5 \n 66 \n 48 \n 03 \n F6 \n 0E \n 61 \n 35 \n 57 \n B9 \n 86 \n C1 \n 1D \n 9E \n E \n E1 \n F8 \n 98 \n 11 \n 69 \n D9 \n 8E \n 94 \n 9B \n 1E \n 87 \n E9 \n CE \n 55 \n 28 \n DF \n F \n 8C \n A1 \n 89 \n 0D \n BF \n E6 \n 42 \n 68 \n 41 \n 99 \n 2D \n 0F \n B0 \n 54 \n BB \n 16 \nA \u0003\n10001111\n11000111\n11110001\n11110001\n01111100\n00111110\n00011111\nb \u0003\n1\n1\n0\n0\n0\n1\n1\n0\n FIGURE 24.2 The invertible matrix. \n TABLE 24.10 Hexadecimal and binary representation \n a 7 \n a 6 \n a 5 \n a 4 \n a 3 \n a 2 \n a 1 \n a 0 \n 0 \n 1 \n 0 \n 1 \n 0 \n 0 \n 1 \n 1 \n" }, { "page_number": 443, "text": "PART | III Encryption Technology\n410\n Subkey Addition \n From the original key, we produce a succession of 128-\nbit keys by means of a key schedule. Let’s recap that a \nword is a group of 32 bits. A 128-bit key is labeled as \nshown in Table 24.14 . \n word W 0 \u0003 (k 0 k 1 k 2 k 3 ) word W 1 \u0003 (k 4 k 5 k 6 k 7 ) \n word W 2 \u0003 (k 8 k 9 k 10 k 11 ) word W 3 \u0003 (k 12 k 13 k 14 k 15 ) \n which is then written as a 4 \u0007 4 matrix (see Figure 24.8 ), \nwhere W 0 is the first column, W 1 is the second column, \nW 2 is the third column, and W 3 is the fourth column. \n AES uses a process called key expansion that creates \n(10 \u0002 1) round keys from the given cipher key. We start \nwith four words and end with 44 words — four word per \nround key. 
Thus \n (\n)\n,........................,\n,\nW\nW\n W\n0\n42\n43 \n The algorithm to generate 10 round keys is as follows: \n The initial cipher key consists of words: W 0 W 1 W 2 W 3 \n The other 10 round keys are made using the following \nlogic: \n If (j mod 4) \u0004 0 \n W\nW\nW\nj\nj\nj\n\u0003\n\t\n\t\n1\n4\n⊕\n \n else \n W\nZ\nW\nj\nj\n\u0003\n\t\n⊕\n4 \n where Z \u0003 SubWord(RotWord( W j \t 1 ) \u0002 RCon j/4. \n RotWord (rotate word) takes a word as an array of \nfour bytes and shifts each byte to the left with wrapping. \nSubWord (substitute word) uses the SubByte lookup \ntable to substitute the byte in the word [2, 3]. RCon \n(round constants) is a four-byte value in which the right-\nmost three bytes are set to zero [2, 3]. \n Let’s work through an example, as shown in \n Figure 24.9 . \nthe coefficients of a polynomial in GF(2 8 )[x ]/(x 4 \u0002 1). \nThen we multiply each column by the polynomial ‘ 03 ’ \nx 3 \u0002 ‘ 02 ’ x 2 \u0002 ‘ 01’x \u0002 ‘ 02 ’ . Multiplication of the \nbytes is done in GF(2 8 ) with mod ( x 4 \u0002 1). \n The mixing transformation remaps the four bytes to \na new four bytes by changing the contents of the indi-\nvidual bytes (see Figure 24.7 ). The MixColumns trans-\nformation is applied to each column of the state, hence \neach column is multiplied by a constant matrix to obtain \na new state, S i\n0\n\u0006 . \n \nS\nS\nS\nS\nS\nS\nS\nS\nS\nS\nS\nS\ni\ni\ni\ni\ni\n0\n0\n1\n2\n3\n00\n00\n10\n20\n30\n20\n30\n2\n3\n2\n3\n53\n\u0006\n\u0006\n\u0003\n\u0003\n\u0003\n\u0002\n\u0002\n\u0002\n\u0002\n⊕\n⊕\n⊕\n⊕\n⊕\n⊕\n⊕\n⊕⊕\n⊕\n1\n01010011\n00011011\n01001000\n2\n00000010\n1\n00\nB\nD\n\u0003\n\u0003\n\u0003\n(\n)\n(\n)\n(\n)\n(\n)\n(\n)\n\u0002\n\u0002\nS\n\u0003\n\u0003\n\u0002\n\u0002\n\u0003\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n( )\n(\n)\n( )(\n)\n(\n)\n(\nx\nx x\nx\nx\nx\nx\nx\nx mod x\nx\nx\n\u0002 11010001\n1\n7\n6\n4\n8\n7\n5\n8\n4\n+\n3\n7\n5\n4\n3\n1\n1\n10111001\n\u0002\n\u0002\n\u0003\n\u0002\n\u0002\n\u0002\n\u0002\n\u0003\nx\nx\nx\nx\nx\n)\n(\n)\n(\n)\n \n10001111\n11000111\n11100011\n11110001\n11111000\n01111100\n00111110\n00011111\n(mod 2) \u0003\n0\n1\n0\n1\n0\n0\n1\n1\n1 \n1 \n0 \n0 \n0 \n1 \n1 \n0\n1 \n0 \n1 \n1 \n0 \n1 \n1 \n1\n\u0002\n FIGURE 24.3 Multiplying the multiplicative inverse with an invertible \nmatrix. 
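As a cross-check of the MixColumns computation in this running example, the following sketch (ours) multiplies the first column of the shifted state, (D1, FC, 53, 1B), by the MixColumns constant matrix, with all byte arithmetic done in GF(2^8), and also verifies that the InvMixColumns matrix restores the original column.

```python
def gf_mul(a, b):
    """Byte multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1."""
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        high = a & 0x80
        a = (a << 1) & 0xFF
        if high:
            a ^= 0x1B
        b >>= 1
    return result

MIX     = [[2, 3, 1, 1], [1, 2, 3, 1], [1, 1, 2, 3], [3, 1, 1, 2]]
INV_MIX = [[14, 11, 13, 9], [9, 14, 11, 13], [13, 9, 14, 11], [11, 13, 9, 14]]

def mix_column(matrix, col):
    """Multiply one 4-byte column by a constant matrix, all arithmetic in GF(2^8)."""
    return [gf_mul(row[0], col[0]) ^ gf_mul(row[1], col[1]) ^
            gf_mul(row[2], col[2]) ^ gf_mul(row[3], col[3]) for row in matrix]

column = [0xD1, 0xFC, 0x53, 0x1B]      # first column of the state after ShiftRows
mixed = mix_column(MIX, column)
print([hex(b) for b in mixed])         # first byte is 0xee, the S'00 value of this example
assert mix_column(INV_MIX, mixed) == column   # InvMixColumns undoes MixColumns
```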
\n TABLE 24.11 Hexadecimal and binary representation \n a 7 \n a 6 \n a 5 \n a 4 \n a 3 \n a 2 \n a 1 \n a 0 \n 1 \n 1 \n 0 \n 0 \n 1 \n 0 \n 1 \n 0 \n TABLE 24.12 Vector y \n y 7 \n y 6 \n y 5 \n y 4 \n y 3 \n y 2 \n y 1 \n y 0 \n 1 \n 1 \n 1 \n 0 \n 1 \n 1 \n 0 \n 1 \n3\n00000011\n00000011 11111100\n1\n10\n7\n6\n5\n\u0002 S\n\u0003\n\u0003\n\u0003\n\u0002\n\u0002\n\u0002\n(\n)\n(\n)(\n)\n{ (\n)(\n(FC)\n x\nx\nx\nx \u0002\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n\u0002\n\u0003\n\u0003\n\u0006\nx\nx\nx\n \nmod x\nx\nx\nx\n4\n3\n2\n8\n4\n3\n00\n1\n00011111\n10111001\n)}\n(\n)\n(\n)\n(\n)\nS\n⊕(\n)\n(\n)\n(\n)\n00011111\n01001000\n1110 1110\n0\n⊕\n\u0003\n\u0003\n \nxEE\n" }, { "page_number": 444, "text": "Chapter | 24 Data Encryption\n411\n Key: 2B 7E 15 16 28 AE D2 A6 AB F7 15 88 09 CF \n4F 3C \n W 0 \u0003 2B 7E 15 16 W 1 \u0003 28 AE D2 A6 W 2 \u0003 AB \nF7 15 88 W 3 \u0003 09 CF 4F 3C \n Compute W 4: \n \nW\nZ\nW\nRotWord(W )\nRotWord(\n CF F C)\nCF F C \nSubWo\n4\n0\n3\n09\n4\n3\n4\n3\n09\n\u0003\n\u0003\n\u0003\n⊕\n(\n)\nrd(CF F C \n)\nA \n EB \nZ\nA \n EB \n \n \n \n4\n3\n09\n8\n84\n01\n8\n84\n01\n01 00 00 00\n\u0003\n\u0003\n(\n)\n(\n)\n(\n⊕\n)16\n8\n84\n01\n\u0003 B \n EB \n \n Hence, \n W\nB \n EB \nB E \n \nA FA FE \n4\n8\n84\n01\n2\n7 15 16\n0\n17\n\u0003\n\u0003\n(\n)\n(\n)\n⊕\n \n Putting It Together \n Put the input into the state: XOR is the state with the \n0-th round key. We start with this because any actions \nbefore the first (or after the last) use of the key are \npointless, since they are publicly known and so can be \nundone by an attacker. Then apply 10 of the preceding \nrounds, skipping the column mixing on the last round \n(but proceeding to a final key XOR in that round). The \nresulting state is the ciphertext. We use the follow-\ning labels to describe the encryption procedure (see \n Table 24.15 ): \n Key 1 : K1 : W 0 W 1 W 2 W 3 \n Key 2 : K2 : W 4 W 5 W 6 W 7 \n Key 11: K11 : W 40 W 41 W 42 W 43 \n The Initial State (IS) is the plaintext \n The Output State (OS1) \n SubByte (SB), ShiftRows (SR), MixColumns (MC) \n Round \n Pre-round PlainText \u0002 K1 \u0003 \u0003 \u0003 \u0003 ➜ OS1 \n Next we cycle through the decryption procedure: \n \nInvSubByte (ISB), InvShiftRows (ISR), \nInvMixColumns (IMC)\n \n Round \n AES is a non-Feistel cipher, hence each set of trans-\nformations such as SubByte, ShiftRows, and Mix-\nColumns are invertible so that the decryption must \nconsist of steps to recover the plaintext. You will observe \nthat the round keys are used in the reverse order (see \n Table 24.16 ). \nState \u0003\n⎤\n⎥\n⎥\n⎥\n⎥\n⎥\n⎦\n⎡\n⎢\n⎢\n⎢\n⎢\n⎢\n⎣\n5154524F \n55555947 \n414D504F \n4E435444\n FIGURE 24.4 The state represents a string as a 4 \u0007 4 matrix in the \ngiven arrangement using hexadecimal representation of each byte. \nState \u0003\n⎤\n⎥\n⎥\n⎥\n⎥\n⎥\n⎦\n⎡\n⎢\n⎢\n⎢\n⎢\n⎢\n⎣\nD1200084 \nFCFCCBA0 \n83E35384\n2F1A201B\n FIGURE 24.5 Applying the SubByte transformation. \n TABLE 24.13 Illustrating AES \n b 0 \n b 1 \n b 2 \n b 3 \n b 4 \n b 5 \n b 6 \n b 7 \n b 8 \n b 9 \n b 10 \n b 11 \n b 12 \n b 13 \n b 14 \n b 15 \n Q \n U \n A \n N \n T \n U \n M \n C \n R \n Y \n P \n T \n O \n G \n O \n D \n 51 \n 55 \n 41 \n 4E \n 54 \n 55 \n 4D \n 43 \n 52 \n 59 \n 50 \n 54 \n 4F \n 47 \n 4F \n 44 \nState \u0003\n⎤\n⎥\n⎥\n⎥\n⎥\n⎥\n⎦\n⎡\n⎢\n⎢\n⎢\n⎢\n⎢\n⎣\nD1200084 \nFCCBA0FC \n538483E3\n1B2F1A20\n FIGURE 24.6 ShiftRows. 
\n\u0003\n⎤\n⎥\n⎥\n⎥\n⎥\n⎥\n⎦\n⎡\n⎢\n⎢\n⎢\n⎢\n⎢\n⎣\nS\u0002\n0i\nS\u0002\n1i\nS\u0002\n2i\nS\u0002\n3i\n⎤\n⎥\n⎥\n⎥\n⎥\n⎥\n⎦\n⎡\n⎢\n⎢\n⎢\n⎢\n⎢\n⎣\n2311 \n1231 \n1123 \n3112\n⎤\n⎥\n⎥\n⎥\n⎥\n⎥\n⎦\n⎡\n⎢\n⎢\n⎢\n⎢\n⎢\n⎣\nS0i\nS1i\nS2i\nS3i\n FIGURE 24.7 Mixing transformation. \n" }, { "page_number": 445, "text": "PART | III Encryption Technology\n412\n 7. USE OF MODERN BLOCK CIPHERS \n DES and AES are designed to encrypt and decrypt data \nblocks of fixed size. Most practical examples have data \nblocks of fewer than 64 bits or greater than 128 bits, and \nto address this issue currently, five different modes of \noperation have been set up. These five modes of opera-\ntion are known as Electronic Code Book (ECB), Cipher- \nBlock Chaining (CBC), Output Feedback (OFB), Cipher \nFeedback (CFB), and Counter (CTR) modes. \n The Electronic Code Book (ECB) \n In this mode, the message is split into blocks, and the \nblocks are sequentially encrypted. This mode is vulner-\nable to attack using the frequency analysis, the same sort \nused in simple substitution. Identical blocks would get \nencrypted to the same blocks, thus exposing the key [1]. \n Cipher-Block Chaining (CBC) \n A logical operation is performed on the first block with \nwhat is known as an initial vector using the secret key so \nas to randomize the first block. The output of this step \nis logically combined with the second block and the key \nto generate encrypted text, which is then used with the \nthird block and so on [1]. \n 8. PUBLIC-KEY CRYPTOGRAPHY \n In this section we cover what is known as asymmetric \nencryption, which uses a pair of keys rather than one \nkey, as used in symmetric encryption. This single-key \nencryption between the two parties requires that each \nparty has its secret key, so that as the number of par-\nties increases so does the number of keys. In addition, \nthe distribution of the secret key becomes unmanageable \nas the number of keys increases. Of course, a longtime \nuse of the same secret key between any pair would make \nit more vulnerable to cryptoanalysis attack. So, to deal \nwith these inextricable problems, a key distribution facil-\nity was born. Symmetric encryption is considered more \npractical in dealing with vast amounts of data consist-\ning of strings of zeros and ones. Yet another scheme was \ninvented to secure data while in transition, using tools \nfrom a branch of mathematics known as number theory. \nTo begin, let’s review the necessary number theory con-\ncepts [2, 3]. \n Review: Number Theory \n Asymmetric-key encryption uses prime numbers, which \nare a subset of positive integers. Positive integers are all \nodd and even numbers, including the number 1, such \nthat some of the numbers are composite, that is, products \nof numbers therein. This critical fact plays a significant \nrole in generating keys. Next we will go through some \nstatements of fact for the sake of completeness. \n Coprimes \n Two positive integers are said to be coprime or relatively \nprime if gcd(a,b) \u0003 1. \n Cardinality of Primes \n The number of primes is infinite. Given a number n, how \nmany prime numbers are smaller than or equal to n ? The \nanswer to this question was discovered by Gauss and \nLagrange as: \n {n/ln(n)\n(n)\n{n/ln(n)\n}\n\u0004\n\u0004\n\t\n∏\n1 08366\n.\n \n where \u0004 (n) is the number of primes smaller than or \nequal to n. \n Check whether a given number 107 is a prime \nnumber. 
We take the square root of 107 to the nearest \n TABLE 24.14 Subkey Addition \n k 0 \n k 1 \n k 2 \n k 3 \n k 4 \n k 5 \n k 6 \n k 7 \n k 8 \n k 9 \n k 10 \n k 11 \n k 12 \n k 13 \n k 14 \n k 15 \n⎤\n⎥\n⎥\n⎥\n⎥\n⎥\n⎦\n⎡\n⎢\n⎢\n⎢\n⎢\n⎢\n⎣\nk00k01k02k03\nk10k11k12k13 \nk20k21k22k23 \nk30k31k32k33\n FIGURE 24.8 A 4 \u0007 4 matrix. \n⎤\n⎥\n⎥\n⎥\n⎥\n⎥\n⎦\n⎡\n⎢\n⎢\n⎢\n⎢\n⎢\n⎣\n2B28AB09 \n7EAEF 7CF \n15D2154F \n16A6883C\n FIGURE 24.9 RotWord and SubWord. \n" }, { "page_number": 446, "text": "Chapter | 24 Data Encryption\n413\nwhole number, which is 10. Then count the number of \nprimes less than 10, which are 2, 3, 5, 7. Next we check \nwhether any one of these numbers will divide 107. In \nour example none of these numbers can divide 107, so \n107 is a prime number. \n Euler’s Phi-Function φ (n): Euler’s totient function \nfinds the number of integers that are both smaller than n \nand coprime to n . \n ● φ (1) \u0003 0 \n ● φ (p) \u0003 p \t 1 if p is a prime \n ● φ (m x n) \u0003 φ (n) x φ (m) if m and n are coprime \n ● φ (p e ) \u0003 p e \t p e \t 1 if p is a prime \n Examples: \n \nφ\nφ\nφ\nφ\nφ\nφ\nφ\n( )\n( )\n( )\n( )\n( )\n( )\n( )\n2\n1\n3\n2\n4\n2\n5\n4\n6\n2\n7\n6\n8\n4\n\u0003\n\u0003\n\u0003\n\u0003\n\u0003\n\u0003\n\u0003\n; \n; \n; \n; \n; \n; \n \n Factoring \n The fundamental theorem of arithmetic states that every \npositive integer can be written as a product of prime \nnumbers. There are a number of algorithms to factor \nlarge composite numbers. \n TABLE 24.16 Round \n \n C \u0002 \n K11 ➔ \n \n \n \n OS10 \n 1 \n OS10 ➔ \n ISR ➔ \n ISB \u0002 \n K10 ➔ \n IMC ➔ \n OS9 \n 2 \n OS9 ➔ \n ISR ➔ \n ISB \u0002 \n K9 ➔ \n IMC ➔ \n OS8 \n 10 \n S1 ➔ \n \n ISR ➔ \n ISB \u0002 \n K1 ➔ \n PlainText \n Fermat’s Little Theorem \n In the 1970s, the creators of digital signatures and public-\nkey cryptography realized that the framework for their \nresearch was already laid out in the body of work by Fermat \nand Euler. Generation of a key in public-key cryptography \ninvolves exponentiation modulo of a given modulus. \n \na\nb (mod m) then a\nb (mod m) for \nany positive integer e\na\ne\ne\n≡\n≡\ne\nd\ne\nd\ne\ne\ne\nd e\nde\na . a\nmod m\n(ab)\na . b (mod m)\n(a )\na\n(mod m)\n\u0002\n≡\n≡\n≡\n(\n)\n \n Examples: \n \n2\n33\n2\n25 16 2\n32\n8\n33\n6\n13\n2\n3\n13\n8\n4 1\n43\n2\n2\n(\n)\n.\n.\n(\n)\n(\n)\nmod \n.\n25\nmod\nmod \n4\n≡\n≡\n≡\n≡\n≡\n\u0002 \u0002\n≡\n≡\n≡\n≡\n≡\n≡\n≡\n≡\n≡\n≡\n≡\n≡\n≡\n≡\n9\n2\n4\n16\n33\n3\n2\n3\n93\n9\n2\n9\n81\n33\n3\n2\n3\n9\n4\n2\n4\n8\n2\n8\n16\n2\n16\n32\n2\n mod \n(\n113\n3\n9\n13\n32\n)\n(\n)\n≡\n mod \n \n TABLE 24.15 The Encryption Procedure \n 1. \n OS1 ➔ \n SB ➔ \n SR ➔ \n MC \u0002 \n K2 ➔ \n OS2 \n 2. \n OS2 ➔ \n SB ➔ \n SR ➔ \n MC \u0002 \n K3 ➔ \n OS3 \n 3. \n OS3 ➔ \n SB ➔ \n SR ➔ \n MC \u0002 \n K4 ➔ \n OS4 \n 4. \n OS4 ➔ \n SB ➔ \n SR ➔ \n MC \u0002 \n K5 ➔ \n OS5 \n 5. \n OS5 ➔ \n SB ➔ \n SR ➔ \n MC \u0002 \n K6 ➔ \n OS6 \n 6. \n OS6 ➔ \n SB ➔ \n SR ➔ \n MC \u0002 \n K7 ➔ \n OS7 \n 7. \n OS7 ➔ \n SB ➔ \n SR ➔ \n MC \u0002 \n K8 ➔ \n OS8 \n 8. \n OS8 ➔ \n SB ➔ \n SR ➔ \n MC \u0002 \n K9 ➔ \n OS9 \n 9. \n OS9 ➔ \n SB ➔ \n SR ➔ \n MC \u0002 \n K10 ➔ \n OS10 \n 10. \n OS10 ➔ \n SB ➔ \n SR ➔ \n \u0002 \n K11 ➔ \n Cipher Text (C) \n" }, { "page_number": 447, "text": "PART | III Encryption Technology\n414\n Theorem. Let p be a prime number. \n 1. If a is coprime to p, then a p \t 1 \u0006 1 (mod p) \n 2. 
a^p ≡ a (mod p) for any integer a

 Examples:

 43^58 ≡ 1 (mod 59);  86^97 ≡ 86 (mod 97)

 Theorem: Let p and q be distinct primes.

 1. If a is coprime to pq, then a^{k(p−1)(q−1)} ≡ 1 (mod pq), where k is any integer.

 2. For any integer a, a^{k(p−1)(q−1)+1} ≡ a (mod pq), where k is any positive integer.

 Example:

 62^60 = 62^{(7−1)(11−1)} ≡ 1 (mod 77)

 Discrete Logarithm

 Here we deal with the multiplicative group G = <Z_n*, ×>. The order of a finite group is the number of elements in the group G. Let's take an example of a group,

 G = <Z_21*, ×>;  φ(21) = φ(3) × φ(7) = 2 × 6 = 12

 that is, 12 elements in the group, each of which is coprime to 21:

 {1, 2, 4, 5, 8, 10, 11, 13, 16, 17, 19, 20}

 The order of an element, ord(a), is the smallest integer i such that

 a^i ≡ e (mod n)

 where e = 1 is the identity.

 Find the order of all elements in G = <Z_10*, ×>:

 φ(10) = φ(2) × φ(5) = 1 × 4 = 4;  Z_10* = {1, 3, 7, 9}

 Lagrange's theorem states that the order of an element divides the order of the group. In our example the divisors of 4 are {1, 2, 4}, so we need to check only these powers to find the order of each element:

 1^1 ≡ 1 (mod 10) → ord(1) = 1
 3^1 ≡ 3 (mod 10); 3^2 ≡ 9 (mod 10); 3^4 ≡ 1 (mod 10) → ord(3) = 4
 7^1 ≡ 7 (mod 10); 7^2 ≡ 9 (mod 10); 7^4 ≡ 1 (mod 10) → ord(7) = 4
 9^1 ≡ 9 (mod 10); 9^2 ≡ 1 (mod 10) → ord(9) = 2

 If a ∈ G = <Z_n*, ×>, then a^φ(n) ≡ 1 (mod n). Euler's theorem shows that the relationship a^i ≡ 1 (mod n) holds whenever the order i of an element equals φ(n).

 Primitive Roots

 In the multiplicative group G = <Z_n*, ×>, when the order of an element is the same as φ(n), that element is called a primitive root of the group. This property of primitive roots is used in the ElGamal cryptosystem.

 G = <Z_8*, ×> has no primitive roots. The order of this group is φ(8) = 4, and

 Z_8* = {1, 3, 5, 7}

 The candidate orders 1, 2, and 4 each divide the order of the group:

 1^1 ≡ 1 (mod 8) → ord(1) = 1
 3^2 ≡ 1 (mod 8) → ord(3) = 2
 5^2 ≡ 1 (mod 8) → ord(5) = 2
 7^2 ≡ 1 (mod 8) → ord(7) = 2

 In this example none of the elements has an order of 4, hence this group has no primitive roots. The data are rearranged in Table 24.17 [2, 3].

 TABLE 24.17 Z_8* Has No Primitive Roots (entries are a^i mod 8)

 a \ i    i = 1   i = 2   i = 3   i = 4   i = 5   i = 6   i = 7
 a = 1      1       1       1       1       1       1       1
 a = 3      3       1       3       1       3       1       3
 a = 5      5       1       5       1       5       1       5
 a = 7      7       1       7       1       7       1       7

 Let's take another example: G = <Z_7*, ×>. Then φ(7) = 6, hence the order of the group is 6, with members {1, 2, 3, 4, 5, 6}, all of which are coprime to 7. The order of each element is the smallest integer i such that a^i ≡ 1 (mod 7), and since the order of an element divides the order of the group, the only possible orders are the divisors of 6, namely {1, 2, 3, 6}:

 A). 1^1 ≡ 1 (mod 7); 1^2 ≡ 1 (mod 7); 1^3 ≡ 1 (mod 7); 1^6 ≡ 1 (mod 7) → ord(1) = 1

 B). 2^1 ≡ 2 (mod 7); 2^2 ≡ 4 (mod 7); 2^3 ≡ 1 (mod 7) → ord(2) = 3

 C). 3^1 ≡ 3 (mod 7); 3^2 ≡ 2 (mod 7); 3^3 ≡ 6 (mod 7); 3^4 ≡ 4 (mod 7); 3^5 ≡ 5 (mod 7); 3^6 ≡ 1 (mod 7) → ord(3) = 6

 D). 4^1 ≡ 4 (mod 7); 4^2 ≡ 2 (mod 7); 4^3 ≡ 1 (mod 7) → ord(4) = 3

 E). 5^1 ≡ 5 (mod 7); 5^2 ≡ 4 (mod 7); 5^3 ≡ 6 (mod 7); 5^4 ≡ 2 (mod 7); 5^5 ≡ 3 (mod 7); 5^6 ≡ 1 (mod 7) → ord(5) = 6

 F). 6^1 ≡ 6 (mod 7); 6^2 ≡ 1 (mod 7); 6^3 ≡ 6 (mod 7); 6^4 ≡ 1 (mod 7); 6^5 ≡ 6 (mod 7); 6^6 ≡ 1 (mod 7) → ord(6) = 2

 Since the order of the elements {3, 5} is 6, which is the order of the group, the primitive roots of the group are {3, 5}. Here the smallest such integer is i = 6 = φ(7).

 Solve for x in the following:

 5^x ≡ 6 (mod 7)

 We can rewrite this as:

 x = log_5 6 (mod 7)

 Using the third term in E), we see that x must be equal to 3.

 The group G = <Z_n*, ×> has primitive roots only if n is 2, 4, p^t, or 2p^t, where p is an odd prime and t is an integer.

 If the group G = <Z_n*, ×> has any primitive roots, the number of primitive roots is φ(φ(n)).

 If the group G = <Z_n*, ×> has primitive roots, then it is cyclic, and each of its primitive roots is a generator of the whole group.

 The group G = <Z_10*, ×> has two primitive roots because φ(10) = 4 and φ(φ(10)) = 2. These two primitive roots are {3, 7}:

 3^1 mod 10 = 3;  3^2 mod 10 = 9;  3^3 mod 10 = 7;  3^4 mod 10 = 1
 7^1 mod 10 = 7;  7^2 mod 10 = 9;  7^3 mod 10 = 3;  7^4 mod 10 = 1

 The group G = <Z_p*, ×> is always cyclic and has the following properties:

 ● Its elements are the integers from 1 to (p − 1) inclusive.
 ● It always has primitive roots.
 ● It is cyclic, and its elements can be generated as g^x, where g is a primitive root and x is an integer from 1 to φ(p) = p − 1.
 ● The primitive roots can be used as the base of a discrete logarithm.

 Now that we have reviewed the necessary mathematical preliminaries (a short code sketch of these quantities follows), we will focus on the subject matter of asymmetric cryptography, which uses a public and a private key to encrypt and decrypt the plaintext.
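The sketch below collects the quantities used in this review — Euler's phi, the order of an element, and primitive roots — as small brute-force Python functions. They are written for clarity on toy moduli only, and the function names are mine rather than anything drawn from the chapter's references:

```python
from math import gcd

def phi(n):
    """Euler's totient: how many integers in [1, n) are coprime to n (brute force)."""
    return sum(1 for a in range(1, n) if gcd(a, n) == 1)

def order(a, n):
    """Smallest i >= 1 with a**i = 1 (mod n); a must be coprime to n."""
    assert gcd(a, n) == 1
    x, i = a % n, 1
    while x != 1:
        x, i = (x * a) % n, i + 1
    return i

def primitive_roots(n):
    """Elements of Z_n* whose order equals phi(n)."""
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    return [a for a in units if order(a, n) == phi(n)]

print(phi(21))                              # 12 = phi(3) * phi(7)
print(order(3, 10), order(9, 10))           # 4 2, as computed above
print(primitive_roots(8))                   # []  (Z_8* has no primitive roots)
print(primitive_roots(10))                  # [3, 7]
print(primitive_roots(7))                   # [3, 5]
print(pow(43, 58, 59))                      # 1, illustrating Fermat's little theorem
```

Running the same functions on other small moduli is a quick way to check the statements above, for instance that primitive roots exist only when n is 2, 4, p^t, or 2p^t.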
If Alice \nwants to send plaintext to Bob, she uses Bob’s public \nkey, which is advertised by Bob, to encrypt the plaintext \nand then send it to Bob via an unsecured channel. Bob \ndecrypts the data using his private key, which is known \nto him only. Of course this would appear to be an ideal \nreplacement for the asymmetric-key cipher, but it is \nmuch slower, since it has to encrypt each byte; hence it \nis useful in message authentication and communicating \nthe secret key (see sidebar, “ The RSA Cryptosystem ” ). \n The RSA Cryptosystem \n Key generation algorithm: \n 1. Select two prime numbers p and q such that p \u0004 q. \n 2. Construct m \u0003 p x q. \n 3. Set up a commutative ring R \u0003 \u0004 Z φ , \u0002 , x \u0005 which is \npublic since m is made public. \n 4. Set up a multiplicative group G \u0003 \u0004 Z r (m) * , x \u0005 which is \nused to generate public and private keys. This group is \nhidden from the public since φ (m) is kept hidden. \n φ(m)\np\nq\n\u0003\n\t\n\t\n(\n)(\n)\n1\n1 \n 5. Choose an integer e such that, 1 \u0004 e \u0004 φ (m) and e is \ncoprime to φ (m). \n 6. Compute the secret exponent d such that, 1 \u0004 d \u0004 φ (m) \nand that ed \u0006 1 (mod φ (m)). \n 7. The public key is “ e ” and the private key is “ d. ” The \nvalue of p, q, and φ (m) are kept private. \n Encryption: \n 1. Alice obtains Bob’s public key (m, e). \n 2. The plaintext x is treated as a number to lie in the range \n1 \u0004 x \u0004 m \t 1. \n 3. The ciphertext corresponding to x is y \u0003 x e (mod m). \n 4. Send the ciphertext y to Bob. \n" }, { "page_number": 449, "text": "PART | III Encryption Technology\n416\n 9. CRYPTANALYSIS OF RSA \n RSA algorithm relies that p and q , the distinct prime \nnumbers, are kept secret, even though m \u0003 p x q is made \npublic. So if n is an extremely large number, the prob-\nlem reduces to find the factors that make up the number \n n , which is known as the factorization attack . \n Factorization Attack \n If the middleman, Eve, can factor n correctly, then she \ncorrectly guesses p, q, and φ (m). Reminding ourselves \nthat the public key e is public, then Eve has to compute \nthe multiplicative inverse of e: \n d\ne\n (mod m)\n≡\n\t1\n \n So if the modulus m is chosen to be 1024 bits long, \nit would take considerable time to break the RSA sys-\ntem unless an efficient factorization algorithm could be \nfound [2, 3] (see sidebars “ Chosen-Ciphertext Attack ” \nand “ The e th Roots Problem ” ). \n Decryption: \n 1. Bob uses his private key (m, d). \n 2. Compute the x \u0003 y d (mod m). \n Why RSA works: \n \ny\nx mod m\nx\n mod m\nd . e\nkm\n1\nk p\nq\ny\nx\nx\nd\ne\nd\ned\nd\ned\n≡\n≡\n≡\n(\n)\n(\n)\n(\n)(\n)\n\u0003\n\u0002\n\u0003\n\u0002\n\t\n\t\n1\n1\n1\n1\u0002\n\t\n\t\nk p\nq\nx mod m\n(\n)(\n)\n(\n)\n1\n1 ≡\n \n Example: \n 1. Choose p \u0003 7 and q \u0003 11, then m \u0003 p x q \u0003 7 \u0007 11 \u0003 77 \n R\nZ\n, ,x\nand \n \n x \n\u0003\u0004\n\u0002\n\u0005\n\u0003\n\u0003\n\u0003\n77\n77\n7\n11\n6\n10\n60\nφ\nφ\nφ\n(\n)\n( )\n(\n)\n \n 2. The corresponding multiplicative group G \u0003 \u0004 Z 60 * , x \u0005 . \n 3. Choose e \u0003 13 and d \u0003 37 from Z 60 * such that e x d \u0006 \n1 (mod 60). \n \nPlaintext\ny\nx mod m\n mod \nx\ny mod m\n \ne\nd\n\u0003\n\u0003\n\u0003\n\u0003\n\u0003\n\u0003\n5\n5\n77\n26\n26\n13\n37\n(\n)\n(\n)\n(\n)\n(m\nmod 77\n5\n) \u0003\n \n Note: 384-bit primes or larger are deemed sufficient to use RSA securely. 
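To see the sidebar's toy example end to end, here is a minimal Python version with the same small parameters (p = 7, q = 11, e = 13, plaintext 5). It is purely illustrative — real RSA uses primes of the size just noted together with a padding scheme — and the modular inverse is computed with pow(e, -1, m), which requires Python 3.8 or later:

```python
# Toy RSA, following the sidebar's example; never use parameters this small in practice.
p, q = 7, 11
m = p * q                          # 77, the public modulus
phi_m = (p - 1) * (q - 1)          # 60, kept private

e = 13                             # public exponent, coprime to phi_m
d = pow(e, -1, phi_m)              # private exponent with e*d = 1 (mod phi_m)
print(d)                           # 37, matching the example

plaintext = 5
ciphertext = pow(plaintext, e, m)  # y = x^e mod m
recovered = pow(ciphertext, d, m)  # x = y^d mod m
print(ciphertext, recovered)       # 26 5
```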
The prime number e \u0003 2 16 \u0002 1 is often used in modern RSA \nimplementations [2, 3]. \n Chosen-Ciphertext Attack \n Z n is a set of all positive integers from 0 to (n \t 1). Z n * is a \nset all integers such that gcd(n,a) \u0003 1, where a \u0002 Z n * \n Z \nZ\nn\n*\nn\n⊂\n \n Φ (n) calculates the number of elements in Z n * that are \nsmaller than n and coprime to n. \n Φ\nΦ\nΦ\n(\n)\n( )\n( )\n21\n3\n7\n2\n6\n12\n\u0003\n\u0003\n\u0003\n x \n x \n \n Therefore, the number of integers in \u0002 Z 21 * is 12. \n Z\n \n \n \n \n \n \n \n \n \n \n \n21\n* \u0003 { ,\n,\n,\n,\n,\n,\n,\n,\n,\n,\n,\n}\n1 2 4 5 8 10 11 13 16 17 19 20 \n Each of which is coprime to 21. \n Z\n \n \n \n \n \n14\n1 3 5 9 11 13\n*\n{ ,\n,\n,\n,\n,\n}\n\u0003\n \n Each of which is coprime to 14. \n Φ\nΦ\nΦ\n(\n)\n( )\n( )\n*\n14\n2\n7\n1\n6\n6\n14\n\u0003\n\u0003\n\u0003\n x \n x \n number of integers in Z\n \n Example: Choose p \u0003 3 and q \u0003 7, then m \u0003 3 \u0007 7 \u0003 21. \n Encryption and decryption take place in the ring, R \u0003 \u0004 \nZ 21 , \u0002 , x \u0005 \n Φ\nΦ\nΦ\n(\n)\n(\n( )\n21\n2\n6\n12\n\u0003\n\u0003\n) \n \n Key-Generation Group , G \u0003 \u0004 Z 12 * , x \u0005 \n \nΦ\nΦ\nΦ\n(\n)\n( )\n( )\n{ ,\n,\n,\n}\n*\n*\n12\n4\n3\n2\n2\n4\n1 5 7 11\n12\n12\n\u0003\n\u0003\n\u0003\n\u0003\n \n x \n numbers in Z\nZ\n \n \n \n \n Alice encrypts the message P using the public key e of \nBob and sends the encrypted message C to Bob. \n C\nP mod m\ne\n\u0003\n \n Eve, the middleman, intercepts the message and manip-\nulates the message before forwarding to Bob. \n 1. Eve chooses a random integer X \u0002 Z m * (since m is public). \n 2. Eve calculates Y \u0003 C x X e (mod m). \n 3. Bob receives Y from Eve, and he decrypts Y using his \nprivate key d. \n 4. Z \u0003 Y d (mod m). \n 5. Eve can easily discover the plaintext P as follows: \n \nZ\nY (mod m)\n[C x X ] (mod m)\n[C x X\n] (mod m)\n[C x X]\nd\ne d\nd\ned\nd\n\u0003\n\u0003\n\u0003\n\u0003\n mod m\n(\n) \n Hence Z \u0003 [P x X ] (mod m). \n Using the Extended Euclidean algorithm, Eve can then \ncompute the multiplicative inverse of X, and thus obtain P: \n P\nZ x X\n (mod m) [2, 3]\n\u0003\n\t1\n \n" }, { "page_number": 450, "text": "Chapter | 24 Data Encryption\n417\n Discrete Logarithm Problem \n Discrete logarithms are perhaps simplest to understand \nin the group Z p* , where p is the prime number. Let g be \nthe generator of Z p* , then the discrete logarithm prob-\nlem reduces to computing a, given (g, p, g a mod p) for a \nrandomly chosen a \u0004 (p \t 1). \n If we want to find the k th power of one of the numbers \nin this group, we can do so by finding its k th power as \nan integer and then finding the remainder after division \nby p . This process is called discrete exponentiation . For \nexample, consider Z 23* . To compute 3 4 in this group, we \nfirst compute 3 4 \u0003 81, then we divide 81 by 23, obtaining \na remainder of 12. Thus 3 4 \u0003 12 in the group Z 23* \n A discrete logarithm is just the inverse operation. \nFor example, take the equation 3 k \u0006 12 (mod 23) for \n k . As shown above k \u0003 4 is a solution, but it is not the \nonly solution. Since 3 22 \u0006 1 (mod 23), it also follows \nthat if n is an integer, then 3 4 \u0002 22 n \u0006 12 \u0007 1 n \u0006 12 (mod \n23). Hence the equation has infinitely many solutions \nof the form 4 \u0002 22 n . 
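For a modulus this small, the logarithm can be recovered by simply trying successive exponents; the infeasibility of doing so for cryptographically sized p is exactly what the discrete logarithm problem captures. A brief, purely illustrative sketch:

```python
def discrete_log_brute_force(base, target, p):
    """Smallest k >= 1 with base**k = target (mod p), found by exhaustive search."""
    value = 1
    for k in range(1, p):
        value = (value * base) % p
        if value == target % p:
            return k
    return None

print(discrete_log_brute_force(3, 12, 23))   # 4
print(pow(3, 11, 23))                        # 1: the powers of 3 repeat with period 11
print(pow(3, 4 + 11 * 2, 23))                # 12 again, so 26 is another solution
```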
In fact the set of solutions is larger still: the smallest positive integer m satisfying 3^m ≡ 1 (mod 23) is m = 11 — that is, 11 is the order of 3 in Z_23* — so every exponent k ≡ 4 (mod 11) is a solution, and these are all of the solutions [2, 3].

 10. DIFFIE-HELLMAN ALGORITHM

 The purpose of this protocol is to allow two parties to set up a shared secret key over an insecure communication channel so that they may exchange messages. Alice and Bob agree on a finite cyclic group G and a generating element g in G. We will write the group G multiplicatively [2, 3].

 1. Alice and Bob agree on a prime number p and a base g. Alice picks a secret exponent a to generate a public key A.
 2. A = g^a mod p
 3. (g, p, A) are made public, and a is kept private.
 4. Bob, using the same p and g, picks a secret exponent b to generate a public key B.
 5. B = g^b mod p
 6. (g, p, B) are made public, and b is kept private.
 7. Bob, using A, generates the shared secret key S.
 8. S = A^b mod p
 9. Alice, using B, generates the shared secret key S.
 10. S = B^a mod p

 Thus the shared secret key S is established between Bob and Alice.

 Example:

 Alice: p = 53, g = 18, a = 10
 A = 18^10 mod 53 = 38

 Bob: p = 53, g = 18, b = 11
 B = 18^11 mod 53 = 48

 S = 48^10 mod 53 = 38^11 mod 53 = 4

 Diffie-Hellman Problem

 The middleman Eve would know (g, p, A, B), since these are public. So for Eve to discover the secret key S, she would have to tackle the following two congruences:

 g^a ≡ A (mod p)  and  g^b ≡ B (mod p)

 If Eve had some way of solving the discrete logarithm problem (DLP) in a time-efficient manner, she could discover the shared secret key S; however, no probabilistic polynomial-time algorithm is known that solves this problem. The task of computing

 g^{ab} mod p, given only g^a mod p and g^b mod p

 is called the Diffie-Hellman problem. If the DLP can be efficiently solved, then so can the Diffie-Hellman problem.

 11. ELLIPTIC CURVE CRYPTOSYSTEMS

 For simplicity, we shall restrict our attention to elliptic curves over Z_p, where p is a prime greater than 3. We mention, however, that elliptic curves can more generally be defined over any finite field [4]. An elliptic curve E over Z_p is defined by an equation of the form

 y^2 = x^3 + ax + b    (24.1)

 where a, b ∈ Z_p and 4a^3 + 27b^2 ≠ 0 (mod p), together with a special point O called the point at infinity. The set E(Z_p) consists of all points (x, y), x ∈ Z_p, y ∈ Z_p, which satisfy the defining Equation 24.1, together with O.

 The e-th Roots Problem

 Given:
 A composite number n, the product of two prime numbers p and q
 An integer e ≥ 3 with gcd(e, Φ(n)) = 1
 An integer c ∈ Z_n*

 Find an integer m such that m^e ≡ c (mod n) [2, 3].

 An Example

 Let p = 23 and consider the elliptic curve E: y^2 = x^3 + x + 1, defined over Z_23. (In the notation of Equation 24.1, we have a = 1 and b = 1.) Note that 4a^3 + 27b^2 = 4 + 27 = 31 ≡ 8 (mod 23), which is nonzero, so E is indeed an elliptic curve. The points in E(Z_23) are O together with the points listed in Table 24.18.

 Addition Formula

 There is a rule for adding two points on an elliptic curve E(Z_p) to give a third elliptic curve point.
Together with \nthis addition operation, the set of points E (Z p ) forms a \ngroup with O serving as its identity. It is this group that \nis used in the construction of elliptic curve cryptosys-\ntems. The addition rule, which can be explained geo-\nmetrically, is presented here as a sequence of algebraic \nformula [4]. \n 1. P \u0002 O \u0003 O \u0002 P \u0003 P for all P \u0002 E (Z p ) \n 2. If P \u0003 ( x , y ) \u0002 E (Z p ) then ( x , y ) \u0002 ( x , \t y ) \u0003 O (The \npoint ( x , \t y ) is denoted by \t P , and is called the \n negative of P ; observe that \t P is indeed a point on \nthe curve.) \n 3. Let P \u0003 ( x 1, y 1) \u0002 E (Z p ) and Q \u0003 ( x 2, y 2) \u0002 E (Z p ) , \nwhere P \u0004 \t Q. Then P \u0002 Q \u0003 ( x 3, y 3), \nwhere: \n \nx\nx\nx\ny\nx\nx\ny\ny\ny\nx\nx\n3\n2\n1\n2\n3\n1\n3\n1\n2\n1\n2\n1\n\u0003\n\t\n\t\n\u0003\n\t\n\t\n\u0003\n\t\n\t\n(\n)\n( (\n)\n)\nλ\nλ\nλ\n mod p\n mod p\n mod p if P\nQ or \nmod p if P\nQ\n\u0004\nλ \u0003\n\u0002\n\u0003\n3\n2\n1\n2\n1\nx\na\ny\n \n We will digress to modular division: 4/3 mod 11. \nWe are looking for a number, say t , such that 3 * t mod \n11 \u0003 4. We need to multiply the left and right sides by 3 \t 1 \n \n3\nt mod \nt mod \n\t\n\t\n\t\n\u0003\n\u0003\n1\n1\n1\n3\n11\n3\n4\n11\n3\n4\n*\n*\n*\n*\n \n Next we use the Extended Euclidean algorithm and \nget (inverse) 3 \t 1 is 4 (3 * 4 \u0003 12 mod 11 \u0003 1). \n 4\n4\n11\n5\n*\n mod \n\u0003\n \n Hence, \n 4 3\n11\n5\n/ mod \n\u0003\n \n Example of Elliptic Curve Addition \n Consider the elliptic curve defined in the previous \nexample. (Also see sidebar, “ EC Diffie-Hellman \nAlgorithm. ” ) [4]. \n 1. Let P \u0003 (3, 10) and Q \u0003 (9 , 7). Then P \u0002 Q \u0003 ( x 3 , \ny 3) is computed as follows: \n \nλ\n∈\n≡\n\u0003\n\t\n\t\n\u0003 \t\n\u0003 \t \u0003\n\u0003\n\t \t \u0003 \t \t \u0003 \t\n7\n10\n9\n3\n3\n6\n1\n2\n11\n11\n3\n9\n6\n3\n9\n6\n17\n23\n23\n3\n2\nZ\nx\n(\n)\nmod\n, and\ny3\n11 3\n6\n10\n11 9\n10\n89\n20\n23\n\u0003\n\t \t\n\t\n\u0003\n\t\n\u0003\n(\n(\n))\n( )\n(\n)\n≡\nmod\n. \n Hence P \u0002 Q \u0003 (17, 20). \n 2. Let P \u0003 (3,10). Then 2 P \u0003 P \u0002 P \u0003 ( x 3 , y 3) is com-\nputed as follows: \n \nλ\n∈\n≡\n\u0003\n\u0002\n\u0003\n\u0003\n\u0003\n\u0003\n\t\n\u0003\n\u0003\n\t\n3 3\n1\n20\n5\n20\n1\n4\n6\n6\n6\n30\n7\n23\n6 3\n2\n23\n3\n2\n3\n(\n)\n(\n)\n(\nZ\nx\ny\n mod \n, and\n \n77\n10\n24\n10\n11\n12\n23\n)\n(\n)\n \n mod \n.\n\t\n\u0003 \t\n\t\n\u0003 \t\n∈\n \n Hence 2 P \u0003 (7, 12). \n Consider the following elliptic curve with Z p\n* \n y\nx\nax\nb\n2\n3\n mod p\n mod p\n\u0003\n\u0002\n\u0002\n(\n)\n \n Set p \u0003 11 and a \u0003 1 and b \u0003 2. Take a point P \n(4, 2) and multiply it by 3; the resulting point will be on \nthe curve with (4, 9). \n TABLE 24.18 Elliptic Curve Cryptosystems \n (0, 1) \n (6, 4) \n (12, 19) \n (0, 22) \n (6, 19) \n (13, 7) \n (1, 7) \n (7, 11) \n (13, 16) \n (1, 16) \n (7, 12) \n (17, 3) \n (3, 10) \n (9, 7) \n (17, 20) \n (3, 13) \n (9, 16) \n (18, 3) \n (4, 0) \n (11, 3) \n (18, 20) \n (5, 4) \n (11, 20) \n (19, 5) \n (5, 19) \n (12, 4) \n (19, 18) \n" }, { "page_number": 452, "text": "Chapter | 24 Data Encryption\n419\n EC Security \n Suppose Eve the middleman captures (p, a, b, Q A , Q B ). \nCan Eve figure out the shared secret key without know-\ning either (d B , d A )? Eve could use \n Q\nP\nd\nA\nA\n\u0003\n*\n \n to compute the unknown d A , which is known as the \nElliptic Curve Discrete Logarithm problem [4]. \n 12. 
MESSAGE INTEGRITY AND \nAUTHENTICATION \n We live in the Internet age, and a fair number of com-\nmercial transactions take place on the Internet. It has \noften been reported that transactions on the Internet \nbetween two parties have been hijacked by a third party, \nhence data integrity and authentication are critical if \necommerce is to survive and grow. \n This section deals with message integrity and authen-\ntication. So far we have discussed and shown how to \nkeep a message confidential. But on many occasions we \nneed to make sure that the content of a message has not \nbeen changed by a third party, and we need some way \nof ascertaining whether the message has been tampered \nwith. Since the message is transmitted electronically as a \nstring of ones and zeros, we need a mechanism to make \nsure that the count of the number of ones and zeros does \nnot become altered, and furthermore, that zeros and ones \nare not changed in their position within the string. \n We create a pair and label it as message and its corre-\nsponding message digest. A given block of messages is run \nthrough an algorithm hash function, which has its input \nthe message and the output is the compressed message, \nthe message digest, which is a fixed-size block but smaller \nin length. The receiver, say, Bob, can verify the integrity \nof the message by running the message through the hash \nfunction (the same hash function as used by Alice) and \ncomparing the message digest with the message digest that \nwas sent along with the message by, say, Alice. If the two \nmessage digests agree on their block size, the integrity of \nthe message was maintained in the transmission. \n Cryptographic Hash Functions \n A cryptographic hash function must satisfy three criteria: \n ● Preimage resistance \n ● Second preimage resistance (weak collision \nresistance) \n ● Strong collision resistance \n Preimage Resistance \n Given a message m and the hash function hash, if the \nhash value h \u0003 hash(m) is given, it should be hard to \nfind any m such that h \u0003 hash(m). \n Second Preimage Resistance (Weak Collision \nResistance) \n Given input m 1 , it should be hard to find another mes-\nsage m 2 such that hash(m 1 ) \u0003 hash(m 2 ) and that m 1 \u0004 m 2 \n Strong Collision Resistance \n It ought to be hard to find two messages m 1 \u0004 m 2 such \nthat hash(m 1 ) \u0003 hash(m 2 ). A hash function takes a fixed \nsize input n -bit string and produces a fixed size output \n m -bit string such that m less than n in length. The origi-\nnal hash function was defined by Merkle-Damgard, \nwhich is an iterated hash function. This hash func-\ntion first breaks up the original message into fixed-size \nblocks of size n . Next an initial vector H 0 (digest) is set \nup and combined with the message block M 1 to produce \nmessage digest H 1 , which is then combined with M 2 to \nproduce message digest H 1, and so on until the last mes-\nsage block produces the final message digest. \n H\nf H\n, M\ni\ni\ni\ni\n\u0003\n\n\t\n(\n)\n1\n1 \n EC Diffie-Hellman Algorithm \n 1. Alice has her elliptic curve, and she chooses a secret \nrandom number d and computes a number on the \ncurve Q A \u0003 d A *P [4]. \n Alice’s public key: (p, a, b, Q A ) \n Alice’s private key: d A \n 2. Bob has his elliptic curve, and he chooses a secret \nrandom number d and computes a number on the \ncurve Q B \u0003 d B * P: \n Bob’s public key: (p, a, b, Q B ) \n Bob’s private key: d B \n 3. 
Alice computes the shared secret key as \n S\nd\nQ\nA\nB\n\u0003\n*\n \n 4. Similarly, Bob computes the shared secret key as \n S\nd\nQ\nB\nA\n\u0003\n*\n \n 5. The shared secret key computed by Alice and Bob \nare the same for: \n S\nd\nQ\nd\nd\nP\nB\nA\nB\nA\n\u0003\n\u0003\n*\n*\n*\n \n" }, { "page_number": 453, "text": "PART | III Encryption Technology\n420\n Message digest MD2, MD4, and MD5 were designed \nby Ron Rivest. MD5 as input block size of 512 bits and \nproduces a message digest of 128 bits [1]. \n Secure Hash Algorithm (SHA) was developed by the \nNational Institute of Standards and Technology (NIST). \nSHA-1, SHA-224, SHA-256, SHA-384, and SHA-512 \nare examples of the secure hash algorithm. SHA-512 \nproduces a message digest of 512 bits. \n Message Authentication \n Alice sends a message to Bob. How can Bob be sure that \nthe message originated from Alice and not someone else \npretending to be Alice? If you are engaged in a transac-\ntion on the Internet using a Web client, you need to make \nsure that you are not engaged with a dummy Web site or \nelse you could submit your sensitive information to an \nunauthorized party. Alice in this case needs to demon-\nstrate that she is communicating and not an imposter. \n Alice creates a message digest using the message \n(M), then using the shared secret key (known to Bob \nonly) she combines the key with a message digest and \ncreates a message authentication code (MAC). She \nthen sends the MAC and the message (M) to Bob over \nan insecure channel. Bob uses the message (M) to cre-\nate a hash value and then recreates a MAC using the \nsecret shared key and the hash value. Next he compares \nthe received MAC from Alice with his MAC. If the two \nmatch, Bob is assured that Alice was indeed the origina-\ntor of the message [1]. \n Digital Signature \n Message authentication is implemented using the send-\ner’s private key and verified by the receiver using the \nsender’s public key. Hence if Alice uses her private \nkey, Bob can verify that the message was sent by Alice, \nsince Bob would have to use Alice’s public key to verify. \nAlice’s public key cannot verify the signature signed by \nEve’s private key [1]. \n Message Integrity Uses a Hash Function in \nSigning the Message \n Nonrepudiation is implemented using a third party that \ncan be trusted by parties that want to exchange messages \nwith one another. For example, Alice creates a signature \nfrom her message and sends the message, her identity, \nBob’s identity, and the signature to the third party, who \nthen verifies the message using Alice’s public key that \nthe message came from Alice. Next the third party saves \na copy of the message with the sender’s and the recipi-\nent’s identity and the time stamp of the message. \n The third party then creates another signature using \nits private key from the message that Alice left behind. \nThe third party then sends the message, the new signa-\nture, and Alice’s and Bob’s identity to Bob, who then \nuses the third party’s public key to ascertain that the \nmessage came from the third party [1]. \n RSA Digital Signature Scheme \n Alice and Bob are the two parties that are going to \nexchange the messages. So, we begin with Alice, who \nwill generate her public and private key using two dis-\ntinct prime numbers — say, p and q. Next she calcu-\nlates n \u0003 p x q. Using Φ (n) \u0003 (p \t 1)(q \t 1), picks e \nand computes d such that e x d \u0003 1 mod ( Φ (n). 
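The signing and verification steps spelled out next — including the hash-then-sign variant described afterward — can be summarized in a short sketch. This is a bare-bones illustration with toy parameters (p = 61, q = 53) and SHA-256 from Python's hashlib; a production signature would use large keys and a padding scheme such as RSA-PSS, and pow(e, -1, phi_n) needs Python 3.8 or later:

```python
import hashlib

p, q = 61, 53
n = p * q                              # 3233
phi_n = (p - 1) * (q - 1)              # 3120
e = 17
d = pow(e, -1, phi_n)                  # 2753

def digest_as_int(message: bytes) -> int:
    # Hash the message, then reduce the digest into the range of the toy modulus.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    return pow(digest_as_int(message), d, n)                  # S = D^d mod n

def verify(message: bytes, signature: int) -> bool:
    return pow(signature, e, n) == digest_as_int(message)     # D' = S^e mod n

msg = b"meet at noon"
sig = sign(msg)
print(verify(msg, sig))                    # True
print(verify(b"meet at midnight", sig))    # False (almost certainly, given the tiny modulus)
```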
Alice \ndeclares (e, n) public, keeping her private key d secret. \n Signing: Alice takes the message and computes the \nsignature as: \n S\nM mod n\nd\n\u0003\n(\n) \n She then sends the message M and the signature S \nto Bob. \n Bob receives the message M and the signature S, \nand then, using Alice’s public key e and the signature S, \nrecreates the message M ’ \u0003 S e (mod n). Next Bob com-\npares M ’ with M, and if the two values are congruent, \nBob accepts the message [1]. \n RSA Digital Signature and the \nMessage Digest \n Alice and Bob agree on a hash function. Alice applies \nthe hash function to the message M and generates the \nmessage digest, D \u0003 hash(M). She then signs the mes-\nsage digest using her private key, \n S\nD mod n\nd\n\u0003\n(\n) \n Alice sends the signature S and the message M to Bob. \nHe then uses Alice’s public key, and the signature S recre-\nates the message digest D ’ \u0003 S e (mod n) as well as com-\nputes the message digest D \u0003 hash(M) from the received \nmessage M. Bob then compares D with D ’ , and if they \nare congruent modulo n, he accepts the message [1]. \n" }, { "page_number": 454, "text": "Chapter | 24 Data Encryption\n421\n 13. SUMMARY \n In this chapter we have attempted to cover cryptogra-\nphy from its very simple structure such as substitution \nciphers to the complex AES and elliptic curve crypto-\nsystems. There is a subject known as cryptoanalysis \nthat attempts to crack the encryption to expose the key, \npartially or fully. We briefly discussed this in the sec-\ntion on the discrete logarithm problem. Over the past 10 \nyears, we have seen the application of quantum theory to \nencryption in what is termed quantum cryptology , which \nis used to transmit the secret key securely over a public \nchannel. The reader will observe that we did not cover \nthe Public Key Infrastructure (PKI) due to lack of space \nin the chapter. \n REFERENCES \n [1] Thomas H. Barr , Invitation to Cryptology , Prentice Hall , 2002 . \n [2] Wenbo Mao , Modern Cryptography, Theory & Practice , Prentice \nHall , New York , 2004 . \n [3] Behrouz A. Forouzan , Cryptography and Network Security , \n McGraw-Hill , 2008 . \n [4] A. Jurisic , A. J. Menezes , Elliptic Curves and Cryptograph , \n Dr. Dobb’s Journals, April 01, 1997 , http://www.ddj.com/\narchitect/184410167 . \n" }, { "page_number": 455, "text": "This page intentionally left blank\n" }, { "page_number": 456, "text": "423\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Satellite Encryption \n Daniel S. Soper \n California State University \n Chapter 25 \n For virtually all of human history, the communication \nof information was relegated to the surface of the Earth. \nWhether written or spoken, transmitted by land, sea, or \nair, all messages had one thing in common: They were, \nlike those who created them, inescapably bound to the \nterrestrial surface. \n In February 1945, however, the landscape of human \ncommunication was forever altered when an article by the \nfamous science fiction writer Arthur C. Clarke proposed \nthe extraordinary possibility that artificial satellites placed \ninto orbit above the Earth could be used to facilitate mass \ncommunication on a global scale. 
A year later, a Project \nRAND report concluded that “ A satellite vehicle with \nappropriate instrumentation [could] be expected to be one \nof the most potent scientific tools of the 20 th century, ” and \nthat “ The achievement of a satellite craft would produce \nrepercussions comparable to the explosion of the atomic \nbomb. ” It was only 12 short years after Clarke’s historic \nprediction that mankind’s first artificial satellite, Sputnik 1 , \nwas transmitting information from orbit back to Earth. \n In the decades that followed, satellite technology \nevolved rapidly from its humble beginnings to become \nan essential tool for such diverse activities as astronomy, \n communications, scientific research, defense, navigation, \nand the monitoring of global weather patterns. In the \n21 st century, satellites are helping to fuel globalization, \nand societies are relying heavily on the power of satellite \ntechnology to enable the modern lifestyle. It is for these \nreasons that satellite communications must be protected. \nBefore examining satellite encryption in detail, however, \na brief review of satellite communication in general may \nbe useful. \n For communications purposes, modern satellites can \nbe classified into two categories: those that commu-\nnicate exclusively with the surface of the Earth (which \nwill be referred to here as “ Type 1 ” satellites) and those \nthat communicate not only with the surface of the Earth \nbut also with other satellites or spacecraft (referred to \nhere as “ Type 2 ” satellites). The distinction between \nthese two types of satellite communications is depicted \nin Figure 25.1 . \n As shown in the figure, there are several varieties of \ncommunication links that a satellite may support, and \nthe classification of a particular satellite as Type 1 or \nType 2 allows us to gain an understanding of its basic \ncommunications capabilities as well as insight into the \nsort of communications links that might need protecting. \nIn the case of Type 1 satellites, the spacecraft may sup-\nport uplink capabilities, downlink capabilities, or both. \nAn uplink channel is a communications channel through \nwhich information is transmitted from the surface of \nthe Earth to an orbiting satellite or other spacecraft. By \ncontrast, a downlink channel is a communications chan-\nnel through which information is transmitted from an \norbiting satellite or other spacecraft to the terrestrial sur-\nface. While Type 2 satellites may possess the uplink and \ndownlink capabilities of a Type 1 satellite, they are also \ncapable of establishing links with spacecraft or other \nType 2 satellites for purposes of extraplanetary commu-\nnication. Type 2 satellites that can act as intermediaries \nbetween other spacecraft and the ground may be clas-\nsified as relay satellites. Note that whether a particular \nlink is used for sending or receiving information depends \non the perspective of the viewer — from the ground, for \nexample, an uplink channel is used to send information, \nbut from the perspective of the satellite, the uplink chan-\nnel is used to receive information. \n 1. THE NEED FOR SATELLITE \nENCRYPTION \n Depending on the type of satellite communications link \nthat needs to be established, substantially different tech-\nnologies, frequencies, and data encryption techniques \nmight be required. 
The reasons for this lie as much in \n" }, { "page_number": 457, "text": "PART | III Encryption Technology\n424\nthe realm of human behavior as they do in the realm of \nphysics. Broadly speaking, it is not unreasonable to con-\nclude that satellite encryption would be entirely unnec-\nessary if every human being were perfectly trustworthy. \nThat is to say, barring a desire to protect our messages \nfrom the possibility of extraterrestrial interception, there \nwould be no need to encrypt satellite communications if \nonly those individuals entitled to send or receive a par-\nticular satellite transmission actually attempted to do so. \nIn reality, however, human beings, organizations, and \ngovernments commonly possess competing or contra-\ndictory agendas, thus implying the need to protect the \nconfidentiality, integrity, and availability of information \ntransmitted via satellite. \n With human behavioral considerations in mind, the \nneed for satellite encryption can be evaluated from a \nphysical perspective. Consider, for example, a Type 1 \ncommunications satellite that has been placed into orbit \nabove the Equator. Transmissions from the satellite to \nthe terrestrial surface (i.e., the downlink channel) would \ncommonly be made by way of a parabolic antenna. \nAlthough such an antenna facilitates focusing the signal, \nthe signal nevertheless disperses in a conical fashion as it \ndeparts the spacecraft and approaches the surface of the \nplanet. The result is that the signal may be made avail-\nable over a wider geographic area than would be opti-\nmally desirable for security purposes. As with terrestrial \nradio, in the absence of encryption anyone within range \nof the signal who possesses the requisite equipment \ncould receive the message. In this particular example, \nthe geographic area over which the signal would be dis-\npersed would depend on both the focal precision of the \nparabolic antenna and the altitude of the satellite above \nthe Earth. These concepts are illustrated in Figure 25.2 . \n Because the sender of a satellite message may have \nlittle or no control over to whom the transmission is \nmade available, protecting the message requires that its \ncontents be encrypted. For similar reasons, extraplan-\netary transmissions sent between Type 2 satellites must \nalso be protected; with thousands of satellites orbiting \nExtraplanetary Links\nDownlink\nUplink\nDownlink\nUplink\nType 2\nType 1\n FIGURE 25.1 Comparison of Type 1 and Type 2 satellite communication capabilities. \nVariable Altitude, Fixed Focal Precision\nFixed Altitude, Variable Focal Precision\n FIGURE 25.2 Effect of altitude and focal precision on satellite signal dispersion. \n" }, { "page_number": 458, "text": "Chapter | 25 Satellite Encryption\n425\nthe planet, the chances of an intersatellite communica-\ntion being intercepted are quite good! \n Aside from these considerations, the sensitivity of the \ninformation being transmitted must also be taken into \naccount. Various entities possess different motivations \nfor wanting to ensure the security of messages transmit-\nted via satellite. An individual, for example, might want \nher private telephone calls or bank transaction details \nto be protected. Likewise, an organization may want to \nprevent its proprietary data from falling into the hands \nof its competition, and a government may want to pro-\ntect its military communications and national security \nsecrets from being intercepted or compromised by an \nenemy. 
As with terrestrial communications, the sensitiv-\nity of the data being transmitted via satellite must dic-\ntate the extent to which those data are protected. If the \nemerging global information society is to fully capital-\nize on the benefits of satellite-based communication, \nits citizens, organizations, and governments must be \nassured that their sensitive data are not being exposed to \nunacceptable risk. In light of these considerations, satel-\nlite encryption can be expected to play a key role in the \nfuture advancement of mankind. \n 2. SATELLITE ENCRYPTION POLICY \n Given the rapid adoption of satellite communications, \nand the potential security implications associated there-\nwith, many governments and multinational coalitions are \nincreasingly establishing policy instruments with a view \ntoward controlling and regulating the availability and \nuse of satellite encryption in both the public and private \n sectors. Such policy instruments have wide-reaching eco-\nnomic, political, and cultural implications that commonly \nextend beyond national boundaries. \n One might, for example, consider the export con-\ntrols placed on satellite encryption technology by the \nU.S. government, which many consider the most strin-\ngent in the world. Broadly, these export controls were \nestablished in support of two primary objectives. First, \nthe maintenance of a restrictive export policy allows the \ngovernment to review and assess the merits of any newly \ndeveloped satellite encryption technologies that have \nbeen proposed for export. If the export of those technolo-\ngies is ultimately approved, the government will possess \na detailed understanding of how the technologies oper-\nate, potentially allowing for their encryption schemes to \nbe defeated if deemed necessary. Second, such controls \nallow the government to prevent satellite encryption \ntechnologies of particularly high merit from leaving the \ncountry, especially if the utilization of those technologies \nby foreign entities would interfere with U.S. intelligence-\ngathering activities. Although these stringent controls may \nappear to be a legacy of the xenophobic policies of the \nCold War, they are nevertheless still seen as prudent meas-\nures in a world where information and communication \ntechnologies can be readily leveraged to advance extreme \nagendas. Unfortunately, such controls have potentially \nnegative economic implications insofar as U.S. firms may \nbe barred from competing in the increasingly lucrative \nglobal market for satellite communication technologies. \n The establishment and maintenance of satellite \nencryption policy also needs to be considered in the \ncontext of satellite systems of global import. Consider, \nfor example, the NAVSTAR global positioning system \n(GPS), whose constellation of satellites enables anyone \nwith a GPS receiver to accurately determine their current \nlocation, time, velocity, and direction of travel anywhere \non or near the surface of the Earth. In recent years, GPS \ncapabilities have been incorporated into the navigational \nsystems of automobiles, oceangoing vessels, trains, com-\nmercial aircraft, military vehicles, and many other forms \nof transit all over the world. Despite their worldwide \nuse, the NAVSTAR GPS satellites are currently operated \nby the 50th Space Wing of the U.S. Air Force, implying \nthat satellite encryption policy decisions related to the \nGPS are controlled by the U.S. Department of Defense. 
\nOne of the options available to the U.S. government \nthrough this arrangement is the ability to selectively or \nentirely block the transmission of civilian GPS signals \nwhile retaining access to GPS signals for military pur-\nposes. Additionally, the U.S. government reserves the \nright to introduce errors into civilian GPS signals, thus \nmaking them less accurate. As the U.S. government \nexercises exclusive control over the GPS system, users \nall over the world are forced to place a great deal of trust \nin the goodwill of its operators and in the integrity of the \nencryption scheme for the NAVSTAR uplink channel. \nGiven the widespread use of GPS navigation, a global \ncatastrophe could ensue if the encryption scheme used \nto control the NAVSTAR GPS satellites were to be com-\npromised. Because the GPS system is controlled by the \nU.S. military, the resiliency and security of this encryp-\ntion scheme cannot be independently evaluated. \n In the reality of a rapidly globalizing and intercon-\nnected world, the effectiveness of national satellite encryp-\ntion policy efforts may not be sustainable in the long run. \nSatellite encryption policies face the same legal difficulties \nas so many other intrinsically international issues; outside \nof international agreements, the ability of a specific coun-\ntry to enforce its laws extends only so far as its geographic \n" }, { "page_number": 459, "text": "PART | III Encryption Technology\n426\nboundaries. This problem is particularly relevant in the \ncontext of satellite communications because the satellites \nthemselves orbit the Earth and hence do not lie within the \ngeographic boundaries of any nation. Considered in con-\njunction with governments ’ growing need to share intel-\nligence resources and information with their allies, future \nefforts targeted toward satellite encryption policy making \nmay increasingly fall under the auspices of international \norganizations and multinational coalitions. \n 3. IMPLEMENTING SATELLITE \nENCRYPTION \n It was noted earlier in this chapter that information can \nbe transmitted to or from satellites using three general \ntypes of communication links: surface-to-satellite links \n(uplinks), satellite-to-surface links (downlinks), and inter-\nsatellite or interspacecraft links (extraplanetary links). \nTechnological considerations notwithstanding, the spe-\ncific encryption mechanism used to secure a transmis-\nsion depends not only on which of these three types of \nlinks is being utilized but also on the nature and purpose \nof the message being transmitted. For purposes of sim-\nplicity, the value of transmitted information can be clas-\nsified along two dimensions: high value and low value. \nThe decision as to what constitutes high-value and low-\nvalue information largely depends on the perspective of \nthe beholder; after all, one man’s trash is another man’s \ntreasure. Nevertheless, establishing this broad distinction \nallows satellite encryption to be considered in the context \nof the conceptual model shown in Figure 25.3 . \n As shown in the figure, any satellite-based communi-\ncation can be classified into one of six possible categories. \nIn the subsections that follow, each of these categories will \nbe addressed by considering the encryption of both high-\nvalue and low-value data in the context of the three types \nof satellite communication links. 
Before we consider the \nspecific facets of encryption pertaining to satellite uplink, \nextraplanetary, and downlink transmissions, however, an \nexamination of several of the more general issues associ-\nated with satellite encryption may be prudent. \n General Satellite Encryption Issues \n One of the problems common to all forms of satellite \nencryption relates to signal degradation. Satellite sig-\nnals are typically sent over long distances using com-\nparatively low-power transmissions and must frequently \ncontend with many forms of interference, including ter-\nrestrial weather, solar and cosmic radiation, and many \nother forms of electromagnetic noise. Such disturbances \nmay result in the introduction of gaps or errors into a \nsatellite transmission. Depending on the encryption \nalgorithm chosen, this situation can be particularly prob-\nlematic for encrypted satellite transmissions because the \nentire encrypted message may be irretrievable if even a \nsingle bit of data is out of place. To resolve this prob-\nlem, a checksum or cryptographic hash function may \nbe applied to the encrypted message to allow errors to \nbe identified and reconciled on receipt. This approach \ncomes at a cost, however: Appending checksums or \nerror-correcting code to an encrypted message increases \nthe length of the message and by extension increases the \ntime required for the message to be transmitted and low-\ners the satellite’s overall communications capacity due \nto the extra burden placed on its limited resources. \n Another common problem associated with satel-\nlite encryption relates to establishing the identity of the \nsender of a message. A satellite, for example, needs \nto know that the control signals it is receiving from the \nground originate from an authorized source. Similarly, an \nintelligence agency receiving a satellite transmission from \none of its operatives needs to establish that the transmis-\nsion is authentic. To establish the identity of the sender, \nthe message needs to be encrypted in such a way that \nfrom the recipient’s perspective, only a legitimate sender \ncould have encoded the message. The sender, of course, \nalso wants to ensure that the message is protected while \nin transit and thus desires that only an authorized recipi-\nent would be able to decode the message on receipt. Both \nparties to the communication must therefore agree on an \nencryption algorithm that serves to identify the authen-\nticity of the sender while affording a sufficient level of \nprotection to the message while it is in transit. Although \nType of Satellite Communications Link\nUplink\nDownlink\nExtraplanetary Link\nCategory 01\nHigh-Value\nLow-Value\nData Value\nCategory 02\nCategory 03\nCategory 04\nCategory 05\nCategory 06\n FIGURE 25.3 Satellite communications categories as a function of data value and type of link. \n" }, { "page_number": 460, "text": "Chapter | 25 Satellite Encryption\n427\nkeyless encryption algorithms may satisfy these two cri-\nteria, such algorithms are usually avoided in satellite \ncommunications, since the satellite may become useless \nif the keyless encryption algorithm is broken. Instead, \nkeyed encryption algorithms are typically used to protect \ninformation transmitted via satellite. Using keyed encryp-\ntion algorithms can be problematic, however. 
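Before turning to those problems, here is a minimal sketch of the error-detection approach mentioned at the start of this section: a SHA-256 digest is appended to the already encrypted frame before transmission and re-checked on receipt. The framing and field sizes here are invented for illustration and are not taken from any satellite standard; note also that a bare hash detects accidental corruption only, and resisting deliberate tampering calls for a keyed digest (a MAC), which runs into the key-handling questions discussed next.

```python
import hashlib
import os

DIGEST_LEN = 32  # length of a SHA-256 digest in bytes

def frame_for_transmission(ciphertext: bytes) -> bytes:
    """Append a digest so the receiver can detect corruption in transit."""
    return ciphertext + hashlib.sha256(ciphertext).digest()

def receive_frame(frame: bytes):
    """Return the ciphertext if the digest checks out, otherwise None."""
    ciphertext, digest = frame[:-DIGEST_LEN], frame[-DIGEST_LEN:]
    if hashlib.sha256(ciphertext).digest() != digest:
        return None            # corrupted in transit; request retransmission
    return ciphertext

encrypted_payload = os.urandom(64)        # stand-in for an already encrypted message
frame = frame_for_transmission(encrypted_payload)
print(receive_frame(frame) == encrypted_payload)   # True

corrupted = bytearray(frame)
corrupted[10] ^= 0x01                      # a single bit flipped by interference
print(receive_frame(bytes(corrupted)))     # None: the error is detected
```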
\n To gain insight into the problems associated with \nkeyed encryption, one might first consider the case of a \nsymmetrically keyed encryption algorithm, wherein the \nsame key is used to both encode and decode a message. \nIf party A wants to communicate with party B via satel-\nlite using this method, then both A and B must agree on \na secret key. As long as the key remains secret, it also \nserves to authenticate both parties. If party A also wants to \ncommunicate with party C, however, A and C must agree \non their own unique secret key; otherwise party B could \nmasquerade as A or C, or vice versa. A keyed encryption \napproach to satellite communication thus requires that \neach party establish a unique secret key with every other \nparty with whom they would like to communicate. To fur-\nther compound this problem, each party must obtain all its \nsecret keys in advance because possession of an appropri-\nate key is a necessary prerequisite to establishing a secure \ncommunications channel with another party. \n To resolve these issues, an asymmetrically keyed \nencryption algorithm may be adopted wherein the key \nused to encrypt a message is different from the key used to \ndecrypt the message. Such an approach requires each party \nto maintain only two keys, one of which is kept private and \nthe other of which is made publicly available. If party A \nwants to send party B a secure transmission, A first asks \nB for her public key, which can be transmitted over an \nunsecured connection. Party A then encodes a secret mes-\nsage using B’s public key. The message is secure because \nonly B’s private key can decode the message. To authenti-\ncate herself to B, party A needs only to reencode the entire \nmessage using her own private key before transmitting the \nmessage to B. On receiving the message, B can establish \nwhether it was sent by A, because only A’s private key \ncould have encoded a message that can be decoded with \nA’s public key. This process is depicted in Figure 25.4 . \n Unfortunately, even this approach to satellite encryp-\ntion is not entirely foolproof. To understand why, con-\nsider how a malicious party M might interject himself \nbetween A and B to intercept the secure communication. \nTo initiate the secure transmission, A must request B’s \npublic key over an unsecured channel. If this request is \nintercepted by M, M can supply A with his own (that is, \nM’s) public key. A will then encrypt the message with \nM’s public key, after which she will reencrypt the result \nof the first encryption operation with her own private \nkey. A will then transmit the encrypted message to B, \nwhich will once again be intercepted by M. Using A’s \npublic key in conjunction with his own private key, \nM will be able to decrypt and read the message. \nRECEIVER\nSenderʼs \nPublic Key\nDoubly-encrypted\nmessage sent to\nreceiver\nReceiverʼs \nPublic Key\nParties exchange\npublic keys\nReceiver\nauthenticates\nsender and\ndecrypts message\nStep 01\nStep 02\nStep 03\nSENDER\nMessage\nEncrypted\nwith\nReceiverʼs\nPublic Key\nDecrypted\nwith\nSenderʼs\nPublic Key\nDecrypted\nwith\nReceiverʼs\nPrivate Key\nMessage\nEncrypted\nwith\nSenderʼs\nPrivate Key\n FIGURE 25.4 Ensuring sender identity and message security with asynchronously-keyed encryption .\n" }, { "page_number": 461, "text": "PART | III Encryption Technology\n428\n A will not know that the message has been inter-\ncepted, and B will not know that a message was even \nsent. 
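The substitution that makes this interception possible is easy to reproduce in a toy model. The sketch below uses textbook RSA with tiny, made-up parameters purely as stand-ins for real keys (no padding, and pow(e, -1, phi) requires Python 3.8+); the point is only that whoever answers the unauthenticated public-key request controls who can read the message:

```python
def make_keypair(p, q, e):
    n, phi = p * q, (p - 1) * (q - 1)
    return (e, n), (pow(e, -1, phi), n)        # (public key, private key)

def apply(key, value):
    exponent, n = key
    return pow(value, exponent, n)             # textbook RSA operation, no padding

alice_pub, alice_priv     = make_keypair(61, 53, 17)   # n = 3233
mallory_pub, mallory_priv = make_keypair(47, 59, 17)   # n = 2773
# B's real keypair never enters the picture: M answered A's public-key request.

message = 1234

# A believes mallory_pub is B's public key, encrypts with it, then "signs" by
# applying her own private key, exactly as in the two-step flow of Figure 25.4.
in_transit = apply(alice_priv, apply(mallory_pub, message))

# M intercepts: undo A's signature with her public key, then decrypt with his
# own private key.
recovered = apply(mallory_priv, apply(alice_pub, in_transit))
print(recovered == message)    # True: M reads the plaintext
```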
Note that intercepting a secure communication is \nparticularly easy for M if he owns or controls the satel-\nlite through which the message is being routed. In addi-\ntion to the risk of interception, asynchronously keyed \nencryption algorithms are typically at least 10,000 times \nslower than synchronously keyed encryption algo-\nrithms — a situation that may place an enormously large \nburden on a satellite’s limited computational resources. \nUntil a means is developed of dynamically and securely \ndistributing synchronous keys, satellite-based encryption \nwill always require tradeoffs among security, computa-\ntional complexity, and ease of implementation. \n Uplink Encryption \n Protecting a transmission that is being sent to a satellite \nfrom at or near the surface of the Earth requires much \nmore than just cryptographic techniques — to wit, encrypt-\ning the message itself is a necessary but insufficient condi-\ntion for protecting the transmission. The reason for this is \nthat the actual transmission of the encrypted message to the \nsatellite is but the final step in a long chain of custody that \nbegins when the message is created and ends when \nthe message is successfully received by the satellite. Along \nthe way, the message may pass through many people, \nsystems, or networks, the control of which may or may not \nreside entirely in the hands of the sender. If one assumes \nthat the confidentiality and integrity of the message have \nnot been compromised as the message has passed through \nall these intermediaries, then but two primary security \nconcerns remain: the directional accuracy of the transmit-\nting antenna and the method used to encrypt the message. \nIn the case of the former, the transmitting antenna must be \nsufficiently well focused to allow the signal to be received \nby — and ideally only by — the target satellite. With thou-\nsands of satellites in orbit, a strong potential exists for a \npoorly focused transmission to be intercepted by another \nsatellite, in which case the only remaining line of defense \nfor a message is the strength of the encryption algorithm \nwith which it was encoded. For this reason, a prudent \nsender should always assume that her message could \nbe intercepted while in transit to the satellite and should \nimplement message encryption accordingly. \n When deciding on which encryption method to use, the \nsender must simultaneously consider the value of the data \nbeing transmitted, the purpose of the transmission, and the \ntechnological and computational limitations of the target \nsatellite. A satellite’s computational and technological \ncapabilities are a function of its design specifications, its \ncurrent workload, and any degradation that has occurred \nsince the satellite was placed into orbit. These properties \nof the satellite can therefore be considered constraints; \nany encrypted uplink communications must work within \nthe boundaries of these limitations. That having been said, \nthe purpose of the transmission also features prominently \nin the choice of which encryption method to use. Here \nwe must distinguish between two types of transmissions: \ncommands, which instruct the satellite to perform one or \nmore specific tasks, and transmissions in transit, which are \nintended to be retransmitted to the surface or to another \nsatellite or spacecraft. 
Not only are command instructions \nof high value, they are also not typically burdened with the \nsame low-latency requirements of transmissions in transit. \nCommand instructions should therefore always be highly \nencrypted because control of the satellite could be lost if \nthey were to be intercepted and compromised. \n What remains, then, are transmissions in transit, \nwhich may be of either high value or low value. One of \nthe basic tenants of cryptography states that the value of \nthe data should dictate the extent to which the data are \nprotected. As such, minimal encryption may be accept-\nable for low-value transmissions in transit. For such \ntransmissions, adding an unnecessarily complex layer of \nencryption may increase the computational burden on the \nsatellite, which in turn may delay message delivery and \nlimit the satellite’s ability to perform other tasks simul-\ntaneously. High-value transmissions in transit should be \nprotected with a robust encryption scheme that reflects \nthe value of the data being transmitted. The extent to \nwhich a highly encrypted transmission in transit will neg-\natively impact a satellite’s available resources depends \non whether or not the message needs to be processed \nbefore being retransmitted. If the message is simply \nbeing relayed through the satellite without any addi-\ntional processing, the burden on the satellite’s resources \nmay be comparatively small. If, however, a highly \nencrypted message needs to be processed by the satellite \nprior to retransmission (e.g., if the message needs to be \ndecrypted, processed, and then re-encrypted), the burden \non the satellite’s resources may be substantial. Processing \nhigh-value, highly encrypted transmissions in transit may \ntherefore vastly reduce a satellite’s throughput capabili-\nties when considered in conjunction with its technologi-\ncal and computational limitations. \n Extraplanetary Link Encryption \n Before a signal is sent to the terrestrial surface, it may \nneed to be transmitted across an extraplanetary link. \n" }, { "page_number": 462, "text": "Chapter | 25 Satellite Encryption\n429\nTelemetry from a remote spacecraft orbiting Mars, for \nexample, may need to be relayed to scientists by way of \nan Earth-orbiting satellite. Alternatively, a television sig-\nnal originating in China may need to be relayed around \nthe Earth by several intermediary satellites in order to \nreach its final destination in the United States. In such \ncircumstances, several unique encryption-related issues \nmay arise, each of which is associated with the routing of \nan extraplanetary transmission through one or more satel-\nlite nodes. Perhaps the most obvious of these issues is the \nscenario that arises when the signal transmitted from \nthe source satellite or spacecraft is not compatible with \nthe receiving capabilities of the target. For example, the \nvery low-power signals transmitted from a remote explor-\natory spacecraft may not be detectable by a particular lis-\ntening station on the planet’s surface, or the data rate or \nsignal modulation with which an extraplanetary transmis-\nsion is sent may not be supported by the final recipient. \nIn this scenario, the intermediary satellite through which \nthe signal is being routed must act as an interpreter or \ntranslator of sorts, a situation illustrated in Figure 25.5 . 
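A small sketch of the in-transit translation just described: the relay decrypts under the scheme shared with the uplink side and re-encrypts under the scheme shared with the downlink side. The two "schemes" are modeled here simply as two symmetric keys using the third-party cryptography package's Fernet recipe — an illustration of the idea, not anything resembling an actual satellite link protocol:

```python
from cryptography.fernet import Fernet   # pip install cryptography

# Two incompatible "schemes", modeled here as two independent symmetric keys.
uplink_key = Fernet.generate_key()       # shared with the originating ground station
downlink_key = Fernet.generate_key()     # shared with the destination ground station

def relay(frame_from_uplink: bytes) -> bytes:
    """Decrypt under scheme A, re-encrypt under scheme B.

    The plaintext briefly exists on board, which is why a relay that performs
    this translation must itself be trusted."""
    plaintext = Fernet(uplink_key).decrypt(frame_from_uplink)
    return Fernet(downlink_key).encrypt(plaintext)

original = Fernet(uplink_key).encrypt(b"telemetry sample 42")
translated = relay(original)
print(Fernet(downlink_key).decrypt(translated))   # b'telemetry sample 42'
```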
 From an encryption perspective, the situation we've illustrated implies that the intermediary satellite may need to decrypt the extraplanetary message and re-encrypt it using a different encryption scheme prior to retransmission. A similar issue may arise for legal or political reasons. Consider, for example, a message that is being transmitted from one country to another by way of several intermediary satellites. The first country may have no standing policies regarding the encryption of messages sent via satellite, whereas the second country may have policies that strictly regulate the encryption standards of messages received via satellite. In this case, one or more of the orbiting satellites may need to alter the encryption of a message in transit to satisfy the legal and regulatory guidelines of both countries.
 Downlink Encryption
 Several issues affect how information is protected as it is transmitted from orbiting satellites to the surface of the Earth. As with uplink encryption, the technological and computational capabilities of the spacecraft may constrain the extent to which a particular message can be protected. If, for example, an older communications satellite does not possess the requisite hardware or software capabilities to support a newly developed downlink encryption scheme, that scheme simply cannot be used with the satellite. Similarly, if the utilization of a particular encryption scheme would reduce the efficiency or message-handling capacity of a satellite to a level that is deemed unacceptable, the satellite's operators may choose to prioritize capacity over downlink security. The precision with which a satellite is able to focus a downlink transmission may also impact the choice of encryption scheme; as noted earlier in this chapter, a widely dispersed downlink signal can be more readily intercepted than can a signal transmitted with a narrow focus. Though each of these computational and technological limitations must be considered when selecting a downlink encryption scheme, they are by no means the only factors requiring consideration.
 Unlike uplink signals, which can only originate from the surface of the planet, messages to be transmitted over
 FIGURE 25.5 In-transit translation of message encryption in satellite communication: a message relayed with encryption method “A” from an uplink station that supports only method “A” is converted by the satellite to encryption method “B” for a downlink station that supports only method “B”.
" }, { "page_number": 463, "text": "
a downlink channel can come from one of three sources: the terrestrial surface, another spacecraft, or the satellite itself. The source of the message to be broadcast to the planet's surface plays a critical role in determining the method of protection for that message. Consider, for example, a message that originates from the planet's surface or from another spacecraft. In this case, one of two possible scenarios may exist. First, the satellite transmitting the message to Earth may be serving only as a simple signal router or amplifying transmitter; that is to say, the message is already encrypted on receipt, and the satellite is simply relaying the previously encrypted message to the surface.
In this case, the satellite transmitting \nthe downlink signal has very little to do with the encryp-\ntion of the message, and only the integrity of the message \nand the retransmission capabilities of the satellite at the \ntime the message is received need be considered. In the \nsecond scenario, a satellite may need to filter a message \nor alter its encryption method prior to downlink trans-\nmission. For example, a satellite may receive signals that \nhave been optimized for extraplanetary communication \nfrom a robotic exploration spacecraft in the far reaches \nof the solar system. Prior to retransmission, the satel-\nlite may need to decrypt the data, process it, and then \nreencrypt the data using a different encryption scheme \nmore suited to a downlink transmission. In this case, the \ntechnological capabilities of the satellite, the timeliness \nwith which the data need to be delivered, and the value \nof the data themselves dictate the means through which \nthose data are protected prior to downlink transmission. \n Finally, one might consider the scenario in which \nthe data being transmitted to the terrestrial surface origi-\nnate from the satellite itself rather than from the surface \nor from another spacecraft. Such data can be classified \nas either telemetry relating to the status of the satellite \nor as information that the satellite has acquired or pro-\nduced while performing an assigned task. In the case of \nthe former, telemetry relating to the status of the satellite \nshould always be highly protected, since it may reveal \ndetails about the satellite’s capabilities, inner workings, \nor control systems if it were to be intercepted and com-\npromised. In the case of the latter, however, the value of \nthe data that the satellite has acquired or produced should \ndictate the extent to which those data are protected. \nCritical military intelligence, for example, should be sub-\njected to a much higher standard of encryption than data \nthat are comparatively less valuable. In the end, a satel-\nlite operator must weigh many factors when deciding \non the extent to which a particular downlink transmis-\nsion should be protected. It is tempting to conclude that \nthe maximum level of encryption should be applied to \nevery downlink transmission. Doing so, however, would \nunnecessarily burden satellites ’ limited resources and \nwould vastly reduce the communications capacity of the \nglobal satellite network. Instead, a harmonious balance \nneeds to be sought between a satellite’s technological \nand computational capabilities, and the source, volume, \nand value of the data that it is asked to handle. Only by \nachieving such a balance can the maximum utility of a \nsatellite be realized. \n 4. THE FUTURE OF SATELLITE \nENCRYPTION \n Despite the many challenges faced by satellite encryp-\ntion, the potential advantages afforded by satellites to \nmankind are so tantalizing and alluring that the utili-\nzation of satellite-based communication can only be \nexpected to grow for the foreseeable future. As glo-\nbalization continues its indefatigable march across the \nterrestrial surface, access to secure high-speed commu-\nnications will be needed from even the most remote and \nsparsely populated corners of the globe. Satellites by \ntheir very nature are well positioned to meet this demand \nand will therefore play a pivotal role in interconnecting \nhumanity and enabling the forthcoming global infor-\nmation society. 
Furthermore, recent developments in \nthe area of quantum cryptography promise to further \nimprove the security of satellite-based encryption. This \nrapidly advancing technology allows the quantum state \nof photons to be manipulated in such a way that the pho-\ntons themselves can carry a synchronous cryptographic \nkey. The parties involved in a secure communication can \nbe certain that the cryptographic key has not been inter-\ncepted, because eavesdropping on the key would intro-\nduce detectable quantum anomalies into the photonic \ntransmission. By using a constellation of satellites in low \nEarth orbit, synchronous cryptographic keys could be \nsecurely distributed via photons to parties that want to \ncommunicate, thus resolving the key exchange problem. \nThe parties could then establish secure communications \nusing more traditional satellite channels. The further \ndevelopment and adoption of technologies such as quan-\ntum cryptography ensures that satellite-based communi-\ncation has a bright — and secure — future. \n There are, of course, risks to relying heavily on satellite-\nbased communication. Specifically, if the ability to access \ncritical satellite systems fails due to interference or damage \nto the satellite, disastrous consequences may ensue. What \nmight happen, for example, if interference from a solar \nflare were to disrupt the constellation of global positioning \n" }, { "page_number": 464, "text": "Chapter | 25 Satellite Encryption\n431\nsatellites? What might happen if a micrometeoroid storm \nwere to damage all the weather satellites monitoring the \nprogress of a major hurricane? What might happen to a \nnation’s ability to make war if antisatellite weapons were \ndeployed to destroy its military communications and \nintelligence-gathering satellites? Questions such as these \nhighlight the risks of relying too heavily on artificial sat-\nellites. Nevertheless, as the costs associated with building, \nlaunching, and operating satellites continue to decline, the \nutilization of satellite technology will, for the foreseeable \nfuture, become an increasingly common part of the human \nexperience. \n" }, { "page_number": 465, "text": "This page intentionally left blank\n" }, { "page_number": 466, "text": "433\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Public Key Infrastructure \n Terence Spies \n Voltage Security, Inc. \n Chapter 26 \n The ability to create, manipulate, and share digital docu-\nments has created a host of new applications (email, word \nprocessing, ecommerce Web sites), but it also created a \nnew set of problems — namely how to protect the privacy \nand integrity of digital documents when they’re stored \nand transmitted. The invention of public key cryptogra-\nphy in the 1970s 1 — most important, the ability to encrypt \ndata without a shared key and the ability to “ sign ” data, \nensuring its origin and integrity — pointed the way to a \nsolution to those problems. Though these operations are \nquite conceptually simple, they both rely on the ability \nto bind a public key (which is typically a large math-\nematical object) reliably with an identity sensible to the \napplication using the operation (for example, a globally \nunique name, a legal identifier, or an email address). \nPublic Key Infrastructure (PKI) is the umbrella term used \nto refer to the protocols and machinery used to perform \nthis binding. 
\n This chapter explains the cryptographic background \nthat forms the foundation of PKI systems, the mechan-\nics of the X.509 PKI system (as elaborated by the \nInternet Engineering Task Force, or IETF), the practical \nissues surrounding the implementation of PKI systems, \na number of alternative PKI standards, and alterna-\ntive cryptographic strategies for solving the problem of \nsecure public key distribution. PKI systems are complex \nobjects that have proven difficult to implement properly. 2 \nThis chapter aims to survey the basic architecture of PKI \nsystems and some of the mechanisms used to implement \nthem. It does not aim to be a comprehensive guide to all \nPKI standards or to contain sufficient technical detail to \nallow implementation of a PKI system. These systems are \ncontinually evolving, and the reader interested in building \nor operating a PKI is advised to consult the current work \nof standards bodies referenced in this chapter. \n 1. CRYPTOGRAPHIC BACKGROUND \n To understand how PKI systems function, it is neces-\nsary to grasp the basics of public key cryptography. PKI \nsystems enable the use of public key cryptography, and \nthey also use public key cryptography as the basis for \ntheir operation. There are thousands of varieties of cryp-\ntographic algorithms, but we can understand PKI opera-\ntions by looking at only two: signatures and encryption. \n Digital Signatures \n The most important cryptographic operation in PKI sys-\ntems is the digital signature. If two parties are exchang-\ning some digital document, it may be important to protect \nthat data so that the recipient knows that the document \nhas not been altered since it was sent and that any docu-\nment received was indeed created by the sender. Digital \nsignatures provide these guarantees by creating a data \nitem, typically attached to the document in question that \nis uniquely tied to the data and the sender. The recipient \nthen has some verification operation that confirms that \nthe signature data matches the sender and the document. \n Figure 26.1 illustrates the basic security problem \nthat motivates signatures. An attacker controlling com-\nmunications between the sender and receiver can insert a \nbogus document, fooling the receiver. \n The aim of the digital signature is to block this attack \nby attaching a signature that can only be created by the \nsender, as shown in Figure 26.2 . \n Cryptographic algorithms can be used to construct \nsecure digital signatures. These techniques (for exam-\nple, the RSA or DSA algorithms) all have the same three \nbasic operations, as shown in Table 26.1 . \n 1 W. Diffi e and M. E. Hellman, “ New directions in cryptography, ” \n IEEE Trans. Inform. Theory, IT-22, 6, 1976, pp. 644 – 654. \n 2 P. Gutmann, “ Plug-and-Play PKI: A PKI Your Mother Can Use, ” in \n Proc. 12th Usenix Security Symp., Usenix Assoc., 2003, pp. 45 – 58. \n" }, { "page_number": 467, "text": "PART | III Encryption Technology\n434\n Public Key Encryption \n Variants of the three operations used to construct digital \nsignatures can also be used to encrypt data. Encryption \nuses a public key to scramble data in such a way that \nonly the holder of the corresponding private key can \nunscramble it (see Figure 26.3 ). \n Public key encryption is accomplished with variants \nof the same three operations used to sign data, as shown \nin Table 26.2 . 
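 The three operations in Tables 26.1 and 26.2 map directly onto the interfaces exposed by modern cryptographic libraries. The following minimal sketch is an illustration only: it assumes the third-party Python cryptography package, and the choice of RSA with PSS padding for signatures and OAEP padding for encryption is likewise an illustrative assumption rather than a requirement.

# Illustrative sketch of the operations in Tables 26.1 and 26.2 using the
# third-party Python "cryptography" package. RSA with PSS (signatures) and
# OAEP (encryption) are assumed here purely as example algorithm choices.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Key generation: Kprivate stays with the sender; Kpublic is distributed.
k_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
k_public = k_private.public_key()

document = b"Quarterly report, v1.0"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Signing: only the holder of Kprivate can produce this value.
signature = k_private.sign(document, pss, hashes.SHA256())

# Verification: any holder of Kpublic can check it; verify() raises
# InvalidSignature if the document or the signature has been altered.
k_public.verify(signature, document, pss, hashes.SHA256())

# Encryption and decryption: Kpublic scrambles data that only the holder
# of the matching Kprivate can recover.
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = k_public.encrypt(b"confidential note", oaep)
assert k_private.decrypt(ciphertext, oaep) == b"confidential note"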
\n The security of signature and encryption operations \ndepends on two factors: first, the ability to keep the pri-\nvate key private and, second, the ability to reliably tie a \npublic key to a sender. If a private key is known to an \nBlocks the\noriginal document\nand inserts a\nbogus one\nAttacker\nBogus\nDocument\nDocument\nSignature Security Model\nSender\nReceiver\n FIGURE 26.1 Block diagram of altering an unsigned document. \nCan insert\ndocument but cannot\ngenerate\nsignature\nSignature fails, so\nknows document\nis bogus\nAttacker\nBogus\nDocument\nDocument\nSignature\nSignature\nSigned Document\nSender\nReceiver\n FIGURE 26.2 Block diagram showing prevention of an alteration attack via digital signature. \n TABLE 26.1 The Three Fundamental Digital Signature Operations \n Key Generation \n Using some random source, the sender creates a public and private key, \ncalled Kpublic and Kprivate. Using Kpublic, it is cryptographically difficult \nto derive Kprivate. The sender then distributes Kpublic, and keeps Kprivate \nhidden. \n Signing \n Using a document and Kprivate, the sender generates the signature data. \n Verification \n Using the document, the signature, and Kpublic, the receiver (or any \nother entity with these elements) can test that the signature matches \nthe document, and could only be produced with the Kprivate matching \nKpublic. \n" }, { "page_number": 468, "text": "Chapter | 26 Public Key Infrastructure\n435\n attacker, they can then perform the signing operation \non arbitrary bogus documents and can also decrypt any \ndocument encrypted with the matching public key. The \nsame attacks can be performed if an attacker can con-\nvince a sender or receiver to use a bogus public key. \n PKI systems are built to securely distribute public \nkeys, thereby preventing attackers from inserting bogus \npublic keys. They do not directly address the security \nof private keys, which are typically defended by meas-\nures at a particular endpoint, such as keeping the private \nkey on a smartcard, encrypting private key data using \noperating system facilities, or other similar mecha-\nnisms. The remainder of this section details the design, \nimplementation, and operation of public key distribution \nsystems. \n 2. OVERVIEW OF PKI \n PKI systems solve the problem of associating meaning-\nful names with essentially meaningless cryptographic \nkeys. For example, when encrypting an email, a user will \ntypically specify a set of recipients that should be able \nto decrypt that mail. The user will want to specify these \nas some kind of name (email address or a name from \na directory), not as a set of public keys. In the same way, \nwhen signed data is received and verified, the user will \nwant to know what user signed the data, not what pub-\nlic key correctly verified the signature. The design goal \nof PKI systems is to securely and efficiently connect \nuser identities to the public keys used to encrypt and \nverify data. \n The original Diffie-Hellman paper 3 that outlined pub-\nlic key cryptography proposed that this binding would be \ndone through storing public keys in a trusted directory. \nWhenever a user wanted to encrypt data to another user, \nthey would consult the “ public file ” and request the pub-\nlic key corresponding to some user. The same operation \nwould yield the public key needed to verify the signature \non signed data. 
The disadvantage of this approach is that \nthe directory must be online and available for every new \nencryption and verification operation. (Though this older \napproach was never widely implemented, variants of this \napproach are now reappearing in newer PKI designs. \nFor more information, see the section on alternative PKI \narchitectures.) \n PKI systems solve this online problem and accom-\nplish identity binding by distributing “ digital certifi-\ncates, ” chunks of data that contain an identity and a key, \nall authenticated by digital signature, and providing a \nmechanism to validate these certificates. Certificates, \n 3 W. Diffi e and M. E. Hellman, “ New directions in cryptography, ” \n IEEE Trans. Inform. Theory, IT-22, 6, 1976, pp. 644 – 654. \nPlaintext\nDocument\nReceiver\nPublic Key\nSender\nEncrypted\nDocument\nPublic Key Encryption\nEncrypted\nDocument\nReceiver\nPrivate Key\nReceiver\nPlaintext\nDocument\nPublic Key Decryption\n FIGURE 26.3 The public key encryption and decryption process. \n TABLE 26.2 The three fundamental public key \nencryption operations \n Key Generation \n Using some random source, the sender \ncreates a public and private key, called \nKpublic and Kprivate. Using Kpublic, it \nis cryptographically difficult to derive \nKprivate. The sender then distributes \nKpublic, and keeps Kprivate hidden. \n Encryption \n Using a document and Kpublic, the \nsender encrypts the document. \n Decryption \n The receiver uses Kprivate to decrypt the \ndocument. \n" }, { "page_number": 469, "text": "PART | III Encryption Technology\n436\n invented by Kohnfelder in 1978, 4 are essentially a digit-\nally signed message from some authority stating “ Entity \nX is associated with public key Y. ” Communicating par-\nties can then rely on this statement (to the extent that \nthey trust the authority signing the certificate) to use the \npublic key Y to validate a signature from X or to send an \nencrypted message to X. Since time may pass between \nwhen the signed certificate is produced and when some-\none uses that certificate, it may be useful to have a vali-\ndation mechanism to check that the authority still stands \nby the certificate. We will describe PKI systems in terms \nof producing and validating certificates. \n There are multiple standards that describe the way cer-\ntificates are formatted. The X.509 standard, promulgated \nby the ITU, 5 is the most widely used and is the certificate \nformat used in the TLS/SSL protocols for secure Internet \nconnections and the S/MIME standards for secured email. \nThe X.509 certificate format also implies a particular \nmodel of how certification works. Other standards have \nattempted to define alternate models of operation and \nassociated certificate models. Among the other stand-\nards that describe certificates are Pretty Good Privacy \n(PGP) and the Simple Public Key Infrastructure (SPKI). \nIn this section, we’ll describe the X.509 PKI model, then \ndescribe how these other standards attempt to remediate \nproblems with X.509. \n 3. THE X.509 MODEL \n The X.509 model is the most prevalent standard for cer-\ntificate-based PKIs, though the standard has evolved such \nthat PKI-using applications on the Internet are mostly \nbased on the set of IETF standards that have evolved and \nextended the ideas in X.509. X.509-style certificates are \nthe basis for SSL, TLS, many VPNs, the U.S. federal gov-\nernment PKI, and many other widely deployed systems. 
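 Because X.509 certificates protect everyday TLS connections, one is easy to examine directly. The short sketch below is purely illustrative; it assumes network access, Python's standard ssl module, and the third-party cryptography package, and it prints the certificate fields discussed in the sections that follow.

# Fetch a server certificate over TLS and print its basic fields.
# Illustrative sketch only; assumes network access, Python's standard ssl
# module, and the third-party "cryptography" package.
import ssl
from cryptography import x509

pem = ssl.get_server_certificate(("www.example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode("ascii"))

print("Subject:      ", cert.subject.rfc4514_string())
print("Issuer:       ", cert.issuer.rfc4514_string())
print("Valid from:   ", cert.not_valid_before)
print("Valid until:  ", cert.not_valid_after)
print("Serial number:", hex(cert.serial_number))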
 The History of X.509
 A quick historical preface here is useful to explain some of the properties of X.509. X.509 is part of the X.500 directory standard owned by the International Telecommunications Union Telecommunications Standardization Sector (ITU-T). X.500 specifies a hierarchical directory useful for the X.400 set of messaging standards. As such, it includes a naming system (called distinguished naming) that describes entities by their position in some hierarchy. A sample X.500/X.400 name might look like this:

 CN=Joe Davis, OU=Human Resources, O=WidgetCo, C=US

 This name describes a person with a common name (CN) of Joe Davis who works in an organizational unit (OU) called Human Resources, in an organization called WidgetCo in the United States. These name components were intended to be run by their own directory components (so, for example, there would be Country directories that would point to Organizational directories, etc.), and this hierarchical description was ultimately reflected in the design of the X.509 system. Many of the changes made by IETF and other bodies that have evolved the X.509 standard were made to reconcile this hierarchical naming system with the more distributed nature of the Internet.
 The X.509 Certificate Model
 The X.509 model specifies a system of certifying authorities (CAs) that issue certificates for end entities (users, Web sites, or other entities that hold private keys). A CA-issued certificate will contain (among other data) the name of the end entity, the name of the CA, the end entity's public key, a validity period, and a certificate serial number. All this information is signed with the CA's private key. (Additional details on the information in a certificate and how it is encoded appear in the section on the X.509 Certificate Format.) To validate a certificate, a relying party uses the CA's public key to verify the signature on the certificate, checks that the time falls within the validity period, and may also perform some other online checks.
 This process leaves out one important detail: Where did the CA's public key come from? The answer is that another certificate is typically used to certify the public key of the CA. This “chaining” action of validating a certificate by using the public key from another certificate can be performed any number of times, allowing for arbitrarily deep hierarchies of CAs. Of course, this must terminate at some point, typically at a self-signed certificate that is trusted by the relying party. Trusted self-signed certificates are typically referred to as “root” certificates. Once the relying party has verified the chain of signatures from the end-entity certificate to a trusted root certificate, it can conclude that the end-entity certificate is properly signed and then move on to whatever other
 4 Kohnfelder, L., “Towards a practical public-key cryptosystem,” Bachelor's thesis, Department of Computer Science, Massachusetts Institute of Technology (June 1978).
 5 ITU-T Recommendation X.509 (1997 E): Information Technology – Open Systems Interconnection – The Directory: Authentication Framework, June 1997.
" }, { "page_number": 470, "text": "
validation steps (proper key usage fields, validity dates in some time window, etc.) are required to fully trust the certificate.
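 Expressed in code, this chaining step looks roughly like the following. It is a deliberately simplified sketch rather than a working validator: it assumes the third-party Python cryptography package, assumes every certificate in the chain carries an RSA signature with PKCS#1 v1.5 padding, and omits the validity-period, extension, policy, and revocation checks described later in this chapter.

# Simplified sketch of verifying a certificate chain up to a trusted,
# self-signed root. Assumes RSA signatures with PKCS#1 v1.5 padding and the
# third-party Python "cryptography" package; real validators also check
# dates, extensions, policies, and revocation status.
from cryptography.hazmat.primitives.asymmetric import padding

def verify_issued_by(cert, issuer_cert):
    # Raises InvalidSignature if issuer_cert's key did not sign cert.
    issuer_cert.public_key().verify(
        cert.signature,
        cert.tbs_certificate_bytes,
        padding.PKCS1v15(),
        cert.signature_hash_algorithm,
    )

def verify_chain(chain, trusted_root):
    # chain is ordered end entity first; the last element should have been
    # issued by trusted_root, which the relying party already trusts.
    for cert, issuer in zip(chain, chain[1:] + [trusted_root]):
        verify_issued_by(cert, issuer)
    verify_issued_by(trusted_root, trusted_root)  # a root signs itself

 In practice, applications delegate this work to a library or platform facility that implements the complete validation algorithm.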
Figure 26.4 shows the structure of a typical \ncertificate chain. \n One other element is required for this system to \nfunction securely: CAs must be able to “ undo ” a certi-\nfication action. Though a certificate binds an identity to \na key, there are many events that may cause that bind-\ning to become invalid. For example, a CA operated by a \nbank may issue a certificate to a newly hired employee \nthat gives that user the ability to sign messages as an \nemployee of the bank. If that person leaves the bank \nbefore the certificate expires, the bank needs some way \nof undoing that certification. The physical compromise \nof a private key is another circumstance that may require \ninvalidating a certificate. This is accomplished by a vali-\ndation protocol, where (in the abstract) a user examining \na certificate can ask the CA if a certificate is still valid. \nIn practice, revocation protocols are used that simulate \nthis action without actually contacting the CA. \n Root certificates are critical to the process of validat-\ning public keys through certificates. They must be inher-\nently trusted by the application, since no other certificate \nsigns these certificates. This is most commonly done by \ninstalling the certificates as part of the application that \nwill use the certificates under a set of root certificates. \nFor example, Internet Explorer uses X.509 certificates to \nvalidate keys used to make Secure Socket Layer (SSL) \nconnections. Internet Explorer has installed a large set \nof root certificates that can be examined by opening the \n Internet Options menu item and selecting Certificates \nin the Content tab of the Options dialog box. A list like \nthe one shown in Figure 26.5 will appear. \n This dialog box can also be used to inspect these root \ncertificates. The Microsoft Root certificate details look \nlike the ones shown in Figure 26.6 . \n The meaning of these fields is explored in subsequent \nparts of this chapter. \n 4. X.509 IMPLEMENTATION \nARCHITECTURES \n In theory, the Certification Authority is the entity that \ncreates and validates certificates, but in practice, it may \nbe desirable or necessary to delegate the actions of user \nauthentication and certificate validation to other serv-\ners. The security of the CA’s signing key is crucial to \nthe security of a PKI system. If we limit the functions of \nthe server that holds that key, it should be subject to less \nrisk of disclosure or illegitimate use. The X.509 archi-\ntecture defines a delegated server role, the Registration \nAuthority (RA), which allows delegation of authentica-\ntion. Subsequent extensions to the core X.509 architec-\nture have created a second delegated role, the Validation \nAuthority (VA), which answers queries about the valid-\nity of a certificate after creation. \n A Registration Authority is typically used to distrib-\nute the authentication function needed to issue a certifi-\ncate without needing to distribute the CA key. The RA’s \nSubject: ExampleCo RootCA\nIssuer: ExampleCo RootCA\nTrust originates with\nself-signed root\ncertificate\nRootCA key signs\nRegionalCA\ncertificate\nRegionalCA key\nsigns IssuerCA5\ncertificate\nIssuerCA5 key signs\nwww.example.com\ncertificate\nSubject: ExampleCo RegionalCA\nIssuer: ExampleCo RootCA\nSubject: ExampleCo IssuerCA5\nIssuer: ExampleCo RegionalCA\nSubject: www.example.com\nIssuer: ExampleCo IssuerCA5\n FIGURE 26.4 An example X.509 certificate chain. 
\n" }, { "page_number": 471, "text": "PART | III Encryption Technology\n438\n function is to perform the authentication needed to issue \na certificate, then send a signed statement containing the \nfact that it performed the authentication, the identity to \nbe certified, and the key to be certified. The CA validates \nthe RA’s message and issues a certificate in response. \n For example, a large multinational corporation wants \nto deploy a PKI system using a centralized CA. It wants to \nissue certificates on the basis of in-person authentication, \nso it needs some way to distribute authentication to multi-\nple locations in different countries. Copying and distribut-\ning the CA signing key creates a number of risks, not only \ndue to the fact that the CA key will be present on mul-\ntiple servers but also due to the complexities of creating \nand managing these copies. Sub-CAs could be created for \neach location, but this requires careful attention to control-\nling the identities allowed to be certified by each Sub-CA \n(otherwise, an attacker compromising one Sub-CA could \nissue a certificate for any identity he liked). One possible \nway to solve this problem is to create RAs at each loca-\ntion and have the CA check that the RA is authorized to \nauthenticate a particular employee when a certificate is \nrequested. If an attacker subverts a given RA signing key, \nhe can request certificates for employees in the purview \nof that RA, but it is straightforward, once discovered, to \ndeauthorize the RA, solve the security problem, and cre-\nate a new RA key. \n Validation Authorities are given the ability to revoke \ncertificates (the specific methods used to effect revoca-\ntion are detailed in the “ X.509 Revocation Protocols ” \nsection) and offload that function from the CA. \n Through judicious use of RAs and VAs, it is possible \nto construct certification architectures whereby the critical \nCA server is only accessible to a very small number of \n FIGURE 26.6 A view of the fields in an X.509 certificate using \nMicrosoft Internet Explorer. \n FIGURE 26.5 The Microsoft Internet Explorer trusted root certificates. \n" }, { "page_number": 472, "text": "Chapter | 26 Public Key Infrastructure\n439\n other servers, and network security controls can be used to \nreduce or eliminate threats from outside network entities. \n 5. X.509 CERTIFICATE VALIDATION \n X.509 certificate validation is a complex process and can \nbe done to several levels of confidence. This section out-\nlines a typical set of steps involved in validating a cer-\ntificate, but it is not an exhaustive catalog of the possible \nmethods that can be used. Various applications will often \nrequire different validation techniques, depending on \nthe application’s security policy. It is rare for an appli-\ncation to implement certificate validation, since there are \nseveral APIs and libraries available to perform this task. \nMicrosoft CryptoAPI, OpenSSL, and the Java JCE all \nprovide certificate validation interfaces. The Server-based \nCertificate Validity Protocol (SCVP) can also be used to \nvalidate a certificate. However, all these interfaces offer a \nvariety of options, and understanding the validation pro-\ncess is essential to properly using these interfaces. \n A complete specification of the certificate validation \nprocess would require hundreds of pages, so here we sup-\nply just a sketch of what happens during certificate vali-\ndation. 
It is not a complete description and is purposely \nsimplified. The certificate validation process typically \nproceeds in three steps and typically takes three inputs. \nThe first is the certificate to be validated, the second is \nany intermediate certificates acquired by the applications, \nand the third is a store containing the root and interme-\ndiate certificates trusted by the application. The follow-\ning steps are a simplified outline of how certificates are \ntypically validated. In practice, the introduction of bridge \nCAs and other nonhierarchical certification models have \nled to more complex validation procedures. IETF RFC \n3280 6 presents a complete specification for certificate \nvalidation, and RFC 4158 7 presents a specification for \nconstructing a certification path in environments where \nnonhierarchical certification structures are used. \n Validation Step 1: Construct the Chain and \nValidate Signatures \n The contents of the target certificate cannot be trusted \nuntil the signature on the certificate is validated, so the \nfirst step is to check the signature. To do so, the certifi-\ncate for the authority that signed the target certificate \nmust be located. This is done by searching the interme-\ndiate certificates and certificate store for a certificate \nwith a subject field that matches the issuer field of the \ntarget certificate. If multiple certificates match, the vali-\ndator can search the matching certificates for a Subject \nKey Identifier extension that matches the Issuer Key \nIdentifier extension in the candidate certificates. If mul-\ntiple certificates still match, the most recently issued \ncandidate certificate can be used. (Note that, because of \npotentially revoked intermediate certificates, multiple \nchains may need to be constructed and examined through \nSteps 2 and 3 to find the actual valid chain.) Once the \nproper authority certificate is found, the validator checks \nthe signature on the target certificate using the public key \nin the authority certificate. If the signature check fails, \nthe validation process can be stopped, and the target \ncertificate deemed invalid. \n If the signature matches and the authority certificate is \na trusted certificate, the constructed chain is then subjected \nto Steps 2 – 4. If not, the authority certificate is treated as a \ntarget certificate, and Step 1 is called recursively until it \nreturns a chain to a trusted certificate or fails. \n Constructing the complete certificate path requires \nthat the validator is in possession of all the certificates in \nthat path. This requires that the validator keep a database \nof intermediate certificates or that the protocol using the \ncertificate supply the needed intermediates. The Server \nCertificate Validation Protocol (SCVP) provides a mech-\nanism to request a certificate chain from a server, which \ncan eliminate these requirements. The SCVP protocol is \ndescribed in more detail in a subsequent section. \n Validation Step 2: Check Validity Dates, \nPolicy and Key Usage \n Once a chain has been constructed, various fields in the \ncertificate are checked to ensure that the certificate was \nissued correctly and that it is currently valid. The follow-\ning checks should be run on the candidate chain: \n The certificate chain times are correct. Each certificate \nin the chain contains a validity period with a not-before \nand not-after time. 
For applications outside validating the \nsignature on a document, the current time must fall after \nthe not-before time and before the not-after time. Some \napplications may require time nesting , meaning that the \nvalidity period for a certificate must fall entirely within \nthe validity period of the issuer’s certificate. It is up to \nthe policy of the application whether it treats out-of-date \ncertificates as invalid or treats them as warning cases that \n 6 R. Housely, W. Ford, W. Polk, and D. Solo, “ Internet X.509 pub-\nlic key infrastructure certifi cate and certifi cate revocation list profi le, ” \nIETF RFC 3280, April 2002. \n 7 M. Cooper, Y. Dzambasow, P. Hesse, S. Joseph, and R. Nicholas, \n “ Internet X.509 public key infrastructure: certifi cation path building, ” \nIETF RFC 4158, September 2005. \n" }, { "page_number": 473, "text": "PART | III Encryption Technology\n440\n can be overridden by the user. Applications may also \ntreat certificates that are not yet valid differently than cer-\ntificates that have expired. \n Applications that are validating the certificate on a \nstored document may have to treat validity time as the \ntime that the document was signed as opposed to the time \nthat the signature was checked. There are three cases of \ninterest. The first, and easiest, is where the document sig-\nnature is checked and the certificate chain validating the \npublic key contains certificates that are currently within \ntheir validity time interval. In this case, the validity times \nare all good, and verification can proceed. The second \ncase is where the certificate chain validating the public \nkey is currently invalid because one or more certificates \nare out of date and the document is believed to be signed \nat a time when the chain was out of date. In this case, the \nvalidity times are all invalid, and the user should be at \nleast warned. \n The ambiguous case arises when the certificate chain \nis currently out of date, but the chain is believed to have \nbeen valid with respect to the time when the document \nwas signed. Depending on its policy, the application can \ntreat this case in several different ways. It can assume \nthat the certificate validity times are strict, and fail to \nvalidate the document. Alternatively, it can assume that \nthe certificates were good at the time of signing, and \nvalidate the document. The application can also take \nsteps to ensure that this case does not occur by using a \ntime-stamping mechanism in conjunction with signing \nthe document or provide some mechanism for resigning \ndocuments before certificate chains expire. \n Once the certificate chain has been constructed, the \nverifier must also verify that various X.509 extension \nfields are valid. Some common extensions that are rel-\nevant to the validity of a certificate path are: \n ● BasicConstraints. This extension is required for \nCAs and limits the depth of the certificate chain \nbelow a specific CA certificate. \n ● NameConstraints. This extension limits the \nnamespace of identities certified underneath the \ngiven CA certificate. This extension can be used \nto limit a specific CA to issuing certificates for a \ngiven domain or X.400 namespace. \n ● KeyUsage and ExtendedKeyUsage. These \nextensions limit the purposes for which a certified \nkey can be used. CA certificates must have \n KeyUsage set to allow certificate signing. Various \nvalues of ExtendedKeyUsage may be required for \nsome certification tasks. 
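 In code, the date and extension checks just described reduce to a few comparisons per certificate. The sketch below is illustrative only: it assumes the third-party Python cryptography package, a chain ordered from end entity to root, strict rejection of out-of-date certificates, and the presence of the BasicConstraints and KeyUsage extensions in every CA certificate; a production validator would follow the complete RFC 3280 algorithm and apply application policy for the ambiguous cases discussed above.

# Simplified date/extension checks on an already constructed chain
# (end entity first, root last). Illustrative only; assumes the Python
# "cryptography" package and that CA certificates carry BasicConstraints
# and KeyUsage. Policy extensions and time nesting are not handled here.
import datetime
from cryptography import x509

def check_dates_and_extensions(chain, at_time=None):
    now = at_time or datetime.datetime.utcnow()
    for depth, cert in enumerate(chain):
        # Every certificate must be within its validity period.
        if not (cert.not_valid_before <= now <= cert.not_valid_after):
            raise ValueError(f"certificate {depth} outside validity period")
        if depth == 0:
            continue  # end-entity certificate: no CA checks
        # Issuing certificates must be CAs that are allowed to sign certificates.
        bc = cert.extensions.get_extension_for_class(x509.BasicConstraints).value
        if not bc.ca:
            raise ValueError(f"certificate {depth} is not a CA certificate")
        if bc.path_length is not None and depth - 1 > bc.path_length:
            raise ValueError(f"certificate {depth} exceeds its path length")
        ku = cert.extensions.get_extension_for_class(x509.KeyUsage).value
        if not ku.key_cert_sign:
            raise ValueError(f"certificate {depth} may not sign certificates")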
\n Validation Step 3: Consult Revocation \nAuthorities \n Once the verifier has concluded that it has a suitably \nsigned certificate chain with valid dates and proper key-\nUsage extensions, it may want to consult the revocation \nauthorities named in each certificate to check that the \ncertificates are currently valid. Certificates may contain \nextensions that point to Certificate Revocation List (CRL) \nstorage locations or to Online Certificate Status Protocol \n(OCSP) responders. These methods allow the veri-\nfier to check that a CA has not revoked the certificate in \nquestion. \n The next section discusses these methods in more \ndetail. Note that each certificate in the chain may need to \nbe checked for revocation status. The following section \non certificate revocation details the mechanisms used to \nrevoke certificates. \n 6. X.509 CERTIFICATE REVOCATION \n Since certificates are typically valid for a significant \nperiod of time, it is possible that during the validity \nperiod of the certificate, a key may be lost or stolen, an \nidentity may change, or some other event may occur that \ncauses a certificate’s identity binding to become invalid \nor suspect. To deal with these events, it must be possible \nfor a CA to revoke a certificate, typically by some kind of \nnotification that can be consulted by applications examin-\ning the validity of a certificate. Two mechanisms are used \nto perform this task: Certificate Revocation Lists (CRLs) \nand the Online Certificate Status Protocol (OCSP). \n The original X.509 architecture implemented revoca-\ntion via a CRL, a periodically issued document contain-\ning a list of certificate serial numbers that are revoked by \nthat CA. X.509 has defined two basic CRL formats, V1 \nand V2. When CA certificates are revoked by a higher-\nlevel CA, the serial number of the CA certificate is \nplaced on an Authority Revocation List (ARL), which \nis formatted identically to a CRL. CRLs and ARLs, as \ndefined in X.509 and IETF RFC 3280, are ASN.1 \nencoded objects that contain the information shown in \n Table 26.3 . \n This header is followed by a sequence of revoked \ncertificate records. Each record contains the information \nshown in Table 26.4 . \n The list of revoked certificates is optionally followed \nby a set of CRL extensions that supply additional infor-\nmation about the CRL and how it should be processed. \nTo process a CRL, the verifying party checks that the \n" }, { "page_number": 474, "text": "Chapter | 26 Public Key Infrastructure\n441\n CRL has been signed with the key of the named issuer \nand that the current date is between the thisUpdate time \nand the nextUpdate time. This time check is crucial \nbecause if it is not performed, an attacker could use a \nrevoked certificate by supplying an old CRL where the \ncertificate had not yet appeared. Note that expired cer-\ntificates are typically removed from the CRL, which pre-\nvents the CRL from growing unboundedly over time. \n Note that CRLs can only revoke certificates on time \nboundaries determined by the nextUpdate time. If a CA \npublishes a CRL every Monday, for example, a certifi-\ncate that is compromised on a Wednesday will continue \nto validate until its serial number is published in the CRL \non the following Monday. Clients validating certificates \nmay have downloaded the CA’s CRL on Monday and are \nfree to cache the CRL until the nextUpdate time occurs. 
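 The CRL processing just described can be sketched as follows. This is an illustration rather than a complete revocation checker; it assumes the third-party Python cryptography package and a DER-encoded CRL that has already been downloaded, for example from the location named in the certificate's CRLDistributionPoints extension.

# Illustrative CRL check: verify the CRL's signature and time window, then
# look up the certificate's serial number. Assumes the Python "cryptography"
# package and a DER-encoded CRL already fetched by the caller.
import datetime
from cryptography import x509

def is_revoked(cert, crl_der, crl_issuer_cert):
    crl = x509.load_der_x509_crl(crl_der)
    if not crl.is_signature_valid(crl_issuer_cert.public_key()):
        raise ValueError("CRL was not signed by the expected issuer")
    now = datetime.datetime.utcnow()
    # The thisUpdate/nextUpdate window check is crucial, as noted above.
    if now < crl.last_update or (crl.next_update and now > crl.next_update):
        raise ValueError("CRL is not yet valid or is out of date")
    entry = crl.get_revoked_certificate_by_serial_number(cert.serial_number)
    return entry is not None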
\nThis caching is important because it means that the CRL \nis only downloaded once per client per publication period \nrather than for every certificate validation. However, it \nhas the unavoidable consequence of having a potential \ntime lag between a certificate becoming invalid and its \nappearance on a CRL. The online certificate validation \nprotocols detailed in the next section attempt to solve \nthis problem. \n The costs of maintaining and transmitting CRLs \nto verifying parties has been repeatedly identified as \nan important component of the cost of running a PKI \nsystem, 8 , 9 and several alternative revocation schemes have \nbeen proposed to lower this cost. The cost of CRL distri-\nbution was also a factor in the emergence of online certif-\nicate status-checking protocols such as OCSP and SCVP. \n Delta CRLs \n In large systems that issue many certificates, CRLs can \npotentially become quite lengthy. One approach to reduc-\ning the network overhead associated with sending the \ncomplete CRL to every verifier is to issue a Delta CRL \nalong with a Base CRL. The Base CRL contains the com-\nplete set of revoked certificates up to some point in time, \nand the accompanying Delta CRL contains only the addi-\ntional certificates added over some time period. Clients \nthat are capable of processing the Delta CRL can then \ndownload the Base CRL less frequently and download \nthe smaller Delta CRL to get recently revoked certificates. \nDelta CRLs are formatted identically to CRLs but have a \ncritical extension added in the CRL that denotes that they \nare a Delta, not a Base, CRL. IETF RFC 3280 10 details \nthe way Delta CRLs are formatted and the set of certifi-\ncate extensions that indicate that a CA issues Delta CRLs. \n Online Certificate Status Protocol \n The Online Certificate Status Protocol (OCSP) was \ndesigned with the goal of reducing the costs of CRL trans-\nmission and eliminating the time lag between certificate \ninvalidity and certificate revocation inherent in CRL-based \ndesigns. The idea behind OCSP is straightforward. A CA \ncertificate contains a reference to an OCSP server. A cli-\nent validating a certificate transmits the certificate serial \nnumber, a hash of the issuer name, and a hash of the sub-\nject name, to that OCSP server. The OCSP server checks \nthe certificate status and returns an indication as to the \ncurrent status of the certificate. This removes the need to \ndownload the entire list of revoked certificates and allows \nfor essentially instantaneous revocation of invalid certifi-\ncates. It has the design tradeoff of requiring that clients \nvalidating certificates have network connectivity to the \nrequired OCSP server. \n 8 Shimshon Berkovits, Santosh Chokhani, Judith A. Furlong, Jisoo A. \nGeiter, and Jonathan C. Guild, Public Key Infrastructure Study: Final \nReport , produced by the MITRE Corporation for NIST, April 1994. \n 9 S. Micali, “ Effi cient certifi cate revocation, ” technical report TM-\n542b, MIT Laboratory for Computer Science, March 22, 1996. http://\nciteseer.ist.psu.edu/micali96effi cient.html . \n 10 R. Housely, W. Ford, W. Polk, and D. Solo, “ Internet X.509 pub-\nlic key infrastructure certifi cate and certifi cate revocation list profi le, ” \nIETF RFC 3280, April 2002. \n TABLE 26.3 Data fields in an X.509 CRL \n Version \n Specifies the format of the CRL. 
\nCurrent version is 2 \n SignatureAlgorithm \n Specifies the algorithm used to sign \nthe CRL \n Issuer \n Name of the CA issuing the CRL \n thisUpdate \n Time from when this CRL is valid \n nextUpdate \n Time when the next CRL will be \nissued \n TABLE 26.4 Format of a revocation record in an \nX.509 CRL \n Serial Number \n Serial number of a revoked certificate \n Revocation Date \n Date the revocation is effective \n CRL Extensions \n [Optional] specifies why the \ncertificate is revoked \n" }, { "page_number": 475, "text": "PART | III Encryption Technology\n442\n OCSP responses contain the basic information as \nto the status of the certificate, in the set of “ good, ” \n “ revoked, ” or “ unknown. ” They also contain a thisUpdate \ntime, similarly to a CRL, and are signed. Responses can \nalso contain a nextUpdate time, which indicates how long \nthe client can consider the OCSP response definitive. The \nreason the certificate was revoked can also be returned in \nthe response. OCSP is defined in IETF RFC 2560. 11 \n 7. SERVER-BASED CERTIFICATE VALIDITY \nPROTOCOL \n The X.509 certificate path construction and validation \nprocess requires a nontrivial amount of code, the ability \nto fetch and cache CRLs, and, in the case of mesh and \nbridge CAs, the ability to interpret CA policies. The \nServer-based Certificate Validity Protocol 12 was designed \nto reduce the cost of using X.509 certificates by allowing \napplications to delegate the task of certificate validation \nto an external server. SCVP offers two levels of function-\nality: Delegated Path Discovery (DPD), which attempts \nto locate and construct a complete certificate chain for a \ngiven certificate, and Delegated Path Validation (DPV), \nwhich performs a complete path validation, including rev-\nocation checking, on a certificate chain. The main reason \nfor this division of functionality is that a client can use an \nuntrusted SCVP server for DPD operations, since it will \nvalidate the resulting path itself. Only trusted SCVP serv-\ners can be used for DPV, since the client must trust the \nserver’s assessment of a certificate’s validity. \n SCVP also allows certificate checking according to \nsome defined certification policy. It can be used to cen-\ntralize policy management for an organization that wants \nall clients to follow some set of rules with respect to \nwhat sets of CAs or certification policies are trusted and \nso on. To use SCVP, the client sends a query to an SCVP \nserver that contains the following parameters: \n ● QueriedCerts . This is the set of certificates for which \nthe client wants the server to construct (and option-\nally validate) paths. \n ● Checks . The Checks parameter specifies what the \nclient wants the server to do. This parameter can be \nused to specify that the server should build a path, \nshould build a path and validate it without checking \nrevocation, or should build and fully validate \nthe path. \n ● WantBack . The WantBack parameter specifies what \nthe server should return from the request. This \ncan range from the public key from the validated \ncertificate path (in which case the client is fully \ndelegating certificate validation to the server) to all \ncertificate chains that the server can locate. \n ● ValidationPolicy . The ValidationPolicy parameter \ninstructs the server how to validate the resultant \ncertification chain. 
This parameter can be as simple \nas “ Use the default RFC 3280 validation algorithm ” \nor can specify a wide range of conditions that must \nbe satisfied. Some of the conditions that can be \nspecified with this parameter are: \n – KeyUsage and Extended Key Usage . The \nclient can specify a set of KeyUsage or \n ExtendedKeyUsage fields that must be present in \nthe end-entity certificate. This allows the client \nto only accept, for example, certificates that are \nallowed to perform digital signatures. \n – UserPolicySet . The client can specify a set of cer-\ntification policy OIDs that must be present in the \nCAs used to construct the chain. CAs can assert \nthat they follow some formally defined policy \nwhen issuing certificates, and this parameter \nallows the client to only accept certificates issued \nunder some set of these policies. For example, if \na client wanted to only accept certificates accept-\nable under the Medium Assurance Federal Bridge \nCA policies, it could assert that policy identifier \nin this parameter. For more information on policy \nidentifiers, see the section on X.509 extensions. \n – InhibitPolicyMapping . When issuing bridge or \ncross-certificates, a CA can assert that a certifi-\ncate policy identifier in one domain is equivalent \nto some other policy identifier within its domain. \nUsing this parameter, the client can state that it \ndoes not want to allow these policy equivalences \nto be used in validating certificates against values \nin the UserPolicySet parameter. \n – TrustAnchors . The client can use this parameter \nto specify some set of certificates that must be \nat the top of any acceptable certificate chain. By \nusing this parameter a client could, for example, \nsay that only VeriSign Class 3 certificates were \nacceptable in this context. \n – ResponseFlags . This specifies various options as \nto how the server should respond (if it needs to \nsign or otherwise protect the response) and if a \ncached response is acceptable to the client. \n 11 M. Myers, R. Ankeny, A. Malpani, S. Galperin, and C. Adams, \n “ X.509 Internet public key infrastructure: Online Certifi cate Status \nProtocol – OCSP, ” IETF RFC 2560, June 1999. \n 12 T. Freeman, R. Housely, A. Malpani, D. Cooper, and W. Polk, \n “ Server-Based Certifi cate Validation Protocol (SCVP), ” IETF RFC \n5055, December 2007. \n" }, { "page_number": 476, "text": "Chapter | 26 Public Key Infrastructure\n443\n – ValidationTime . The client may want a valida-\ntion performed as though it was a specific time so \nthat it can find out whether a certificate was valid \nat some point in the past. Note that SCVP does \nnot allow for “ speculative ” validation in terms of \nasking whether a certificate will be valid in the \nfuture. This parameter allows the client to specify \nthe validation time to be used by the server. \n – IntermediateCerts . The client can use this \nparameter to give additional certificates that can \npotentially be used to construct the certificate \nchain. The server is not obligated to use these \ncertificates. This parameter is used where the \nclient may have received a set of intermediate \ncertificates from a communicating party and is \nnot certain that the SCVP server has possession \nof these certificates. \n – RevInfos . Like the IntermediateCerts parameter, \nthe RevInfos parameter supplies extra information \nthat may be needed to construct or validate the \npath. 
Instead of certificates, the RevInfos parame-\nter supplies revocation information such as OCSP \nresponses, CRLs, or Delta CRLs. \n 8. X.509 BRIDGE CERTIFICATION \nSYSTEMS \n In practice, large-scale PKI systems proved to be more \ncomplex than could be easily handled under the X.509 \nhierarchical model. For example, Polk and Hastings 13 \nidentified a number of policy complexities that presented \ndifficulties when attempting to build a PKI system for \nthe U.S. federal government. In this case, certainly one \nof the largest PKI projects ever undertaken, they found \nthat the traditional model of a hierarchical certification \nsystem was simply unworkable. They state: \n The initial designs for a federal PKI were hierarchical \nin nature because of government’s inherent hierarchical \norganizational structure. However, these initial PKI plans \nran into several obstacles. There was no clear organization \nwithin the government that could be identified and agreed \nupon to run a governmental “ root ” CA. While the search for \nan appropriate organization dragged on, federal agencies \nbegan to deploy autonomous PKIs to enable their electronic \nprocesses. The search for a “ root ” CA for a hierarchical \nfederal PKI was abandoned, due to the difficulties of impos-\ning a hierarchy after the fact. \n Their proposed solution to this problem was to \nuse a “ mesh CA ” system to establish a Federal Bridge \nCertification Authority. This Bridge architecture has \nsince been adopted in large PKI systems in Europe and \nthe financial services community in the United States. \nThe details of the European Bridge CA can be found at \n www.bridge-ca.org . This part of the chapter details the \ntechnical design of bridge CAs and the various X.509 \ncertificate features that enable bridges. \n Mesh PKIs and Bridge CAs \n Bridge CA architectures are implemented using a non-\nhierarchical certification structure called a Mesh PKI . \nThe classic X.509 architecture joins together multiple \nPKI systems by subordinating them under a higher-level \nCA. All certificates chain up to this CA, and that CA \nessentially creates trust between the CAs below it. Mesh \nPKIs join together multiple PKI systems using a process \ncalled cross-certification that does not create this type of \nhierarchy. To cross-certify, the top-level CA in a given \nhierarchy creates a certificate for an external CA, called \nthe bridge CA . This bridge CA then becomes, in a man-\nner of speaking, a Sub-CA under the organization’s CA. \nHowever, the bridge CA also creates a certificate for the \norganizational CA, so it can also be viewed as a top-\nlevel CA certifying that organizational CA. \n The end result of this cross-certification process is \nthat if two organizations, A and B, have joined the same \nbridge CA, they can both create certificate chains from \ntheir respective trusted CAs through the other organiza-\ntion’s CA to the end-entity certificates that it has created. \nThese chains will be longer than traditional hierarchical \nchains but will have the same basic verifiable properties. \n Figure 26.7 shows how two organizations might be con-\nnected through a bridge CA and what the resultant cer-\ntificate chains look like. \n In the case illustrated in the diagram, a user that trusts \ncertificates issued by PKI A (that is, PKI A Root is a \n “ trust anchor ” ) can construct a chain to certificates issued \nby the PKI B Sub-CA, since it can verify Certificate 2 \nvia its trust of the PKI A Root. 
Certificate 2 then chains \nto Certificate 3, which chains to Certificate 6. Certificate \n6 is then a trusted issuer certificate for certificates issued \nby the PKI B Sub-CA. \n Mesh architectures create two significant technical \nproblems: path construction and policy evaluation. In \na hierarchical PKI system, there is only one path from \nthe root certificate to an end-entity certificate. Creating \na certificate chain is as simple as taking the current cer-\ntificate, locating the issuer in the subject field of another \n 13 W.T. Polk and N.E. Hastings, “ Bridge certifi cation authorities: \nconnecting B2B public key infrastructures, ” white paper, U.S. National \nInstitute of Standards and Technology, 2001, Available at http://csrc.\nnist.gov/groups/ST/crypto_apps_infra/documents/B2B-article.pdf . \n" }, { "page_number": 477, "text": "PART | III Encryption Technology\n444\n certificate, and repeating until the root is reached (com-\npleting the chain) or no certificate can be found (failing \nto construct the chain). In a mesh system, there can now \nbe cyclical loops in which this process can fail to termi-\nnate with a failure or success. This is not a difficult prob-\nlem to solve, but it is more complex to deal with than the \nhierarchical case. \n Policy evaluation becomes much more complex in \nthe mesh case. In the hierarchical CA case, the top-level \nCA can establish policies that are followed by Sub-CAs, \nand these policies can be encoded into certificates in \nan unambiguous way. When multiple PKIs are joined \nby a bridge CA, these PKIs may have similar policies \nbut can be expressed with different names. PKI A and \nPKI B may both certify “ medium assurance ” CAs that \nperform a certain level of authentication before issuing \ncertificates, but they may have different identifiers for \nthese policies. When joined by a bridge CA, clients may \nreasonably want to validate certificates issued by both \nCAs and understand the policies under which those cer-\ntificates are issued. The PolicyMapping technique allows \nsimilar policies under different names from disjoint PKIs \nto be translated at the bridge CA. \n Though none of these problems are insurmountable, \nthey increase the complexity of certificate validation \ncode and helped drive the invention of server-based vali-\ndation protocols such as SCVP. These protocols delegate \npath discovery and validation to an external server rather \nthan require that applications integrate this functionality. \nThough this may lower application complexity, the main \nbenefit of this strategy is that questions of acceptable \npolicies and translation can be configured at one central \nverification server rather than distributed to every appli-\ncation doing certificate validation. \n 9. X.509 CERTIFICATE FORMAT \n The X.509 standard (and the related IETF RFCs) specify \na set of data fields that must be present in a properly for-\nmatted certificate, a set of optional extension data fields \nthat can be used to supply additional certificate informa-\ntion, how these fields must be signed, and how the sig-\nnature data is encoded. All these data fields (mandatory \nfields, optional fields, and the signature) are specified in \nAbstract Syntax Notation (aka ASN.1), a formal language \nthat allows for exact definitions of the content of data \nfields and how those fields are arranged in a data struc-\nture. 
An associated specification, Distinguished Encoding Rules (DER), is used with specific certificate data and the ASN.1 certificate format to create the actual binary certificate data. ASN.1 itself is authoritatively defined in ITU-T Recommendation X.680, and its encoding rules, including DER, are defined in ITU-T Recommendation X.690. (For an introduction to ASN.1 and DER, see Kaliski, November 1993.14)

FIGURE 26.7 Showing the structure of two PKIs connected via a bridge CA. (The figure depicts PKI A and PKI B joined through a bridge CA by six certificates: Certificate 1 (Issuer: Bridge CA, Subject: PKI A Root); Certificate 2 (Issuer: PKI A Root, Subject: Bridge CA); Certificate 3 (Issuer: Bridge CA, Subject: PKI B Root); Certificate 4 (Issuer: PKI B Root, Subject: Bridge CA); Certificate 5 (Issuer: PKI A Root, Subject: PKI A Sub-CA); Certificate 6 (Issuer: PKI B Root, Subject: PKI B Sub-CA).)

14 B.S. Kaliski, Jr., "A layman's guide to a subset of ASN.1, BER, and DER," an RSA technical note, revised November 1, 1993, available at ftp.rsasecurity.com/pub/pkcs/ascii/layman.asc.

X.509 V1 and V2 Format

The first X.509 certificate standard was published in 1988 as part of the broader X.500 directory standard. X.509 was intended to provide public key-based access control to an X.500 directory and defined a certificate format for that use. This format, now referred to as X.509 v1, defined a static format containing an X.400 issuer name (the name of the CA), an X.400 subject name, a validity period, the key to be certified, and the signature of the CA. Though this basic format allowed for all the basic PKI operations, it required that all names be in the X.400 form, and it did not allow any other information to be added to the certificate. The X.509 v2 format added two Unique ID fields but did not fix the primary deficiencies of the v1 format. As it became clear that name formats would have to be more flexible and certificates would have to accommodate a wider variety of information, work began on a new certificate format.

X.509 V3 Format

The X.509 certificate specification was revised in 1996 to add an optional extension field that allows a set of additional data fields to be encoded into the certificate (see Table 26.5). Though this change might seem minor, it allowed certificates to carry a wide array of information useful for PKI implementation and allowed the certificate to contain multiple, non-X.400 identities. These extension fields allow key usage policies, CA policy information, revocation pointers, and other relevant information to live in the certificate. The v3 format is the most widely used X.509 variant and is the basis for the certificate profile in RFC 3280,15 issued by the Internet Engineering Task Force.

X.509 Certificate Extensions

This section contains a partial catalog of common X.509 v3 extensions. There is no canonical directory of v3 extensions, so there are undoubtedly extensions in use outside this list.

The most common extensions are defined in RFC 3280,16 which contains the IETF certificate profile used by S/MIME and many SSL/TLS implementations. These extensions address a number of deficiencies in the base X.509 certificate specification and, in many cases, are essential for constructing a practical PKI system.
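As a concrete illustration of the certificate fields and v3 extensions just described, the following sketch parses a PEM-encoded certificate and lists its core fields (compare Table 26.5) and its extension OIDs. It assumes the third-party pyca/cryptography package, which this chapter does not itself prescribe, and the file name is hypothetical.

```python
# Illustrative only; assumes the third-party "cryptography" package is installed.
# Older releases of that package require an explicit backend argument to
# load_pem_x509_certificate; recent releases do not.
from cryptography import x509

with open("example-cert.pem", "rb") as f:          # hypothetical file name
    cert = x509.load_pem_x509_certificate(f.read())

# Core fields of the v3 certificate structure (cf. Table 26.5).
print("Version:      ", cert.version)
print("Serial number:", cert.serial_number)
print("Issuer:       ", cert.issuer.rfc4514_string())
print("Subject:      ", cert.subject.rfc4514_string())
print("Validity:     ", cert.not_valid_before, "to", cert.not_valid_after)

# The optional v3 extension list: key usage, policies, revocation pointers, etc.
for ext in cert.extensions:
    print(f"{ext.oid.dotted_string}  critical={ext.critical}  {ext.value}")
```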
In par-\nticular, the Certificate Policy, Policy Mapping, and Policy \nConstraints extensions form the basis for the popular \nbridge CA architectures. \n Authority Key Identifier \n The Authority Key Identifier extension identifies which \nspecific private key owned by the certificate issuer was \nused to sign the certificate. The use of this extension \nallows a single issuer to use multiple private keys and \nunambiguously identifies which key was used. This allows \nissuer keys to be refreshed without changing the issuer \nname and enables handling of events such as an issuer key \nbeing compromised or lost. \n Subject Key Identifier \n The Subject Key Identifier extension, like the Authority \nKey Identifier, indicates which subject key is contained in \nthe certificate. This extension provides a way to quickly \nidentify which certificates belong to a specific key owned \nby a subject. If the certificate is a CA certificate, the \nSubject Key Identifier can be used to construct chains \nby connecting a Subject Key Identifier with a matching \nAuthority Key Identifier. \n 15 R. Housely, W. Ford, W. Polk, and D. Solo, “ Internet X.509 pub-\nlic key infrastructure certifi cate and certifi cate revocation list profi le, ” \nIETF RFC 3280, April 2002. \n 16 R. Housely, W. Ford, W. Polk, and D. Solo, “ Internet X.509 pub-\nlic key infrastructure certifi cate and certifi cate revocation list profi le, ” \nIETF RFC 3280, April 2002. \n TABLE 26.5 Data fields in an X.509 version 3 \ncertificate \n Version \n The version of the standard used to \nformat the certificate \n Serial Number \n A number, unique relative to the \nissuer, for this certificate \n Signature Algorithm \n The specific algorithm used to sign \nthe certificate \n Issuer \n Name of the authority issuing the \ncertificate \n Validity \n The time interval this certificate is \nvalid for \n Subject \n The identity being certified \n Subject Public Key \n The key being bound to the subject \n Issuer Unique ID \n Obsolete field \n Subject Unique ID \n Obsolete field \n Extensions \n A list of additional certificate \nattributes \n Signature \n A digital signature by the issuer over \nthe certificate data \n" }, { "page_number": 479, "text": "PART | III Encryption Technology\n446\n Key Usage \n A CA might want to issue a certificate that limits the use \nof a public key. This could lead to an increase in over-\nall system security by segregating encryption keys from \nsignature keys and even segregating signature keys by \nutilization. For example, an entity may have a key used \nfor signing documents and a key used for decryption of \ndocuments. The signing key may be protected by a smart \ncard mechanism that requires a PIN per signing, whereas \nthe encryption key is always available when the user is \nlogged in. The use of this extension allows the CA to \nexpress that the encryption key cannot be used to gen-\nerate signatures and notifies communicating users that \nthey should not encrypt data with the signing public key. \n The usage capabilities are defined in a bit field, \nwhich allows a single key to have any combination of the \ndefined capabilities. The extension defines the following \ncapabilities: \n ● digitalSignature . The key can be used to generate \ndigital signatures. \n ● nonRepudiation . Signatures generated from this key \ncan be tied back to the signer in such a way that the \nsigner cannot deny generating the signature. 
This \ncapability is used in electronic transaction scenarios \nin which it is important that signers cannot disavow \na transaction. \n ● keyEncipherment . The key can be used to wrap \na symmetric key that is then used to bulk-encrypt \ndata. This is used in communications protocols and \napplications such as S/MIME in which an algorithm \nlike AES is used to encrypt data, and the public key \nin the certificate is used to then encipher that AES \nkey. In practice, almost all encryption applications \nare structured in this manner, since public keys are \ngenerally unsuitable for the encryption of bulk data. \n ● dataEncipherment . The key can be used to directly \nencrypt data. Because of algorithmic limitations of \npublic encryption algorithms, the keyEncipherment \ntechnique is nearly always used instead of directly \nencrypting data. \n ● keyAgreement . The key can be used to create a \ncommunication key between two parties. This \ncapability can be used in conjunction with the \n encipherOnly and decipherOnly capabilities. \n ● keyCertSign . The key can be used to sign another \ncertificate. This is a crucial key usage capability \nbecause it essentially allows creation of subcertificates \nunder this certificate, subject to basicConstraints . All \nCA certificates must have this usage bit set, and all \nend-entity certificates must not have it set. \n ● cRLSign . The key can be used to sign a CRL. \nCA certificates may have this bit set, or they may \ndelegate CRL creation to a different key, in which \ncase this bit will be cleared. \n ● encipherOnly . When the key is used for \n keyAgreement , the resultant key can only be used for \nencryption. \n ● decipherOnly . When the key is used for \n keyAgreement , the resultant key can only be used for \ndecryption. \n Subject Alternative Name \n This extension allows the certificate to define non-X.400 \nformatted identities for the subject. It supports a variety \nof namespaces, including email addresses, DNS names \nfor servers, Electronic Document Interchange (EDI) party \nnames, Uniform Resource Identifiers (URIs), and IP \naddresses, among others. \n Policy Extensions \n Three important X.509 certificate extensions (Certificate \nPolicy, Policy Mapping, and Policy Constraints) form a \ncomplete system for communicating CA policies for the \nway that certificates are issued and revoked and CA secu-\nrity is maintained. They are interesting in that they com-\nmunicate information that is more relevant to business \nand policy decision-making than the other extensions that \nare used in the technical processes of certificate chain \nconstruction and validation. As an example, a variety of \nCAs run multiple Sub-CAs that issue certificates accord-\ning to a variety of issuance policies, ranging from “ Low \nAssurance ” to “ High Assurance. ” The CA will typically \nformally define in a policy document all of its operat-\ning policies, state them in a practice statement, define an \nASN.1 Object Identifier (OID) that names this policy, and \ndistribute it to parties that will validate those certificates. \n The policy extensions allow a CA to attach a policy \nOID to its certificate, translate policy OIDs between \nPKIs, and limit the policies that can be used by Sub-CAs. 
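Before turning to the individual policy extensions, the key-usage capabilities cataloged above can be made concrete with a small sketch. This models the bit field and the keyCertSign rule for CA versus end-entity certificates in plain Python; the flag values are illustrative and are not the DER encoding a real certificate uses.

```python
from enum import IntFlag

class KeyUsage(IntFlag):
    """Illustrative model of the X.509 keyUsage capabilities as a bit field."""
    DIGITAL_SIGNATURE = 1 << 0
    NON_REPUDIATION   = 1 << 1
    KEY_ENCIPHERMENT  = 1 << 2
    DATA_ENCIPHERMENT = 1 << 3
    KEY_AGREEMENT     = 1 << 4
    KEY_CERT_SIGN     = 1 << 5
    CRL_SIGN          = 1 << 6
    ENCIPHER_ONLY     = 1 << 7
    DECIPHER_ONLY     = 1 << 8

def usage_is_consistent(usage: KeyUsage, is_ca: bool) -> bool:
    """CA certificates must assert keyCertSign; end-entity certificates must not."""
    has_cert_sign = bool(usage & KeyUsage.KEY_CERT_SIGN)
    return has_cert_sign if is_ca else not has_cert_sign

# A signing-only end-entity key and a typical CA key.
signing_key = KeyUsage.DIGITAL_SIGNATURE | KeyUsage.NON_REPUDIATION
ca_key      = KeyUsage.KEY_CERT_SIGN | KeyUsage.CRL_SIGN

print(usage_is_consistent(signing_key, is_ca=False))  # True
print(usage_is_consistent(ca_key,      is_ca=True))   # True
print(usage_is_consistent(signing_key, is_ca=True))   # False: a CA without keyCertSign
```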
\n Certificate Policy \n The Certificate Policy extension, if present in an issuer \ncertificate, expresses the policies that are followed by the \nCA, both in terms of how identities are validated before \ncertificate issuance as well as how certificates are revoked \nand the operational practices that are used to ensure integ-\nrity of the CA. These policies can be expressed in two \nways: as an OID, which is a unique number that refers \n" }, { "page_number": 480, "text": "Chapter | 26 Public Key Infrastructure\n447\n to one given policy, and as a human-readable Certificate \nPractice Statement (CPS). One Certificate Policy exten-\nsion can contain both the computer-sensible OID and a \nprintable CPS. One special OID has been set aside for \n AnyPolicy , which states that the CA may issue certifi-\ncates under a free-form policy. \n IETF RFC 2527 17 gives a complete description of \nwhat should be present in a CA policy document and \nCPS. More details on the 2527 guidelines are given in the \n “ PKI Policy Description ” section. \n Policy Mapping \n The Policy Mapping extension contains two policy \nOIDs, one for the issuer domain, the other for the sub-\nject domain. When this extension is present, a validating \nparty can consider the two policies identical, which is to \nsay that the subject OID, when present in the chain below \nthe given certificate, can be considered to be the same as \nthe policy named in the issuer OID. This extension is \nused to join together two PKI systems with functionally \nsimilar policies that have different policy reference OIDs. \n Policy Constraints \n The Policy Constraints extension enables a CA to disable \npolicy mapping for CAs farther down the chain and to \nrequire explicit policies in all the CAs below a given CA. \n 10. PKI POLICY DESCRIPTION \n In many application contexts, it is important to under-\nstand how and when certifying authorities will issue and \nrevoke certificates. Especially when bridge architectures \nare used, an administrator may need to evaluate a certi-\nfying authority’s policy to determine how and when to \ntrust certificates issued under that authority. For example, \nthe U.S. Federal Bridge CA maintains a detailed specifi-\ncation of its operating procedures and requirements for \nbridged CAs at the U.S. CIO office Web site ( www.cio.\ngov/fpkipa/documents/FBCA_CP_RFC3647.pdf ). Many \nother commercial CAs, such as VeriSign, maintain similar \ndocuments. \n To make policy evaluation easier and more uni-\nform, IETF RFC 2527 18 specifies a standard format for \ncertifying authorities to communicate their policy for \nissuing and revoking certificates. This specification \ndivides a policy specification document into the follow-\ning sections: \n ● Introduction. This section describes the type of cer-\ntificates that the CA issues, the applications in which \nthose certificates can be used, and the OIDs used to \nidentify CA policies. The Introduction also contains \nthe contact information for the institution operating \nthe CA. \n ● General Provisions. This section details the legal \nobligations of the CA, any warranties given as to \nthe reliability of the bindings in the certificate, and \ndetails as to the legal operation of the CA, including \nfees and relationship to any relevant laws. \n ● Identification and Authentication. This section \ndetails how certificate requests are authenticated at \nthe CA or RA and how events like name disputes or \nrevocation requests are handled. \n ● Operational Requirements. 
This section details \nhow the CA will react in case of key compromise, \nhow it renews keys, how it publishes CRLs or other \nrevocation information, how it is audited, and what \nrecords are kept during CA operation. \n ● Physical, Procedural, and Personnel Security \nControls. This section details how the physical \nlocation of the CA is controlled and how employees \nare vetted. \n ● Technical Security Controls. This section explains \nhow the CA key is generated and protected though \nits life cycle. CA key generation is typically done \nthrough an audited, recorded key generation \nceremony to assure certificate users that the CA key \nwas not copied or otherwise compromised during \ngeneration. \n ● Certificate and CRL Profile. The specific policy \nOIDs published in certificates generated by the CA \nare given in this section. The information in this \nsection is sufficient to accomplish the technical \nevaluation of a certificate chain published by this CA. \n ● Specification Administration. The last section \nexplains the procedures used to maintain and update \nthe certificate policy statement itself. \n These policy statements can be substantial documents. \nThe Federal Bridge CA policy statement is 93 pages long; \nother certificate authorities have similarly exhaustive doc-\numents. The aim of these statements is to provide enough \nlegal backing for certificates produced by these CAs so \nthat they can be used to sign legally binding contracts and \nautomate other legally relevant applications. \n 17 S. Chokhani and W. Ford, “ Internet X.509 public key infrastructure: \ncertifi cate policy and certifi cation practices framework, ” IETF RFC \n2527, March 1999. \n 18 S. Chokhani and W. Ford, “ Internet X.509 public key infrastructure: \ncertifi cate policy and certifi cation practices framework, ” IETF RFC \n2527, March 1999. \n" }, { "page_number": 481, "text": "PART | III Encryption Technology\n448\n 11. PKI STANDARDS ORGANIZATIONS \n The PKIX Working Group was established in the fall of \n1995 with the goal of developing Internet standards to \nsupport X.509-based PKIs. These specifications form \nthe basis for numerous other IETF specifications that use \ncertificates to secure various protocols, such as S/MIME \n(for secure email), TLS (for secured TCP connections), \nand IPsec (for securing Internet packets.) \n IETF PKIX \n The PKIX working group has produced a complete set \nof specifications for an X.509-based PKI system. These \nspecifications span 36 RFCs, and at least eight more \nRFCs are being considered by the group. In addition to \nthe basic core of X.509 certificate profiles and verifica-\ntion strategies, the PKIX drafts cover the format of certif-\nicate request messages, certificates for arbitrary attributes \n(rather than for public keys), and a host of other certifi-\ncate techniques. \n Other IETF groups have produced a group of specifi-\ncations that detail the usage of certificates in various pro-\ntocols and applications. In particular, the S/MIME group, \nwhich details a method for encrypting email messages, \nand the SSL/TLS group, which details TCP/IP connec-\ntion security, use X.509 certificates. \n SDSI/SPKI \n The Simple Distributed Security Infrastructure (SDSI) \ngroup was chartered in 1996 to design a mechanism for \ndistributing public keys that would correct some of the \nperceived complexities inherent in X.509. 
In particular, \nthe SDSI group aimed at building a PKI architecture \nthat would not rely on a hierarchical naming system but \nwould instead work with local names that would not have \nto be enforced to be globally unique. The eventual SDSI \ndesign, produced by Ron Rivest and Butler Lampson, 19 \nhas a number of unique features: \n ● Public key-centric design . The SDSI design uses \nthe public key itself (or a hash of the key) as the \nprimary indentifying name. SDSI signature objects \ncan contain naming statements about the holder of a \ngiven key, but the names are not intended to be the \n “ durable ” name of a entity. \n ● Free - form namespaces . SDSI imposes no restrictions \non what form names must take and imposes no \nhierarchy that defines a canonical namespace. \nInstead, any signer may assert identity information \nabout the holder of a key, but no entity is required \nto use (or believe) the identity bindings of any other \nparticular signer. This allows each application to \ncreate a policy about who can create identities, \nhow those identities are verified, and even what \nconstitutes an identity. \n ● Support for groups and roles . The design of many \nsecurity constructions (access control lists, for \nexample) often include the ability to refer to groups \nor roles instead of the identity of individuals. This \nallows access control and encryption operations to \nprotect data for groups, which may be more natural \nin some situations. \n The Simple Public Key Infrastructure (SPKI) group \nwas started at nearly the same time, with goals similar \nto the SDSI effort. In 1997, the two groups were merged \nand the SDSI/SPKI 2.0 specification was produced, \nincorporating ideas from both architectures. \n IETF OpenPGP \n The Pretty Good Privacy (PGP) public key system, cre-\nated by Philip Zimmermann, is a widely deployed PKI \nsystem that allows for the signing and encryption of files \nand email. Unlike the X.509 PKI architecture, the PGP \nPKI system uses the notion of a “ Web of Trust ” to bind \nidentities to keys. The Web of Trust (WoT) 20 replaces the \nX.509 idea of identity binding via an authoritative server, \nwith identity binding via multiple semitrusted paths. \n In a WoT system, the end user maintains a data-\nbase of matching keys and identities, each of which are \ngiven two trust ratings. The first trust rating denotes how \ntrusted the binding between the key and the identity is, \nand the second denotes how trusted a particular identity \nis to “ introduce ” new bindings. Users can create and sign \na certificate as well as import certificates created by other \nusers. Importing a new certificate is treated as an intro-\nduction. When a given identity and key in a database are \nsigned by enough trusted identities, that binding is treated \nas trusted. \n Because PGP identities are not bound by an authori-\ntative server, there is also no authoritative server that can \nrevoke a key. Instead, the PGP model states that the holder \nof a key can revoke that key by posting a signed revoca-\ntion message to a public server. Any user seeing a properly \n 19 R. Rivest and B. Lampson, “ SDSI-A simple distributed security \ninfrastructure, ” Oct. 1996. \n 20 Alfarez Abdul-Rahman, “ The PGP trust model, ” EDI- Forum , April \n1997, available at www.cs.ucl.ac.uk/staff/F.AbdulRahman/docs/ . \n" }, { "page_number": 482, "text": "Chapter | 26 Public Key Infrastructure\n449\n signed revocation message then removes that key from her \ndatabase. 
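The trust computation sketched in the preceding paragraphs can be illustrated with a small, hypothetical model: a key-to-identity binding is accepted once the introducers who signed it carry enough combined trust. The trust levels and threshold below are illustrative only and are not PGP's actual algorithm or data format.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical introducer trust levels; real PGP implementations use their own
# scales and thresholds, so treat these values as illustrative only.
FULL, MARGINAL, NONE = 2, 1, 0

@dataclass
class Binding:
    key_id: str
    identity: str
    signed_by: List[str] = field(default_factory=list)   # introducer key IDs

def binding_is_trusted(binding: Binding,
                       introducer_trust: Dict[str, int],
                       threshold: int = 2) -> bool:
    """Accept a key<->identity binding once the summed trust of the
    introducers who signed it reaches the threshold."""
    score = sum(introducer_trust.get(signer, NONE) for signer in binding.signed_by)
    return score >= threshold

introducer_trust = {"alice-key": FULL, "bob-key": MARGINAL, "carol-key": MARGINAL}

b1 = Binding("dave-key", "dave@example.org", signed_by=["alice-key"])
b2 = Binding("eve-key",  "eve@example.org",  signed_by=["bob-key"])

print(binding_is_trusted(b1, introducer_trust))  # True: one fully trusted introducer
print(binding_is_trusted(b2, introducer_trust))  # False: a single marginal introducer is not enough
```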
Because revocation messages must be signed, \nonly the holder of the key can produce them, so it is impos-\nsible to produce a false revocation without compromising \nthe key. If an attacker does compromise the key, then pro-\nduction of a revocation message from that compromised \nkey actually improves the security of the overall system \nbecause it warns other users not to trust that key. \n 12. PGP CERTIFICATE FORMATS \n To support the unique features of the Web of Trust sys-\ntem, PGP invented a very flexible packetized message \nformat that can encode encrypted messages, signed mes-\nsages, key database entries, key revocation messages, and \ncertificates. This packetized design, described in IETF \nRFC 2440, allows a PGP certificate to contain a vari-\nable number of names and signatures, as opposed to the \nsingle-certification model used in X.509. \n A PGP certificate (known as a transferrable public \nkey ) contains three main sections of packetized data. The \nfirst section contains the main public key itself, poten-\ntially followed by some set of relevant revocation pack-\nets. The next section contains a set of User ID packets, \nwhich are identities to be bound to the main public key. \nEach User ID packet is optionally followed by a set of \nSignature packets, each of which contains an identity and \na signature of the User ID packet and the main public key. \nEach of these Signature packets essentially forms an iden-\ntity binding. Because each PGP certificate can contain \nany number of these User ID/Signature elements, a single \ncertificate can assert that a public key is bound to multi-\nple identities (for example, multiple email addresses that \ncorrespond to a single user), certified by multiple signers. \nThis multiple-signer approach enables the Web of Trust \nmodel. The last section of the certificate is optional and \nmay contain multiple subkeys, which are single-function \nkeys (for example, an encryption-only key) also owned \nby the holder of the main public key. Each of these sub-\nkeys must be signed by the main public key. \n PGP Signature packets contain all the information \nneeded to perform a certification, including time intervals \nfor which the signature is valid. Figure 26.8 shows how \nthe multiname, multisignature PGP format differs from \nthe single-name with single-signature X.509 format. \n 13. PGP PKI IMPLEMENTATIONS \n The PGP PKI system is implemented in commercial \nproducts sold by the PGP corporation, and several open-\nsource projects, including GNU Privacy Guard (GnuPG) \nand OpenPGP. Thawte offers a Web of Trust service that \nconnects people with “ Web of Trust notaries ” that can \nbuild trusted introductions. PGP Corporation operates a \nPGP Global Directory that contains PGP keys along with \nan email confirmation service to make key certification \neasier. \n The OpenPGP group ( www.openpgp.org ) maintains \nthe IETF specification (RFC 2440) for the PGP message \nand certificate format. \n 14. W3C \n The World Wide Web Consortium (W3C) standards \ngroup has published a series of standards on encrypting \nand signing XML documents. These standards, XML \nSignature and XML Encryption, have a companion PKI \nspecification called XKMS (XML Key Management \nSpecification). \n The XKMS specification describes a meta-PKI that \ncan be used to register, locate, and validate keys that \nmay be certified by an outside X.509 CA, a PGP refer-\nrer, a SPKI key signer, or the XKMS infrastructure itself. 
\nThe specification contains two protocol specifications, \nX-KISS (XML Key Information Service Specification) \nand X-KRSS (XML Key Registration Service Specifi-\ncation). X-KISS is used to find and validate a public \nKey\nSubject Name\nSubject Name\nIssuer\nIssuer\nValidity Time\nValidity Time\nSubject\nIssuer\nValidity Time\nKey\nSubject Alt Name\nSignature\nSignature\nSignature\n...\nSubkey\nSubkey\n...\nSimplified X.509\nCertificate\nStructure\nSimplified PGP\nCertificate\nStructure\n FIGURE 26.8 Comparing X.509 and PGP certificate structures. \n" }, { "page_number": 483, "text": "PART | III Encryption Technology\n450\n key referenced in an XML document, and X-KRSS is \nused to register a public key so that it can be located by \nX-KISS requests. \n 15. ALTERNATIVE PKI ARCHITECTURES \n PKI systems have proven remarkably effective tools for \nsome protocols, most notably SSL, which has emerged \nas the dominant standard for encrypting Internet traffic. \nDeploying PKI systems for other types of applications \nor as a general key management system has not been as \nsuccessful. The differentiating factor seems to be that \nPKI keys for machine end-entities (such as Web sites) do \nnot encounter usability hurdles that emerge when issuing \nPKI keys for human end-entities. Peter Gutmann 21 has a \nnumber of overviews of PKI that present the fundamen-\ntal difficulties of classic X.509 PKI architectures. Alma \nWhitten and Doug Tygar 22 published “ Why Johnny Can’t \nEncrypt, ” a study of various users attempting to encrypt \nemail messages using certificates. This study showed \nsubstantial user failure rates due to the complexities of \nunderstanding certificate naming and validation practices. \nA subsequent study 23 showed similar results when using \nX.509 certificates with S/MIME encryption in Microsoft \nOutlook Express. \n 16. MODIFIED X.509 ARCHITECTURES \n Some researchers have proposed modifications or rede-\nsigns of the X.509 architecture to make obtaining a certif-\nicate easier and to lower the cost of operating applications \nthat depend on certificates. The goal of these systems is \noften to allow Internet-based services to use certificate-\nbased signature and encryption services without requiring \nthe user to consciously interact with certification services \nor even understand that certificates are being utilized. \n Perlman and Kaufman’s User-Centric PKI \n Perlman and Kaufman proposed the User-Centric PKI, 24 \nwhich allows the user to act as his own CA, with authen-\ntication provided through individual registration with \nservice providers. This method has several features that \nattempt to protect user privacy through allowing the user \nto pick the attributes that are visible to a specific service \nprovider. \n Gutmann’s Plug and Play PKI \n Peter Gutmann’s proposed “ Plug and Play PKI ” 25 pro-\nvides for similar self-registration with a service provider \nand adds location protocols to establish ways to contact \ncertifying services. The goal is to build a PKI that pro-\nvides a reasonable level of security and that is essentially \ntransparent to the end user. \n Callas’s Self-Assembling PKI \n In 2003, Jon Callas 26 proposed a PKI system that would \nuse existing standard PKI elements bound together by a \n “ robot ” server that would examine messages sent between \nusers and attempt to find certificates that could be used \nto secure the message. 
In the absence of an available \ncertificate, the robot would create a key on behalf of the \nuser and send a message requesting authentication. This \nsystem has the benefit of speeding deployment of PKI \nsystems for email authentication, but it loses many of the \nstrict authentication attributes that drove the development \nof the X.509 and IETF PKI standards. \n 17. ALTERNATIVE KEY MANAGEMENT \nMODELS \n PKI systems can be used for encryption as well as dig-\nital signatures, but these two applications have different \noperational characteristics. In particular, systems that \nuse PKIs for encryption require that an encrypting party \nhas the ability to locate certificates for its desired set of \nrecipients. In digital signature applications, a signer only \nrequires access to his own private key and certificate. The \ncertificates required to verify the signature can be sent \nwith the signed document, so there is no requirement for \nverifiers to locate arbitrary certificates. These difficulties \nhave been identified as factors contributing to the diffi-\nculty of practical deployment of PKI-based encryption \nsystems such as S/MIME. \n In 1984, Adi Shamir 27 proposed an Identity-Based \nEncryption (IBE) system for email encryption. In the \n 21 P. Gutmann, “ Plug-and-play PKI: A PKI your mother can use, ” in \n Proc. 12th Usenix Security Symp., Usenix Assoc., 2003, pp. 45 – 58. \n 22 A. Whitten and J.D. Tygar, “ Why Johnny can’t encrypt: a usabil-\nity evaluation of PGP 5.0, ” in Proceedings of the 8th USENIX Security \nSymposium, August 1999. \n 23 S. Garfi nkel and R. Miller, “ Johnny 2: A user test of key continu-\nity management with S/MIME and outlook express, ” Symposium on \nUsable Privacy and Security, 2005. \n 24 R. Perlman and C. Kaufman, “ User-centric PKI ” , 7 th symposium \non identity and trust on the internet. \n 25 P. Gutmann, “ Plug-and-Play PKI: A PKI Your Mother Can Use, ” \nin Proc. 12th Usenix Security Symp., Usenix Assoc., 2003, pp. 45 – 58. \n 26 J. Callas, “ Improving Message Security With a Self-Assembling \nPKI, ” In 2nd Annual PKI Research Workshop Pre-Proceedings , April \n2003, http://citeseer.ist.psu.edu/callas03improving.html . \n 27 A. Shamir, “ Identity-based Cryptosystems and Signature Schemes, ” \n Advances in Cryptology – Crypto ’ 84, Lecture Notes in Computer \nScience, Vol. 196, Springer-Verlag, pp. 47 – 53, 1984. \n" }, { "page_number": 484, "text": "Chapter | 26 Public Key Infrastructure\n451\n identity-based model, any string can be mathematically \ntransformed into a public key, typically using some pub-\nlic information from a server. A message can then be \nencrypted with this key. To decrypt, the message recipient \ncontacts the server and requests a corresponding private \nkey. The server is able to mathematically derive a private \nkey, which is returned to the recipient. Shamir disclosed \nhow to perform a signature operation in this model but \ndid not give a solution for encryption. \n This approach has significant advantages over the tra-\nditional PKI model of encryption. The most obvious is \nthe ability to send an encrypted message without locating \na certificate for a given recipient. There are other points \nof differentiation: \n ● Key recovery. In the traditional PKI model, if a \nrecipient loses the private key corresponding to a \ncertificate, all messages encrypted to that certificate’s \npublic key cannot be decrypted. In the IBE model, \nthe server can recompute lost private keys. 
If mes-\nsages must be recoverable for legal or other business \nreasons, PKI systems typically add mandatory sec-\nondary public keys to which senders must encrypt \nmessages to. \n ● Group support. Since any string can be transformed \nto a public key, a group name can be supplied \ninstead of an individual identity. In the traditional \nPKI model, groups are done by either expanding a \ngroup to a set of individuals at encrypt time or issu-\ning group certificates. Group certificates pose serious \ndifficulties with revocation, since individuals can \nonly be removed from a group as often as revocation \nis updated. \n In 2001, Boneh and Franklin gave the first fully \ndescribed secure and efficient method for IBE. 28 This \nwas followed by a number of variant techniques, includ-\ning Hierarchical Identity-Based Encryption (HIBE) and \nCertificateless Encryption. HIBE allows multiple key \nservers to be used, each of which control part of the \nnamespace used for encryption. Certificateless 29 encryp-\ntion adds the ability to encrypt to an end user using an \nidentity but in such a way that the key server cannot read \nmessages. IBE systems have been commercialized and \nare the subject of standards under the IETF (RFC 5091) \nand IEEE (1363.3). \n 28 D. Boneh and M. Franklin, “ Identity-based encryption from the \nWeil Pairing, ” SIAM J. of Computing, Vol. 32, No. 3, pp. 586 – 615, \n2003. \n 29 S. S. Al-Riyami, K. Paterson, “ Certifi cateless public key cryptog-\nraphy, ” In: C. S. Laih (ed.), Advances in Cryptology – Asiacrypt 2003 , \n Lecture Notes in Computer Science, Vol. 2894, pp. 452 – 473, Springer-\nVerlag, 2003. \n" }, { "page_number": 485, "text": "This page intentionally left blank\n" }, { "page_number": 486, "text": "453\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Instant-Messaging Security \n Samuel J. J. Curry \n RSA \n Chapter 27 \n Instant messaging (IM) has emerged as one of the most \ncommonplace, prevalent technologies on the Internet. \nYou would have to practically live under a rock (or \nat least not own a computer, personal digital assistant \n[PDA], or cell phone) to not have used it, much less to \nnot know what it is. Luddites 1 notwithstanding, most \npeople who use IM and even many people in information \ntechnology (IT) do not know how it works, why it is here, \nand what it means. \n 1. WHY SHOULD I CARE ABOUT \nINSTANT MESSAGING? \n When considering IM, it is important to realize that it is \nfirst and foremost a technology; it is not a goal in and of \nitself. Like an ERP 2 system, an email system, a database \nor directory, or a provisioning system, IM must ultimately \nserve the business: It is a means to an end. The end should \nbe measured in terms of quantifiable returns and should \nbe put in context. Before engaging in an IM project, you \nshould be clear about why you are doing it. 
The basic \nreasons you should consider IM are: \n ● Employee satisfaction \n ● Improving efficiency \n ● Performing transactions (some business transactions \nhave been built, as you’ll see later, to use IM \ninfrastructures; those who do this are aware of it \nand those who have not seen it before are frequently \nhorrified) \n ● Improving communications \n ● Improving response times and timelines \n In many deployments, IM can be a valuable contribu-\ntor to business infrastructure, but it shouldn’t be adopted \nwithout due consideration of business value (that is, \nis this worth doing?), business risk (that is, what is at \nstake?), and the people, processes, and technologies that \nwill make it valuable and secure. This chapter should \nhelp you make a plan, keep it current, and make sure that \nit makes a difference. \n Business decisions revolve around the principle of \nacceptable risk for acceptable return, and as a result, \nyour security decisions with respect to IM are effectively \n business decisions. To those of you reading this with a \nsecurity hat on, you’ve probably seen the rapprochement \nof security and business in your place of work: IM is no \nexception to that. So let’s look at IM, trends, the busi-\nness around it, and then the security implications. \n 2. WHAT IS INSTANT MESSAGING? \n IM is a technology in a continuum of advances (q.v) that \nhave arisen for communicating and collaborating over the \nInternet. The most important characteristic of IM is that \nit has the appearance of being in “ real time, ” and for all \nintents and purposes it is in real time. 3 Of course, some \nIM systems allow for synchronization with folks who \n 1 Luddites were a British social movement in the textile industry who \nprotested against the technological changes of the Industrial Revolution \nin the 19 th century ( http://en.wikipedia.org/wiki/Luddites ). \n 2 Enterprise resource planning is a category of software that ties together \noperations, fi nance, staffi ng, and accounting. Common examples include \nSAP and Oracle software. \n 3 The debate over what constitutes real time is a favorite in many tech-\nnical circles and is materially important when dealing with events and \ntheir observation in systems in which volumes are high and distances \nand timing are signifi cant. When dealing with human beings and the \nrelatively simple instances of whom we interact with and our percep-\ntions of communications, it is far simpler to call this “ real time ” than \n “ near real time. ” \n" }, { "page_number": 487, "text": "PART | III Encryption Technology\n454\nare offline and come online later through buffering and \nbatching delivery. \n IM technologies predate the Internet, with many early \nmainframe systems and bulletin board systems (BBS) \nhaving early chat-like functionality, where two or more \npeople could have a continuous dialogue. In the post-\nmainframe world, when systems became more distrib-\nuted and autonomous, chatting and early IM technologies \ncame along, too. In the world of the Internet, two basic \ntechnologies have evolved: communications via a central \nserver and communications directly between two peers. \nMany IM solutions involve a hybrid of these two basic \ntechnologies, maintaining a directory or registry of users \nthat then enable a peer-to-peer (P2P) connection. 
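As a toy illustration of that hybrid lookup-then-connect model (and only that; it is not the protocol of any real IM network), a central directory might do nothing more than map screen names to the addresses clients registered from, after which the clients talk peer to peer.

```python
from typing import Dict, Optional, Tuple

class Directory:
    """Toy central registry: maps screen names to the address each client
    registered from.  Real IM services add authentication, presence,
    message relaying, offline buffering, and much more."""
    def __init__(self) -> None:
        self._registry: Dict[str, Tuple[str, int]] = {}

    def register(self, screen_name: str, address: Tuple[str, int]) -> None:
        self._registry[screen_name] = address

    def lookup(self, screen_name: str) -> Optional[Tuple[str, int]]:
        return self._registry.get(screen_name)

# Two clients register with the central server...
directory = Directory()
directory.register("alice", ("192.0.2.10", 5222))
directory.register("bob",   ("192.0.2.20", 5222))

# ...then one looks the other up and would open a direct peer-to-peer
# session to that address (connection handling omitted).
peer = directory.lookup("bob")
print(f"alice would connect directly to bob at {peer}")
```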
\n Some salient features that are relevant for the pur-\nposes of technology and will be important later in our \napproaches to securing instant messaging are as follows: \n ● Simultaneity. IM is a real time or “ synchronous ” \nform of communication — real-time opportunity and \nreal-time risk. \n ● Recording. IM transactions are a form of written \ncommunication, which means that there are logs and \nsessions can be captured. This is directly analogous \nto email. \n ● Nonrepudiation. Instant messaging usually involves \na dialogue and the appearance of nonrepudiation \nby virtue of an exchange, but there is no inherent \nnonrepudiation in most IM infrastructures. For \nexample, talking to “ Bobby ” via IM does not in any \nway prove that it is registered to “ Bobby ” or is actually \n “ Bobby ” at the time you are talking to the other user. \n ● Lack of confidentiality and integrity. There is no \nguarantee that sessions are private or unaltered in \nmost IM infrastructures without the implementation \nof effective encryption solutions. \n ● Availability. Most companies do not have guaran-\nteed service-level agreements around availability \nand yet they depend on IM, either consciously or \nunknowingly. \n Most users of an IM infrastructure also treat IM as an \ninformal form of communication, not subject to the nor-\nmal rules of behavior, formatting, and formality of other \nforms of business communication, such as letters, memo-\nranda, and email. \n 3. THE EVOLUTION OF NETWORKING \nTECHNOLOGIES \n Over time, technology changes and, usually, advances. \nAdvancements in this context generally refer to being \nable to do more transactions with more people for more \nprofit. Of course, generalizations of this sort tend to be \ntrue on the macroscopic level only as the tiny deltas in \ncapabilities, offerings, and the vagaries of markets drive \nmany small changes, some of which are detrimental. \nHowever, over the long term and at the macroscopic \nlevel, advances in technology enable us to do more things \nwith more people more easily and in closer to real time. \n There are more than a few laws that track the evolu-\ntion of some distinct technology trends and the posi-\ntive effects that are expected. A good example of this \nis Moore’s Law, which has yet to be disproven and has \nproven true since the 1960s. Moore’s Law, 4 put simply, \npostulates that the number of transistors that can be effec-\ntively integrated doubles roughly every two years, and the \nresultant computing power or efficiency increases along \nwith that. Most of us in technology know Moore’s Law, \nand it is arguable that we as a society depend on it for \neconomic growth and stimulus: There is always more \ndemand for more computing power (or has been to date \nand for the foreseeable future). A less known further \nexample is Gilder’s Law 5 (which has since been dis-\nproven) that asserts that a similar, related growth in avail-\nable bandwidth occurs over time. \n Perhaps the most important “ law ” with respect to \nnetworks and for instant messaging is Metcalfe’s Law, 6 \nwhich states that the value of a telecommunications net-\nwork increases exponentially with a linear increase in \nthe number of users. (Actually, it states that it is propor-\ntional to the square of the users on the network.) \n Let’s also assume that over time the value of a net-\nwork will increase; the people who use it will find new \nways to get more value out of it. 
The number of con-\nnections or transactions will increase, and the value \nand importance of that network will go up. In a sense, \nit takes time once a network has increased in size for \nthe complexity and number of transactions promised by \nMetcalfe’s Law to be realized. \n What does all this have to do with IM? Let’s tie it \ntogether: \n ● Following from Moore’s Law (and to a lesser \nextent Gilder), computers (and their networks) will \nget faster and therefore more valuable — and so \nconnecting them in near real time (of which IM \nis an example of real-time communications) is a \n natural occurrence and will increase value . \n 4 http://en.wikipedia.org/wiki/Moore%27s_Law . \n 5 www.netlingo.com/lookup.cfm?term \u0003 Gilder ’ s%20Law . \n 6 http://en.wikipedia.org/wiki/Metcalfe%27s_Law . \n" }, { "page_number": 488, "text": "Chapter | 27 Instant-Messaging Security\n455\n Your Workforce \n Whether you work in an IT department or run a small \ncompany, your employees have things they need to do: \nprocess orders, work with peers, manage teams, talk to \ncustomers and to partners. IM is a tool they can and will \nuse to do these things. Look at your workforce and their \nhigh-level roles: Do they need IM to do their jobs? IM is \nan entitlement within a company, not a right. The man-\nagement of entitlements is a difficult undertaking, but it \nisn’t without precedent. For older companies, you prob-\nably had to make a decision similar to the IM decision \nwith respect to email or Internet access. The first response \nfrom a company is usually binary: Allow everyone or dis-\nallow everyone. This reactionary response is natural, and \nin many cases some roles are denied entitlements such as \nemail or Internet access on a regular basis. Keep in mind \nthat in some industries, employees make the difference \nbetween a successful, aggressively growing business and \none that is effectively in a “ maintenance mode ” or, worse, \nis actively shrinking. \n In the remainder of this chapter, we outline factors in \nyour decision making, as follows with the first factor. \n Factor #1 \n Examples of privileged IM entitlement include allowing \ndevelopers, brokers, sales teams, executives, operations, \nand/or customer support to have access. Warning: If you \nprovide IM for some parts of your company and not for \nothers, you will be in a situation in which some employ-\nees have a privilege that others do not. This will have \ntwo, perhaps unintended, consequences: \n ● Employees without IM will seek to get it by abusing \nbackdoors or processes to get this entitlement. \n ● Some employees will naturally be separated from \nother employees and may become the object \nof envy or of resentment. This could cause \nproblems. \n We have also touched on employee satisfaction, and \nit is important to understand the demographics of your \nworkplace and its social norms. It is important to also \nconsider physical location of employees and general \ndemographic considerations with respect to IM because \nthere could be cultural barriers, linguistic barriers, and, \nas we will see later, generational ones, too. \n ● Following from Metcalfe, over time networks will \nbecome increasingly valuable to users of those \nnetworks. \n In other words, IM as a phenomenon is really a tool \nfor increasing connections among systems and networks \nand for getting more value. 
For those of us in the busi-\nness world, this is good news: Using IM, we should be \nable to realize more value from that large IT investment \nand should be able to do more business with more peo-\nple more efficiently. That is the “ carrot, ” but there is a \nstick, too: It is not all good news, because where there is \nvalue and opportunity, there is also threat and risk. \n 4. GAME THEORY AND INSTANT \nMESSAGING \n Whenever gains or losses can be quantified for a given \npopulation, game theory 7 applies. Game theory is used \nin many fields to predict what is basically the social \nbehavior or organisms in a system: people in economics \nand political science, animals in biology and ecology, \nand so on. When you can tell how much someone stands \nto gain or lose, you can build reasonably accurate pre-\ndictive models for how they will behave; and this is in \nfact the foundation for many of our modern economic \ntheories. The fact of the matter is that now that the \nInternet is used for business, we can apply game theory \nto human behavior with Internet technologies, too, and \nthis includes IM. \n On the positive side, if you are seeing more of your \ncolleagues, employees, and friends adopt a technology, \nespecially in a business context, you can be reasonably \nsure that there is some gain and loss equation that points \nto an increase in value behind the technology. Generally, \npeople should not go out of their way to adopt technolo-\ngies simply for the sake of adopting them on a wide \nscale (though actually, many people do just this and then \nsuffer for it; technology adoption on a wide scale and \nover a long period of time generally means that some-\nthing is showing a return on value). Unfortunately, in a \nbusiness context, this may not translate into more busi-\nness or more value for the business. Human beings not \nonly do things for quantifiable, money-driven reasons; \nthey also do things for moral and social reasons. \n Let’s explore the benefits of adopting IM technology \nwithin a company, and then we can explore the risks a \nlittle more deeply. \n 7 http://en.wikipedia.org/wiki/Game_Theory . \n Factor #1: Do your employees need IM? If so, which \nemployees need IM and what do they need it for? \n" }, { "page_number": 489, "text": "PART | III Encryption Technology\n456\nThis is a generation that grows up in a home with mul-\ntiple televisions, multiple computers, cell phones from a \nyoung age, PowerPoint in the classroom, text messaging, \nemail, 9 and IM. \n The typical older-generation values in a generational \nconflict that we must watch out for are assuming that the \nyounger generation is inherently more lazy, is looking \nfor unreasonable entitlement, wants instant gratification, \nor is lacking in intelligence and seasoning. If you catch \nyourself doing this, stop yourself and try to empathize \nwith the younger folks. Likewise, the younger genera-\ntion has its pitfalls and assumptions; but let’s focus on the \nyounger, emerging generation, because they will soon be \nentering the workforce. If you find yourself assuming that \nmultitasking and responding to multiple concurrent stim-\nuli is distracting and likely to produce a lack of efficiency, \nstop and run through a basic question: Is what’s true for \nyou immediately true for the people you are interacting \nwith? This leads to Factors 3 and 4. 
\n Factor #3 \n With respect to IM, does IM (and the interruptions it \ncreates) have to mean that someone is less efficient, or \ncould they be more efficient because of it? As we’ve \nseen, it is possible that many younger employees can \nhave multiple IM conversations and can potentially get \na lot more done in less time compared to either sending \nout multiple emails or waiting for responses. In many \nrespects, this question is similar to the questions that the \nBlackBerry raised when it was introduced to the work-\nforce, and there are three ways that it can be answered: \n ● In some cases, jobs require isolation and focus, and \na culture of IM can create conditions that are less \neffective. \n ● In some cases, it doesn’t matter. \n ● In some cases, some employees may be much more \neffective when they receive maximum stimulus and \ninput. \n Factor #4 \n Factor #2 \n Economists generally hold that employees work for \nfinancial, moral, and social reasons. 8 Financial reasons \nare the most obvious and, as we’ve seen, are the ones \nmost easily quantified and therefore subject to game \ntheory; companies have money, and they can use it to \nincent the behaviors they want. However, we as human \nbeings also work on things for moral reasons, as is the \ncase with people working in nonprofit organizations or \nin the open-source movement. These people work on \nthings that matter to them for moral reasons. \n However, the social incentives are perhaps the most \nimportant for job satisfaction. In speaking recently \nwith the CIO of a large company that banned the use of \nexternal IM, the CIO was shocked that a large number \nof talented operations and development people refused \nlucrative offers on the grounds that IM was disallowed \nwith people outside the company. This led to a discus-\nsion of the motivators for an important asset for this \ncompany: attracting and keeping the right talent. The \nlesson is that if you want to attract the best, you may \nhave to allow them to use technologies such as IM. \n After you have determined whether or not IM is \nneeded for the job (Factor #1), interview employees on \nthe uses of IM in a social context: with whom do they IM \nand for what purposes? Social incentives include a large \nnumber of social factors, including keeping in touch with \nparents, siblings, spouses, and friends but also with col-\nleagues overseas, with mentors, and for team collabora-\ntion, especially over long distances. \n Generational Gaps \n An interesting phenomenon is observable in the generation \nnow in schools and training for the future: They multitask \nfrequently. Generational gaps and their attendant conflicts \nare nothing new; older generations are in power and are \nseen to bear larger burdens, and younger generations are \noften perceived in a negative light. Today IM may be at \nthe forefront of yet another generational conflict. \n If you’ve observed children recently, they do more \nall at once than adults have done in the past 20 years. \n 8 http://en.wikipedia.org/wiki/Incentive . The actual incentive group-\ning that works on human beings is subject to debate, but it is limited to \neconomic, moral, and social for the purposes of this chapter. This is a \nfascinating subject area well worth reading more about. \n 9 To my amusement, my goddaughter, who is 11, recently told me that \n “ email was old fashioned ” and she couldn’t believe that I used it so \nheavily for work! \n Factor #2: Is IM important as a job satisfaction \ncomponent? 
\n Factor #3: Does IM improve or lessen efficiency? \n Factor #4: Will this efficiency change over time, or is it in \nfact different for different demographics of my workforce? \n" }, { "page_number": 490, "text": "Chapter | 27 Instant-Messaging Security\n457\nMetcalfe’s Law. This is why some companies find that \na small team using IM (where a built in business pro-\ncess on an IM application has grown fast, with real-time \nresponse times for some processes) has now reached the \npoint where sizeable business and transactions are con-\nducted over IM. \n Factor #5 \n Factor #5: Does your company have a need or dependency \non IM to do business? \n If this is the case, you need to understand immediately \nwhich applications and infrastructures you rely on. You \nshould begin a process for examining the infrastructure \nand mapping out the business processes: \n ● Where are the single points of failure? \n ● Where does liability lie? \n ● What is availability like? \n ● What is the impact of downtime on the business? \n ● What is the business risk? \n ● What are your disaster recovery and business \ncontinuity options? \n The answers to these questions will lead to natural \naction plans and follow the basic rule of acceptable risk \nfor acceptable return, unless you find you have a regu-\nlatory implication (in which case, build action plans \nimmediately). \n Factor #6 \n Factor #6: Are you considering deploying a technology \nor process on an IM infrastructure? \n If this is the case, you need to understand immediately \nwhich applications and infrastructures you will rely on. \nYou should begin a process for understanding the infra-\nstructure and quantifying and managing the business \nrisk. Again, make sure, as with the workforce, that you \nin fact need the IM infrastructure in the first place. \n 5. THE NATURE OF THE THREAT \n There are some clear threats, internal and external, and \nboth inadvertent and malicious. These different threats \ncall for the implementation of different countermeasures. \n Figure 27.1 shows a simple grid of the populations that a \nsecurity professional will have to consider in the context \nof their security postures for IM. \n Consider generational differences and the evolution of \nyour workforce. This may all be moot, or there may be \na de facto acceptance of IM over time, much as there \nwas with email and other, older technologies in compa-\nnies. It is interesting to note that younger, newer com-\npanies never consider the important factors with respect \nto IM because they culturally assume it is a right . This \nassumption may form the basis of some real conflicts in \nthe years to come. Imagine companies that shut off new \ntechnologies being sued over “ cruel and unusual ” work \nconditions because they are removing what employees \nassume to be a right. Of course, this conflict is small \nnow, but it is important to have a process for dealing \nwith new technologies within the corporation rather than \nbeing blindsided by them as they emerge or, worse, as \na new generation of users find themselves cut off from \nstimuli and tools that they consider necessary for their \njob or for their quality of life. \n In the end, the first four factors should help you define \nthe following three items: \n ● Do you need IM as a company? \n ● Why do employees need it? \n – To do their jobs? \n – To improve efficiency? \n – To do more business? \n – To work with peers? \n – To improve employee satisfaction? \n ● Who needs it? 
\n Without answers to these questions, which are all about \nthe workforce as a whole, the role of IM will not be eas-\nily understood nor established within the company. \n Transactions \n Some companies have taken a bold step and use IM infra-\nstructure for actual business processes. These are typi-\ncally younger, fast-growing companies that are looking \nfor more real-time transactions or processes. They typi-\ncally accept a higher level of risk in general in exchange \nfor greater potential returns. The unfortunate companies \nare the ones that have built product systems and pro-\ncesses on an IM infrastructure unknowingly and now \nhave to deal with potentially unintended consequences of \nthat infrastructure. \n How does this happen? The infrastructures for IM \non the Internet are large, ubiquitous, and fairly reliable, \nand it is natural that such infrastructures will get used. \nAs we saw in the section on the evolution of Internet \ntechnologies, users will find ways to increase complexity \nand the value of networks over time, in essence fulfilling \n" }, { "page_number": 491, "text": "PART | III Encryption Technology\n458\n Malicious Threat \n We’ve looked at the good guys, who are basically look-\ning within the company or are perhaps partners looking to \nuse a powerful, real-time technology for positive reasons: \nmore business with more people more efficiently. Now it \nis time to look at the bad guys: black hats. 10 \n In the “ old days, ” black hats were seen to be young kids \nin their parents ’ basements, or perhaps a disgruntled techie \nwith an axe to grind. These were people who would invest \na disproportionate amount of time in an activity, spend-\ning hundreds of hours to gain fame or notoriety or to enact \nrevenge. This behavior led to the “ worm of the week ” and \nmacro-viruses. They were, in effect, not a systematic threat \nbut were rather background noise. We will calls these folks \n “ amateurs ” for reasons that will become clear. \n There have also always been dedicated black hats, \nor “ professionals, ” who plied their trade for gain or as \nmercenaries. These folks were at first in the minority \nand generally hid well among the amateurs. In fact, they \nhad a vested interest in seeing the proliferation of “ script \nkiddies ” who could “ hack ” easily: This activity created \nbackground noise against which their actions would go \nunnoticed. Think of the flow of information and activ-\nity, of security incidents as a CSI scene where a smart \ncriminal has visited barber shops, collected discarded hair \nfrom the floors, and then liberally spread them around the \ncrime scene to throw off the DNA collection of forensic \ninvestigators. This is what the old-world professionals did \nand why they rejoiced at the “ worms of the week ” that \nprovided a constant background noise for them to hide \ntheir serious thefts. \n Now we come to the modern age, and the profession-\nals are in the majority. The amateurs have grown up and \nfound that they can leave their parents ’ basements and go \nout and make money working for real organizations and \ncompanies, plying their skills to abuse the Internet and \nsystems for real gain. This is what led to the proliferation \nof spyware; and because it is an economic activity, we \ncan quantify losses and gains for this population and can \nbegin to apply game theory to predicting their behaviors \nand the technologies that they will abuse for gain . 
\n The bad guys are now a vested interest, as has been well documented 11 and analyzed; they are a sustained, real, commercial interest and present a clear and present risk to most IT infrastructures. Keep in mind the following general rules about these online criminals (spammers, spyware writers, virus writers, phishers, pharmers, and the like): 
\n ● It is not about ego or a particular trick; they are not above using or abusing any technology. 
\n ● They do what they do to make money. This is your money they are taking. They are a risk to you and to your company. 
\n ● They are sophisticated; they have supply and distribution agreements and partners, they have SLAs 12 and business relationships, and they even have quality labs and conferences. 
\n In general, online criminals will seek to exploit IM if they can realize value in the target and if they can efficiently go after it. IM represents a technology against which it is easy for black hats to develop exploits, and even relatively small returns (such as a 1% click rate on SPIM, which stands for spam over instant messaging) would have enormous potential value. 
\n Factor #7 
\n Factor #7: Does the value to the company of information and processes carried over IM represent something that is a valuable target (because it can either affect your business or realize a gain)? This should include the ability to blackmail employees and partners: Can someone learn things about key employees that they could use to threaten or abuse employees and partners? 
\n The answer to this question will help put in perspective the potential for IM technology to be abused. 
\n FIGURE 27.1 Populations that present a corporate risk and the correct responses to each (a grid of inadvertent versus malicious and random versus motivated actors: accidents call for education; someone trying to be more productive calls for education and a solid policy on legitimate IM uses; amateurs and script kiddies call for good security hygiene; professional criminals call for education, tools, policies, and processes). 
\n 10 This term was a difficult one to choose. I opted not to go with crackers or hackers but rather with black hats because that is the most neutral term to refer to malicious computer exploiters. 
\n 11 www.rsa.com/blog/blog.aspx#Security-Blog . 
\n 12 Service-level agreement. 
\n" }, { "page_number": 492, "text": "Chapter | 27 Instant-Messaging Security\n459
\n Factor #8 
\n Factor #8: If the IM technology were abused or compromised, what would be the risk to the business? 
\n SPIM, worms, viruses, spyware, Trojans, rootkits, backdoors, and other threats can spread over IM as readily as over email, file shares, and other transmission vectors — in fact, it is arguable that they can spread more readily via IM. Will an incident over IM cause an unacceptable risk to the business? This should be answered in the same way as "Will an incident over email cause an unacceptable risk to the business?" For most organizations the answer should always be yes. 
\n Vulnerabilities 
\n Like any form of software (or hardware), IM applications and infrastructure are subject to vulnerabilities and weaknesses from poor configuration and implementation. Most of these applications do not have the same degree of rigor around maintenance, support, and patching as other enterprise software applications. 
As a result, it is important to have processes for penetration testing and security audits and to establish a relationship, if possible, with manufacturers and distributors for enterprise-caliber support. In many cases, the total cost of ownership of an IM infrastructure and applications may rise considerably to make up for this lack. For this reason, using the freeware services may be a temptation, but the risks may quickly outweigh the savings. 
\n Man-in-the-Middle Attacks 
\n A man-in-the-middle attack is a class of attack in which a third party acts as a legitimate or even invisible broker. As shown in Figure 27.2 , an attacker is posing to each user in an IM transaction as a legitimate part of the process while in fact recording or relaying information. This is a common attack philosophy, and without basic mutual authentication or encryption tools, it is inexpensive for black hats to carry out in a wide-scale manner. 
\n As a security professional, you can monitor IM protocols and the IP addresses with which they communicate, allowing IM to and from only certain recognized hubs. Even this is not perfect, because "X-in-the-middle" attacks in their most generic form can include everything from Trojans and keyloggers to line taps. It is also possible to monitor communications among peers, although effectively looking for man-in-the-middle attacks in this way is difficult. 
\n Phishing and Social Engineering 
\n Social engineering is the practice of fooling someone into giving up something they wouldn't otherwise surrender through the use of psychological tricks. Social engineers rely on the normal behavior of people presented with data or a social situation to respond in a predictable, human way. An attack of this sort will rely on presenting trusted logos and a context that seems normal but is in fact designed to create a vulnerability that the social engineer can exploit. This is relevant to IM because an attacker can choose an IM identity similar to one with which the user normally communicates. The simplest attack of all is to get an identity that is similar to a boss, sibling, friend, or spouse and then provide information to get information. Employees should be educated to always directly check with end users to ensure that they are in fact communicating with whom they believe they are communicating. 
\n Knowledge Is the Commodity 
\n It goes without saying that knowledge of business transactions is itself something that can be turned to profit. The contents of a formula, the nature of an experiment, the value and type of a financial transaction are all important to competitors and to speculators. Stock values rise and fall on rumors of activity, and material knowledge of what is happening can be directly translated into profit. 
\n There are companies, organizations, and individuals that launder money and reap huge profits on the basis of insider information and intellectual property, and the bad guys are looking for exactly this information. Know what is being communicated and educate your employees about the open, real-time, and exposed nature of IM. Make sure that you have solid policies on what is acceptable to communicate over IM. 
\n FIGURE 27.2 Normal versus man-in-the-middle communications (normal or apparent communications: User A directly with User B; actual communications: User A and User B each talking to the man-in-the-middle). 
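\n Returning to the hub-allowlist idea in the man-in-the-middle discussion above, the check can be prototyped with very little code. The following is a minimal, hypothetical sketch, not part of the original text: it assumes a connection log exported as CSV with src_ip, dst_ip, and dst_port columns, an illustrative mapping of well-known default IM ports, and a placeholder allowlist of recognized hub networks, all of which would have to be adjusted to your own environment and verified against current client behavior. 

# Hypothetical audit: flag IM-protocol connections that bypass recognized hubs.
# Assumes a connection log exported as CSV with columns: src_ip,dst_ip,dst_port.
import csv
import ipaddress

# Illustrative default ports for common IM protocols; verify against your clients.
IM_PORTS = {
    5222: "XMPP/Jabber",
    1863: "MSN/Windows Live Messenger",
    5190: "AIM/ICQ (OSCAR)",
    5050: "Yahoo! Messenger",
}

# Example placeholder: networks of the IM hubs your policy explicitly permits.
ALLOWED_HUBS = [ipaddress.ip_network("203.0.113.0/24")]


def is_recognized_hub(dst_ip: str) -> bool:
    # True if the destination falls inside one of the permitted hub networks.
    addr = ipaddress.ip_address(dst_ip)
    return any(addr in net for net in ALLOWED_HUBS)


def audit(log_path: str) -> None:
    # Report IM-port connections that do not terminate at a recognized hub.
    with open(log_path, newline="") as handle:
        for row in csv.DictReader(handle):
            port = int(row["dst_port"])
            if port in IM_PORTS and not is_recognized_hub(row["dst_ip"]):
                print(f"Out-of-policy {IM_PORTS[port]} session: "
                      f"{row['src_ip']} -> {row['dst_ip']}:{port}")


if __name__ == "__main__":
    audit("connections.csv")  # hypothetical export from a firewall or flow collector

\n Real IM clients frequently fall back to ports 80 and 443, so a port-based audit of this kind complements, rather than replaces, the content filtering discussed later in the chapter. 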
\n" }, { "page_number": 493, "text": "PART | III Encryption Technology\n460\n Factor #9 \n Factor #9: What intellectual property, material infor-\nmation, corporate documents, and transaction data are \nat risk over IM? \n Make sure that you know what people use IM for and, if \nyou do not have the means to control it, consider deny-\ning access to IM or implementing content-filtering tech-\nnologies. With false positives and technology failing to \ntrack context of communications, do not rely heavily \non a technological answer; make sure you have educa-\ntional options and that you document use of IM in your \nenvironment. \n Data and Traffic Analysis \n The mere presence of communications is enough, in some \ncircumstances, to indicate material facts about a busi-\nness or initiative. This is particularly well understood by \nnational governments and interests with “ signals intelli-\ngence. ” You do not have to know what someone is saying \nto have an edge. This has been seen throughout history, \nwith the most obvious and numerous examples during \nWorld War II: If you know that someone is communicat-\ning, that is in effect intelligence. \n Communicating with a lawyer, making a trade, and \nthe synchronizing in real time of that information with \npublic data may construe a material breach if it’s inter-\ncepted. Wherever possible, artificial traffic levels and data \ncontent flags to the outside world should be used to dis-\nguise the nature and times of communication for transac-\ntions that indicate insider information or transactions that \nyou otherwise wouldn’t want the world to know about. \n Factor #10 \n Factor #10: What do transaction types and times tell \npeople about your business? Is it acceptable for people \nto have access to this information? \n Some of the most important events in history have \noccurred because of intelligence not of what was said \nbut rather how, and most important why, a communica-\ntion occurred. \n Unintentional Threats \n Perhaps the most insidious threat isn’t the malicious one; \nit is the inadvertent one. Employees are generally seeking \nto do more work more efficiently. This is what leads to \nthem using their public, Web-based emails for working at \nhome rather than only working in the office. Very often \npeople who are working diligently do something to try to \nwork faster, better, or in more places and they inadvert-\nently cause a business risk. These are, in effect, human \nbehaviors that put the company at risk, and the answer is \nboth an educational one and one that can benefit from the \nuse of certain tools. \n Intellectual Property Leakage \n Just as employees shouldn’t leave laptops in cars or work \non sensitive documents in public places such as an air-\nport or coffee shop, they also should not use public IM \nfor sensitive material. Intellectual property leakage of \nsource code, insider information, trade secrets, and so on \nare major risks with IM when used incorrectly, although \nit should be noted that when deployed correctly, such \ntransactions can be done safely over IM (when encrypted \nand appropriate, with the right mutual authentication). \n Inappropriate Use \n As with Web browsing and email, inappropriate use of \nIM can lead to risks and threats to a business. In particu-\nlar, IM is an informal medium. People have their own \nlingo in IM, with acronyms 13 peculiar to the medium \n(e.g., B4N \u0003 bye for now, HAND \u0003 have a nice day, \nIRL \u0003 in real life, and so on). 
The very informal nature \nof the medium means that it is generally not conducted \nin a businesslike manner; it is more like our personal \ninteractions and notes. When the social glue of an office \nbecomes less business-oriented, it can lead to inappro-\npriate advances, commentary, and exposure. Outlining \ninappropriate use of IM, as with any technology, should \nbe part of the general HR policy of a company, and cor-\nrect business use should be part of a regular regimen of \nbusiness conduct training. \n Factor #11 \n Factor #11: What unintended exposure could the com-\npany face via IM? Which populations have this information \nand require special training, monitoring, or protection? \n Make sure that your company has a strategy for catego-\nrization and management of sensitive information, in \n 13 A good source on these is found at AOL ( www.aim.com/acronyms.\nadp ). \n" }, { "page_number": 494, "text": "Chapter | 27 Instant-Messaging Security\n461\nparticular personally identifiable information, trade \nsecrets, and material insider information. \n Regulatory Concerns \n Last, but far from least, are the things that simply must \nbe protected for legal reasons. In an age where customer \ninformation is a responsibility, not a privilege, where \ncredit-card numbers and Social Security numbers are bar-\ntered and traded and insider trading can occur in real time \n(sometimes over IM), it is imperative that regulatory con-\ncerns be addressed in an IM policy. If you can’t imple-\nment governance policies over IM technology, you might \nhave to ban IM use until such time as you can govern it \neffectively — and you very well may have to take steps to \nactively root out and remove IM applications, with dras-\ntic consequences for those who break the company’s IM \npolicy. Examples of these are common in financial insti-\ntutions, HR organizations, and healthcare organizations. \n Factor #12 \n Factor #12: Do you have regulatory requirements that \nrequire a certain IM posture and policy? \n No matter how attractive the technology, you may not \nbe able to adopt IM if the regulatory concerns aren’t \naddressed. If you absolutely need it, the project to adopt \ncompliant IM will be driven higher in the priority queue; \nbut the basic regulatory requirement and penalties could \nbe prohibitive if this isn’t done with utmost care and \nattention. \n Remember also that some countries have explicit \nregulations about monitoring employees. In some juris-\ndictions in Europe and Asia in particular, it is illegal to \nmonitor employee behavior and actions . This may seem \nalien to some in the United States, but multinationals and \ncompanies in other regions must conform to employee \nrights requirements. \n 6. COMMON IM APPLICATIONS \n IM is a fact of life. Now it is time to decide which appli-\ncations and infrastructures your company can and will be \nexposed to. You most likely will want to create a policy \nand to track various uses of IM and develop a posture and \neducational program about which ones are used in which \ncontexts. You will want to review common IM applica-\ntions in the consumer or home user domain because they \nwill find their ways into your environment and onto your \nassets. 
For example, many people install IM applications for personal use on laptops that they then take out of the company environment; they aren't using them at work, but those applications are on systems that have company intellectual property and material on them, and they are used on insecure networks once they leave the building. 
\n Consumer Instant Messaging 
\n Numbers of subscribers are hard to come by in a consistent manner, although some analyst firms and public sites present disparate numbers. Wikipedia has a current view and commentary on relative sizes of IM networks 14 and concentrations that are worth examining and verifying in a more detailed fashion. The major IM programs are Windows Live Messenger/MSN, Skype, Jabber, AIM, Yahoo! Messenger, and eBuddy. There is also a good comparison of the technologies and infrastructure in use with various tools. 15 Others include a host of smaller applications such as ICQ and local ones, the most notable of which is QQ, which is used primarily in China. 
\n It is important to keep in mind who owns the infrastructures for private IM applications. In effect, the IM backbone passes information in the clear (unless an encryption program is used), and the owners of the infrastructure can see and collect data on who is communicating with whom, how, and what they are saying. 
\n It could, for instance, with respect to quarterly earnings or an investigation or lawsuit, be materially important to know with whom the CFO is communicating and at what times. As a result, companies are well advised to educate employees on the risks of IM in general and the acceptable uses for certain applications. It may be acceptable to talk to your husband or wife on Yahoo! or QQ, but is it acceptable to talk to your lawyer that way? 
\n Enterprise Instant Messaging 
\n Some companies, such as IBM and Microsoft, offer IM solutions for internal use only. These are readily deployed and allow for good, real-time communications within the company. They of course do not address the issues of bridging communications with the outside world and the general public, but they are good for meeting some needs for improved productivity and efficiency 
\n 14 http://en.wikipedia.org/wiki/Instant_messaging . 
\n 15 http://en.wikipedia.org/wiki/Comparison_of_instant_messaging_clients . 
\n" }, { "page_number": 495, "text": "PART | III Encryption Technology\n462
that are clearly business related. The risk that these pose is in complacency — assuming that the IM application is exclusively used within the organization. Very often, they are used from public places or from private homes and even in some cases from employee-owned assets. As such, the ecosystem for remote access should be carefully considered. 
\n Instant-Messaging Aggregators 
\n There are some programs, such as Trillian, 16 for pulling together multiple IM applications. The danger here is similar to that of any application that aggregates passwords or information: They should come from legitimate companies, such as Cerulean Studios, with real privacy and security policies. Many illegitimate applications pose as IM aggregators, especially "free" ones, and are really in the business of establishing a spyware presence on PCs, especially corporate-owned PCs. 
To be clear, there are real, legiti-\nmate IM aggregators with real value, and you should \nlook to do business with them, read their end-user license \nagreements (EULAs), and deploy and manage them cor-\nrectly. There are also illegitimate companies and organi-\nzations that manufacture spyware but that pose as IM \naggregators; these will come into your environment via \nwell-meaning end users. \n Most illegitimate applications should be caught with \nantispyware applications (note that some antivirus appli-\ncations have antispyware capabilities; verify that this is \nthe case with your vendor) that are resident on PCs, but \nthere are some basic steps you should make sure that \nyour security policies and procedures take into account: \n ● Make sure that you have antispyware software, that it \nis up to date and that it is active. \n ● Make sure that end users know the risks of these \napplications, especially on noncorporate systems \n(i.e., home or user-owned systems). \n ● Make sure that you survey communications into and \nout of the network for “ phone home, ” especially \nencrypted communications and that you have a \nstandard policy and procedure on what course of \naction to take should suspicious communications be \ndiscovered. \n Backdoors: Instant Messaging Via Other \nMeans (HTML) \n Some IM applications have moved to HTML-based inte-\ngrations with their network. The reason is obvious: This \nis a way around the explicit IM protocols being blocked \non corporate networks. Employees want to keep using \ntheir IM tools for personal or professional reasons, and \nthey are finding workarounds. The obvious counter to \nthis is HTML content filtering, especially at the gate-\nways to networks. If your policy disallows IM, make \nsure the content-filtering blacklists are sensitive to IM IP \naddresses and communications. Many IM applications \nwill actively scan for ports that are available on which \nthey can piggyback communications, meaning that if \nyou have any permissive rules for communications, the \nIM application will find it. \n Mobile Dimension \n Computing platforms and systems keep getting smaller \nand converging with their larger cousins; PDAs and \nphones are everywhere, and most major IM networks \nhave an IM client for BlackBerry, cell phones, and the \nlike. On corporate-owned assets with the latest generation \nof mobile technologies, it is fairly simple to lock down \nthese applications to conform to the corporate IM policy. \nIn some instances, however, it is more complex, especially \nin situations where PDAs are employee-owned or man-\naged. In these cases, you actually have a larger potential \nproblem and need a PDA and phone policy; the IM policy \nis secondary to that. \n 7. DEFENSIVE STRATEGIES \n There are four basic postures that you can take within \nyour company with respect to IM (actually this applies \nto all end-user applications more generally, although it \nneeds to be rationalized for IM in particular): \n ● Ban all IM. This is the least permissive and most \nlikely to fail over time. It is normally only advisable \nin cases of extremely high risk and in businesses \nthat are very resistant to technology. This will be the \nhardest to enforce socially if not technically. \n ● Allow internal IM. This is the most common first \nstep, but it demands a careful understanding and \npolicies and procedures to enforce the ban on \n external or consumer IM. \n ● Allow all IM. 
Outright allowing all IM is not a \ncommon first policy, though some companies on due \nconsideration may consider it. \n ● Create a sophisticated IM policy. This is the most \ndifficult to do, but a sophisticated and granular IM \npolicy, integrated with classic security measures, \nasset management, and a rigorous information policy, \n 16 Manufactured by Cerulean Studios: http://www.ceruleanstudios.com . \n" }, { "page_number": 496, "text": "Chapter | 27 Instant-Messaging Security\n463\nis the hallmark of a mature security organization. \nIncidentally, many of these exist already. It is not \nnecessary to reinvent all of them if these sorts of \npolicies have already been worked through and \nare a Web search away. \n 8. INSTANT-MESSAGING SECURITY \nMATURITY AND SOLUTIONS \n Companies will likely go through an evolution of the \nsecurity policy with respect to IM. Many (particularly \nolder companies) begin with a ban and wind up over \ntime settling on a sophisticated IM policy that takes \ninto account asset policies, information policies, human \nbehavior, risk, and corporate goals around employee sat-\nisfaction, efficiency, and productivity. \n Asset Management \n Many asset management solutions such as CA Unicenter, \nIBM Tivoli, and Microsoft SMS manage systems and \nthe software they have. They are most effective for man-\naging corporate-owned assets; make sure that employ-\nees have the right tools, correctly licensed and correctly \nprovisioned. They also make sure that rogue, unlicensed \nsoftware is minimized and can help enforce IM applica-\ntions bans as in the case of a “ ban of all IM ” policy or a \nban on external or consumer IM. \n Built-In Security \n Enterprise IM applications are generally the most readily \nadopted of solutions within a company or organization. \nMany of these IM platforms and “ inside the firewall ” \nIM applications provide some built-in security meas-\nures, such as the ability to auto-block inbound requests \nfrom unknown parties. Many of these features are good \nfor hardening obvious deficiencies in the systems, but by \nthemselves they do not typically do enough to protect \nthe IM infrastructure. \n Keep in mind also that “ inside the firewall ” is often \nmisleading; people can readily sign on to these applica-\ntions via virtual private network from outside the fire-\nwall or even from home computers, with a little work. \nMake sure the access policies and security features built \ninto internal applications are understood and engaged \ncorrectly. \n It is also generally good to ensure that strong authenti-\ncation (multifactor authentication) is used in cases where \npeople will be gaining remote access to the internal IM \napplication. You want to make sure that the employee in \nquestion is in fact the employee and not a family member, \nfriend, or someone who has broken into the employee’s \nhome or hotel room. \n Content Filtering \n A class of relatively new security products has arisen \nover the past few years specifically designed to do con-\ntent filtering on IM. These are still not widely adopted \nand, when adopted, apply to only a limited set of users. \nMost companies that use these tools are those with strong \nregulatory requirements and then only to a limited set of \nusers who expose the company most. The most common \nexamples are insiders in financial firms or employees \nwho can access customer data. Real-time leaks in a regu-\nlated or privileged environment are generally the most \nserious drivers here. 
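\n A rough sense of what IM content filtering involves can be had from a small script. The sketch below is hypothetical and makes several assumptions: a Web proxy log reduced to one requested URL per line, and a handful of illustrative hostname patterns associated with browser-based IM gateways. A production filter would rely on vendor-maintained categories rather than a hand-kept list, and would inspect message content as well as destinations. 

# Hypothetical sketch: scan a proxy URL log for hosts that look like Web-based IM gateways.
import re
from urllib.parse import urlparse

# Illustrative patterns only; a real deployment would use maintained category lists.
IM_HOST_PATTERNS = [
    re.compile(r"(^|\.)webmessenger\.", re.IGNORECASE),
    re.compile(r"(^|\.)meebo\.com$", re.IGNORECASE),
    re.compile(r"(^|\.)talk\.google\.com$", re.IGNORECASE),
]


def looks_like_web_im(url: str) -> bool:
    # Extract the hostname and test it against the known-IM patterns.
    host = urlparse(url).hostname or ""
    return any(pattern.search(host) for pattern in IM_HOST_PATTERNS)


def scan(log_path: str) -> None:
    # Assumes one requested URL per line, e.g. pre-extracted from proxy access logs.
    with open(log_path) as handle:
        for line in handle:
            url = line.strip()
            if url and looks_like_web_im(url):
                print("Possible Web IM session:", url)


if __name__ == "__main__":
    scan("proxy_urls.log")  # hypothetical pre-processed log file

\n Flagged sessions can then feed the logging and archival processes described in the sections that follow, or simply prompt a conversation with the user about the IM policy. 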
\n Classic Security 
\n The perimeter is dead — long live the Internet. It is almost hackneyed these days to say that perimeter-centric security is dead; firewalls and IDSs aren't the solution. In many ways, modern security is about proving a negative — it can't be done. Human beings seeking to do better at their jobs, to communicate with friends and family, or to actively invade and bypass security will find new ways and new vectors around existing security controls. Having said that, it is important to realize that our job as security professionals is first to remove the inexpensive and easy ways to get around security controls and then to efficiently raise the security bar for getting at what matters: the information. It is important to do a real, quantitative analysis of your cost threshold for the controls you are going to put in place and the amount by which they raise the bar, and to compare these to the risk and likelihood of loss. 
\n In this regard, the traditional software for perimeter security still serves a purpose; in fact, many of these vendors are quite innovative at adding new features and new value over time. The classic products of proxies, corporate firewalls, virtual private networks, intrusion detection, and anti-malware continue to raise the bar for the simplest and most inexpensive attack vectors and for stopping inadvertent leakage. This is true of adding layered protection to IM, and the classic security products should be leveraged to raise the security bar (that is, to lower basic business risk). They won't solve the problem, but it is important to use them in a layered approach to reducing business risk. 
\n" }, { "page_number": 497, "text": "PART | III Encryption Technology\n464
\n Compliance 
\n Some industries have strict regulations that require that information be handled in certain ways. These industries have no choice but to comply or face punitive and legal damages and increased risk to their business. It is vital to consult with auditors and compliance departments regarding the implications of IM on corporate compliance. 
\n Also, explicitly keep in mind employee rights legislation that may prohibit monitoring employees' IM communications. There are jurisdictions in which this kind of monitoring is illegal. 
\n Data Loss Prevention 
\n There is a new class of data loss prevention (DLP) product that can discover, classify, categorize, monitor, and enforce information-centric policies (at endpoint, network, datacenter, and "gateway" touchpoints) for a user population — either the whole company or a subset of the workforce population. There are tradeoffs to be made among applications of this type, in particular when it comes to false positives (wrongly identifying information as sensitive) and efficiency at particular times (filtering HR information during benefits enrollment periods or filtering financial-related information at the close of a quarter). This class of product can be a massively powerful tool for enforcing the policy, but its accuracy and its impact on employee efficiency need to be weighed carefully. Further, the vital discovery, classification, and categorization features should be well understood, as should the monitoring and enforcement applications. 
\n Logging 
\n Security information and event management (SIEM) systems are only as effective as what they instrument. 
In \ncombination with content-filtering and DLP, this tech-\nnology can be extremely effective. The best “ real-world ” \nanalogy is perhaps insider trading. It is not impossible \nto commit insider trading, but it isn’t as widespread \nas it could be because the Securities and Exchange \nCommission (and other regulatory bodies outside the \nUnited States) has effective logging and anomaly detec-\ntion and reporting functions to catch transgressors. \nLogging in the form of SIEM on the right controls is \nboth a good measure for active protection and a deter-\nrent; just be sure to mention in your education processes \nthat this is happening, to realize the deterrence benefit. \n Archival \n In conjunction with regulatory requirements, you may \nhave either a process or audit requirement to keep logs for \na certain period of time. Archival, storage, and retrieval \nsystems are likely to form an important part of your post-\nevent analysis and investigation and forensics policies and \nmay actually be legally required for regulatory purposes. \n 9. PROCESSES \n The lifeblood of any policy is the process and the people \nwho enforce that policy. You will need a body of pro-\ncesses and owners that is documented and well main-\ntained. Some of the processes you may need include, but \naren’t limited to, the following. \n Instant-Messaging Activation and \nProvisioning \n When someone is legitimately entitled to IM, how specif-\nically do they get access to the application? How are they \nmanaged and supported? If you have IM, you will have \nissues and will have to keep the application and service \ncurrent and functioning. \n Application Review \n Make sure that you know the state of the art in IM. Which \napplications have centralized structures, and what nations \nand private interests host these? Which applications are \npoorly written, contain weaknesses or, worse, have remote \ncontrol vulnerabilities and are potentially spyware? \n People \n Make sure your IT staff and employees know the policies \nand why they matter. Business relevance is the best incen-\ntive for conformity. Work on adding IM to the corporate \nethics, conduct, information, and general training policies. \n Revise \n Keep the policy up to date and relevant. Policies can eas-\nily fall into disuse or irrelevance, and given the nature of \nadvances in Internet technologies, it is vital that you regu-\nlarly revisit this policy on a quarterly or semiannual basis. \nAlso ensure that your policy is enforceable; a policy that \nis not enforceable or is counterintuitive is useless. \n Audit \n Be sure to audit your environment. This is not auditing in \nthe sense of a corporate audit, although that may also be \na requirement, but do periodic examinations of network \n" }, { "page_number": 498, "text": "Chapter | 27 Instant-Messaging Security\n465\ntraffic for sessions and traffic that are out of policy. IM \nwill leave proprietary protocol trails and even HTML \ntrails in the network. Look in particular for rogue gate-\nways and rogue proxies that have been set up by employ-\nees to work around the corporate policy. \n 10. CONCLUSION \n Remember game theory with respect to your workforce \nand business; people will find ways to do more with the \ntools, networks, and systems at their disposal. They will \nfind ways to use ungoverned technology, such as IM, to \ndo more things with more people more efficiently. 
The \nsiren call of real-time communications is too much to \nresist for a motivated IT department and a motivated \nworkforce who want to do more with the tools that are \nreadily available to them. \n Let’s review a few lists and what you must consider \nin formulating an IM security policy — and remember \nthat this must always be in service to the business. In \nthe following sidebar, “ The 12 Factors, ” consider the \nfactors and the posture in which these put you and your \ncompany. \n The 12 Factors \n Factor #1: Do your employees need IM? If so, which \nemployees need IM and what do they need it for ? \n Factor #2: Is IM important as a job satisfaction component? \n Factor #3: Does IM improve or lessen efficiency? \n Factor #4: Will this efficiency change over time, or is it in \nfact different for different demographics of my workforce? \n Factor #5: Does your company have a need or dependency \non IM to do business? \n Factor #6: Are you considering deploying a technology or \nprocess on an IM infrastructure? \n Factor #7: Does the value to the company of informa-\ntion and processes carried over IM represent something \nthat is a valuable target (because it can either affect your \nbusiness or realize a gain)? This should include the ability \nto blackmail employees and partners; can someone learn \nthings about key employees that they could use to threaten \nor abuse employees and partners? \n Factor #8: If the IM technology were abused or compro-\nmised, what would be the risk to the business? \n Factor #9: What intellectual property, material information, \ncorporate documents, and transaction information is at risk \nover IM? \n Factor #10: What do transaction types and times tell people \nabout your business? Is it okay for people to have access to \nthis information? \n Factor #11: What unintended exposure could the company \nface via IM? Which populations have this information and \nrequire special training, monitoring, or protection? \n Factor #12: Do you have regulatory requirements that \nrequire a certain IM posture and policy? \n Now consider your responses to these factors in the \ncontext of your employees, your partners, your competi-\ntors, and the active threats your company will face. Next, \nconsider the basic risks and returns and the infrastructure \nthat you deploy: \n ● Where are the single points of failure? \n ● Where does liability lie? \n ● What is availability like? \n ● What is the impact of downtime on the business? \n ● What is the business risk? \n ● What are your disaster recovery and business \ncontinuity options? \n Last, consider regulatory requirements and the basic \nbusiness assets you must protect. You will most likely \nhave to create or update your security policy to consider \nIM. 
This will mean having a posture on the following \nitems, at a minimum: \n ● You must have formal, written policies for each of \nthe following: \n – Intellectual property identification, monitoring, \nand protection \n – Sensitive information identification, monitoring, \nand protection \n – Entitlements by role for IM specifically \n – Legitimate, accepted uses for IM and specifically \nprohibited ones (if any) \n – Monitoring of IM traffic \n – Enforcement of IM policies \n – Logging of IM traffic \n – Archival of IM logs \n – Regulatory requirements and needs and the \nprocesses to satisfy them around IM \n ● Education \n – With respect to IM \n – With respect to regulations \n – With respect to intellectual property \n" }, { "page_number": 499, "text": "PART | III Encryption Technology\n466\n Acme Inc.’s Answers to the 12 Factors \n Factor #1: Do your employees need IM? If so, which \nemployees need IM and what do they need it for ? \n Yes! All nonline working employees should have IM to \nallow for increased internal communications. \n Factor #2: Is IM important as a job satisfaction component? \n Yes! \n Factor #3: Does IM improve or lessen efficiency? \n IM should improve efficiency by allowing employees to get \nimmediate answers/results and to be able to pull groups \ntogether quickly, compared to emails. \n Factor #4: Will this efficiency change over time, or is it in \nfact different for different demographics of my workforce? \n Efficiency should grow as adoption and comfort levels with \nIM technologies grow. \n Factor #5: Does your company have a need for or depend-\nency on IM to do business? \n IM cannot be used for external business transactions or \ndiscussions. \n Factor #6: Are you considering deploying a technology or \nprocess on an IM infrastructure? \n Yes. We would need to implement an internal tool to per-\nform IM services. \n Factor #7: Does the value to the company of information \nand processes carried over IM represent something that is a \nvaluable target (because it can either affect your business or \nrealize a gain)? This should include the ability to blackmail \nemployees and partners; can someone learn things about \nkey employees that they could use to threaten or abuse \nemployees and partners? \n Yes. All IM communications must remain internal. \n Factor #8: If the IM technology were abused or compro-\nmised, what would be the risk to the business? \n Data loss: intellectual property. \n Business plan loss: Sensitive information that we can’t \nafford to let the competition see. \n Customer data theft: Some Personally Identifiable Information \n(PII), but all customer-related information is treated as PII. \n Factor #9: What intellectual property, material information, \ncorporate documents, and transaction information is at risk \nover IM? \n All internal data would be at risk. \n Factor #10: What do transaction types and times tell people \nabout your business? Is it okay for people to have access to \nthis information? \n Data will always remain on a need-to-know basis and the \nIM implementation must not result in the loss of data. \n Factor #11: What unintended exposure could the company \nface via IM? Which populations have this information and \nrequire special training, monitoring, or protection? \n Unintentional internal transfer of restricted data to internal \nstaff without the required internal clearances. \n Factor #12: Do you have regulatory requirements that \nrequire a certain IM posture and policy? 
\n We are under PCI, HIPAA, ISO 27001, and SAS70 Type II \nguidelines. \n – With respect to social engineering \n – About enforcement, monitoring, and archival \nrequirements (remember that these can have \na deterrence benefit) \n ● Applications \n – Dealing with consumer IM and the applications \nemployees will try to use \n – Internal, enterprise IM applications \n – Asset management \n ● Processes \n – Provisioning IM and accounts \n – Deprovisioning (via asset management processes) \nillegal IM clients \n – Revoking IM entitlements and accounts \n ● Do you have basic security hygiene configured \ncorrectly for an environment that includes IM? \n – Asset management software \n – Firewalls and proxies \n – Intrusion detection systems \n – Anti-malware \n – Virtual private networks and remote access \n – Strong authentication \n ● Advanced security \n – Monitoring and enforcement with DLP \n – Monitoring and SIEM \n IM, like any other technology, can serve the business \nor be a risk to it. The best situation of all is where it is \nquantified like any other technology and helps promote \nthe ability to attract talent and keep it while serving the \nbusiness — driving more business with more people more \nefficiently and while minimizing risk for business return. \n Example Answers to Key Factors \n Let’s take the example of Acme Inc., as shown in the \nsidebar, “ Acme Inc.’s Answers to the 12 Factors. ” Acme \nis a publicly traded company that has international \noffices and groups that span geographies. \n Giving answers to these simple questions mean that the \nscope of risk and the business relevance is known. These \nanswers can now be used to formulate a security policy \nand begin the IT projects that are needed for enforcement \nand monitoring. \n" }, { "page_number": 500, "text": " Privacy and Access \nManagement \n Part IV \n CHAPTER 28 NET Privacy \n Marco Cremonini, Chiara Braghin and Claudio Agostino Ardagna \n CHAPTER 29 Personal Privacy Policies \n Dr. George Yee and Larry Korba \n CHAPTER 30 Virtual Private Networks \n Jim Harmening and Joe Wright \n CHAPTER 31 Identity Theft \n Markus Jacobsson and Alex Tsow \n CHAPTER 32 VoIP Security \n Dan Wing and Harsh Kupwade Patil \n" }, { "page_number": 501, "text": "This page intentionally left blank\n" }, { "page_number": 502, "text": "469\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n NET Privacy \n Marco Cremonini \n University of Milan \n Chiara Braghin \n University of Milan \n Claudio Agostino Ardagna \n University of Milan \n Chapter 28 \n In recent years, large-scale computer networks have \nbecome an essential aspect of our daily computing envi-\nronment. We often rely on a global information infrastruc-\nture for ebusiness activities such as home banking, ATM \ntransactions, or shopping online. One of the main scien-\ntific and technological challenges in this setting has been \nto provide security to individuals who operate in possibly \nuntrusted and unknown environments. However, beside \nthreats directly related to computer intrusions, epidemic \ndiffusion of malware, and outright frauds conducted \nonline, a more subtle though increasing erosion of indi-\nviduals ’ privacy has progressed and multiplied. 
Such an \nescalating violation of privacy has some direct harmful \nconsequences — for example, identity theft has spread in \nrecent years — and negative effects on the general percep-\ntion of insecurity that many individuals now experience \nwhen dealing with online services. \n Nevertheless, protecting personal privacy from the \nmany parties — business, government, social, or even \ncriminal — that examine the value of personal informa-\ntion is an old concern of modern society, now increased \nby the features of the digital infrastructure. In this chap-\nter, we address these privacy issues in the digital society \nfrom different points of view, investigating: \n ● The various aspects of the notion of privacy and \nthe debate that the intricate essence of privacy has \nstimulated \n ● The most common privacy threats and the possible \neconomic aspects that may influence the way \nprivacy is (and especially is not, in its current status) \nmanaged in most firms \n ● The efforts in the computer science community to \nface privacy threats, especially in the context of \nmobile and database systems \n ● The network-based technologies available to date to \nprovide anonymity in user communications over a \nprivate network \n 1. PRIVACY IN THE DIGITAL SOCIETY \n Privacy in today’s digital society is one of the most debated \nand controversial topics. Many different opinions about \nwhat privacy actually is and how it could be preserved have \nbeen expressed, but still we can set no clear-cut border that \ncannot be trespassed if privacy is to be safeguarded. \n The Origins, The Debate \n As often happens when a debate heats up, the extremes \nspeak louder and, about privacy, the extremes are those \nthat advocate the ban of the disclosure of whatever per-\nsonal information and those that say that all personal \ninformation is already out there, therefore privacy is \ndead. Supporters of the wide deployment and use of \nanonymizing technologies are perhaps the best repre-\nsentatives of one extreme. The chief executive officer of \nSun Microsystems, Scott McNealy, with his “ Get over \nit ” comment, has gained large notoriety for championing \nthe other extreme opinion. 1 \n 1 P. Sprenger, “ Sun on privacy: ‘ get over it, ’ ” WIRED , 1999, www.\nwired.com/politics/law/news/1999/01/17538 . \n" }, { "page_number": 503, "text": "PART | IV Privacy and Access Management\n470\n However, these are just the extremes; in reality net \nprivacy is a fluid concept that such radical positions can-\nnot fully contain. It is a fact that even those supporting \nfull anonymity recognize that there are several limitations \nto its adoption, either technical or functional. On the \nother side, even the most skeptical cannot avoid dealing \nwith privacy issues, either because of laws and norms or \nbecause of common sense. Sun Microsystems, for exam-\nple, is actually supporting privacy protection and is a \nmember of the Online Privacy Alliance, an industry coa-\nlition that fosters the protection of individuals ’ privacy \nonline. \n Looking at the origins of the concept of privacy, \nAristotle’s distinction between the public sphere of \npolitics and the private sphere of the family is often \nconsidered the root. Much later, the philosophical and \nanthropological debate around these two spheres of an \nindividual’s life evolved. John Stuart Mill, in his essay, \n On Liberty , introduced the distinction between the realm \nof governmental authority as opposed to the realm of \nself-regulation. 
Anthropologists such as Margaret Mead have demonstrated how the need for privacy is innate in different cultures that protect it through concealment or seclusion or by restricting access to secret ceremonies. 
\n More pragmatically, back in 1890, the concept of privacy was expressed by Louis Brandeis, later a U.S. Supreme Court Justice, who defined privacy as "The right to be let alone." 2 This straightforward definition represented for decades the reference point for normative and operational privacy considerations and derivative issues and, before the advent of the digital society, a realistically enforceable ultimate goal. The Net has changed the landscape because the very concept of being let alone while interconnected becomes fuzzy and fluid. 
\n In 1948, privacy gained the status of a fundamental right of any individual, being explicitly mentioned in the United Nations Universal Declaration of Human Rights (Article 12): "No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks." 3 However, although privacy has been recognized as a fundamental right of each individual, the Universal Declaration of Human Rights does not explicitly define what privacy is, except for relating it to possible interference or attacks. 
\n Regarding the digital society, less rigorously but quite effectively in practical terms, in July 1993 The New Yorker published a brilliant cartoon by Peter Steiner that since then has been cited and reproduced dozens of times to refer to the supposed intrinsic level of privacy — here in the sense of anonymity or hiding personal traits — that can be achieved by carrying out social relations over the Internet. That famous cartoon shows one dog that types on a computer keyboard and says to the other one: "On the Internet, no one knows you're a dog." 4 The Internet, at least at the very beginning of its history, was not perceived as threatening to individuals' privacy; rather, it was seen as increasing it, sometimes too much, since it could easily let people disguise themselves in the course of personal relationships. Today that belief may look naïve with the rise of threats to individual privacy that have accompanied the diffusion of the digital society. Nevertheless, there is still truth in that cartoon because, whereas privacy is much weaker on the Net than in real space, concealing a person's identity and personal traits is technically even easier. Both aspects concur and should be considered. 
\n Yet an unambiguous definition of the concept of privacy has still not been produced, nor has an assessment of its actual value and scope. It is clear, however, that with the term privacy we refer to a fundamental right and an innate feeling of every individual, not to a vague and mysterious entity. An attempt to give a precise definition at least to some terms that are strictly related to (and often used in place of) the notion of privacy can be found in 5,6 , where the differences among anonymity, unobservability, and unlinkability are pointed out. 
In the digital society \nscenario, anonymity is defined as the state of not being \nidentifiable, unobservability as the state of being indistin-\nguishable, and unlinkability as the impossibility of cor-\nrelating two or more actions/items/pieces of information. \nPrivacy, however defined and valued, is a tangible state of \nlife that must be attainable in both the physical and the \ndigital society. \n The reason that in the two realms — the physical and \nthe digital — privacy behaves differently has been widely \ndebated, too, and many of the critical factors that make \n 2 S. D. Warren, and L. D. Brandeis, “ The right to privacy, ” Harvard \nLaw Review , Vol. IV, No. 5, 1890. \n 3 United Nations, Universal Declaration of Human Rights , 1948, \n www.un.org/Overview/rights.html . \n 4 P. Steiner, “ On the Internet, nobody knows you’re a dog, ” Cartoonbank, \n The New Yorker , 1993, www.cartoonbank.com/item/22230 . \n 5 A. Pfi tzmann, and M. Waidner, “ Networks without user observability – \ndesign options, ” Proceedings of Workshop on the Theory and Application \nof Cryptographic Techniques on Advances in Cryptology (EuroCrypt’85) , \nVol. 219 LNCS Springer, Linz, Austria, pp. 245 – 253, 1986. \n 6 A. Pfi tzmann, and M. K ö hntopp, “ Anonymity, unobservability, and \npseudonymity — a proposal for terminology, ” in Designing Privacy \nEnhancing Technologies , Springer Berlin, pp. 1 – 9, 2001. \n" }, { "page_number": 504, "text": "Chapter | 28 Net Privacy\n471\n a difference in the two realms, the impact of technology \nand the Internet, have been spelled out clearly. However, \ngiven the threats and safeguards that technologies make \npossible, it often remains unclear what the goal of pre-\nserving net privacy should be — being that extreme posi-\ntions are deemed unacceptable . \n Lessig, in his book Free Culture , 7 provided an excel-\nlent explanation of the difference between privacy in the \nphysical and in the digital world: “ The highly inefficient \narchitecture of real space means we all enjoy a fairly \nrobust amount of privacy. That privacy is guaranteed \nto us by friction. Not by law [ … ] and in many places, \nnot by norms [ … ] but instead, by the costs that friction \nimposes on anyone who would want to spy. [ … ] Enter \nthe Internet, where the cost of tracking browsing in par-\nticular has become quite tiny. [ … ] The friction has disap-\npeared, and hence any ‘ privacy ’ protected by the friction \ndisappears, too. ” \n Thus, privacy can be seen as the friction that reduces \nthe spread of personal information, that makes it more \ndifficult and economically inconvenient to gain access \nto it. The merit of this definition is to put privacy into \na relative perspective, which excludes the extremes that \nadvocate no friction at all or so much friction to stop the \nflow of information. It also reconciles privacy with secu-\nrity, being both aimed at setting an acceptable level of \nprotection while allowing the development of the digital \nsociety and economy rather than focusing on an ideal \nstate of perfect security and privacy. \n Even in an historic perspective, the analogy with fric-\ntion has sense. The natural path of evolution of a technol-\nogy is first to push for its spreading and best efficiency. \nWhen the technology matures, other requirements come \nto the surface and gain importance with respect to mere \nefficiency and functionalities. 
Here, those frictions that \nhave been eliminated because they are a waste of effi-\nciency acquire new meaning and become the way to sat-\nisfy the new requirements, in terms of either safety or \nsecurity, or even privacy. It is a sign that a technology \nhas matured but not yet found a good balance between \nold and new requirements when nonfunctional aspects \nsuch as security or privacy become critical because they \nare not well managed and integrated. \n Privacy Threats \n Threats to individual privacy have become publicly \nappalling since July 2003, when the California Security \nBreach Notification Law 8 went into effect. This law was \nthe first one to force state government agencies, compa-\nnies, and nonprofit organizations that conduct business \nin California to notify California customers if personally \nidentifiable information (PII) stored unencrypted in dig-\nital archives was, or is reasonably believed to have been, \nacquired by an unauthorized person. \n The premise for this law was the rise of identity \ntheft , which is the conventional expression that has \nbeen used to refer to the illicit impersonification car-\nried out by fraudsters who use PII of other people to \ncomplete electronic transactions and purchases. The \nCalifornia Security Breach Notification Law lists, as \nPII: Social Security number, driver’s license number, \nCalifornia Identification Card number, bank account \nnumber, credit- or debit-card number, security codes, \naccess codes, or passwords that would permit access to \nan individual’s financial account. 8 By requiring by law \nthe immediate notification to the PII owners, the aim is \nto avoid direct consequences such as financial losses and \nderivate consequences such as the burden to restore an \nindividual’s own credit history. Starting on January 1, \n2008, California’s innovative data security breach notifi-\ncation law also applies to medical information and health \ninsurance data. \n Besides the benefits to consumers, this law has been \nthe trigger for similar laws in the United States — today, \nthe majority of U.S. states have one — and has permit-\nted the flourishing of regular statistics about privacy \nbreaches, once almost absent. Privacy threats and anal-\nyses are now widely debated, and research focused on \nprivacy problems has become one of the most important. \n Figure 28.1 shows a chart produced by plotting data col-\nlected by Attrition.org Data Loss Archive and Database, 9 \none of the most complete references for privacy breaches \nand data losses. \n Looking at the data series, we see that some breaches \nare strikingly large. Etiolated.org maintains some statis-\ntics based on Attrition.org’s database: In 2007, about 94 \nmillion records were hacked at TJX stores in the United \nStates; confidential details of 25 million children have \nbeen lost by HM Revenue & Customs, U.K.; the Dai \nNippon Printing Company in Tokyo lost more than 8 \nmillion records; data about 8.5 million people stored by \na subsidiary of Fidelity National Information Services \n 7 L. Lessig, Free Culture , Penguin Group, 2003, www.free-culture.cc/ . \n 8 California Security Breach Notifi cation Law, Bill Number: SB 1386, \nFebruary \n2002, \n http://info.sen.ca.gov/pub/01-02/bill/sen/sb_1351-\n1400/sb_1386_bill_20020926_chaptered.html . \n 9 Attrition.org Data Loss Archive and Database (DLDOS), 2008, \n http://attrition.org/dataloss/ . 
\n" }, { "page_number": 505, "text": "PART | IV Privacy and Access Management\n472\n were stolen and sold for illegal usage by a former \nemployee. Similar paths were reported in previous years \nas well. In 2006, personal data of about 26.5 million \nU.S. military veterans was stolen from the residence of a \nDepartment of Veterans Affairs data analyst who improp-\nerly took the material home. In 2005, CardSystems \nSolutions — a credit card processing company managing \naccounts for Visa, MasterCard, and American Express —\n exposed 40 million debit- and credit-card accounts in a \ncyber break-in. In 2004, an employee of America Online \nInc. stole 92 million email addresses and sold them to \nspammers. Still recently, in March 2008, Hannaford \nBros. supermarket chain announced that, due to a secu-\nrity breach, about 4.2 million customer credit and debit \ncard numbers were stolen. 10 \n Whereas these incidents are the most notable, the phe-\nnomenon is distributed over the whole spectrum of breach \nsizes (see Figure 28.1 ). Hundreds of privacy breaches are \nreported in the order of a few thousand records lost and \nall categories of organizations are affected, from public \nagencies, universities, banks and financial institutions, \nmanufacturing and retail companies, and so on. \n The survey Enterprise@Risk: 2007 Privacy & \nData Protection , conducted by Deloitte & Touche and \nPonemon Institute, 11 provides another piece of data about \nthe incidence of privacy breaches. Among the survey’s \nrespondents, over 85% reported at least one breach and \nabout 63% reported multiple breaches requiring notifica-\ntion during the same time period. Breaches involving over \n1000 records were reported by 33.9% of respondents; \nof those, almost 10% suffered data losses of more than \n25,000 records. Astonishingly, about 21% of respondents \nwere not able to estimate the record loss. The picture that \nresults is that of a pervasive management problem with \nregard to PII and its protection, which causes a continu-\nous leakage of chunks of data and a few dramatic break-\ndowns when huge archives are lost or stolen. \n It is interesting to analyze the root causes for such \nbreaches and the type of information involved. One source \nof information is the Educational Security Incidents (ESI) \nYear in Review – 2007, 12 by Adam Dodge. This survey \nlists all breaches that occurred worldwide during 2007 at \ncolleges and universities around the world. \n Concerning the causes of breaches, the results over a \ntotal of 139 incidents are: \n ● 38% are due to unauthorized disclosure \n ● 28% to theft (disks, laptops) \n ● 22% to penetration/hacking \n ● 9% to loss of data \n Therefore, incidents to be accounted for by misman-\nagement by employees (unauthorized disclosure and \nloss) account for 47%, whereas criminal activity (pen-\netration/hacking and theft) account for 40%. \n With respect to the type of information exposed dur-\ning these breaches, the result is that: \n ● PII have been exposed in 42% of incidents \n ● Social Security numbers in 34% \n ● Educational information in 11% \n ● Financial information in 7% \n ● Medical information in 5% \n ● Login accounts in 2% \n Again, rather than direct economic consequences or \nillicit usage of computer facilities, such breaches repre-\nsents threats to individual privacy. \n100000000\n10000000\n1000000\n100000\n10000\n1000\nNo. 
FIGURE 28.1 Privacy breaches from the Attrition.org Data Loss Archive and Database up to March 2008 (X-axis: years 2000 – 2008; Y-axis, logarithmic: PII records lost).

10 Etiolated.org, "Shedding light on who's doing what with your private information," 2008, http://etiolated.org/ .
11 Deloitte & Touche LLP and Ponemon Institute LLC, Enterprise@Risk: 2007 Privacy & Data Protection Survey, 2007, www.deloitte.com/dtt/cda/doc/content/us_risk_s%26P_2007%20Privacy10Dec2007final.pdf .
12 A. Dodge, Educational Security Incidents (ESI) Year in Review 2007, 2008, www.adamdodge.com/esi/yir_2006 .

Privacy Rights Clearinghouse is another organization that provides excellent data and statistics about privacy breaches. Among other things, it is particularly remarkable for its analysis of root causes in different sectors, namely the private sector, the public sector (military included), higher education, and medical centers. 13 Table 28.1 reports its findings for 2006.

TABLE 28.1 Root causes of data breaches, 2006 (Source: Privacy Rights Clearinghouse)
Columns: Private Sector (126 incidents) / Public Sector, incl. military (114 incidents) / Higher Education (52 incidents) / Medical Centers (30 incidents)
Outside hackers: 15% / 13% / 40% / 3%
Insider malfeasance: 10% / 5% / 2% / 20%
Human/software incompetence: 20% / 44% / 21% / 20%
Theft (non-laptop): 15% / 17% / 17% / 17%
Laptop theft: 40% / 21% / 20% / 40%

Comparing these results with the previous statistics from the Educational Security Incidents (ESI) Year in Review – 2007, the picture of breaches caused by hackers at universities looks remarkably different. Privacy Rights Clearinghouse estimates that external criminal activity (hackers and theft) is largely prevalent, accounting for 77%, with internal problems accounting for 19%, whereas in the previous study the two classes were closer, with a prevalence of internal problems.

Hasan and Yurcik 14 analyzed data about privacy breaches that occurred in 2005 and 2006 by fusing the datasets maintained by Attrition.org and Privacy Rights Clearinghouse. The overall result partially clarifies the discrepancy between the previous two analyses. In particular, it emerges that, considering the number of privacy breaches, educational institutions are the most exposed, accounting for 35% of the total, followed by companies (25%) and state-level public agencies, medical centers, and banks (all close to 10%). However, considering personal records lost by sector, companies lead the score with 35.5%, followed by federal agencies with 29.5%, medical centers with 16%, and banks with 11.6%. Educational institutions account for just 2.7% of the total records lost. Therefore, though universities are victimized by huge numbers of external attacks that cause a continuous leakage of PII, companies and federal agencies are those that have suffered or provoked ruinous losses of enormous archives of PII. For these sectors, the impact of external Internet attacks has been matched or even exceeded by internal fraud or misconduct.

The case of the consumer data broker ChoicePoint, Inc., is perhaps the one that got the most publicity as an example of bad management practices that led to a huge privacy incident. 15 In 2006, the Federal Trade Commission charged that ChoicePoint violated the Fair Credit Reporting Act (FCRA) by furnishing consumer reports (credit histories) to subscribers who did not have a permissible purpose to obtain them and by failing to maintain reasonable procedures to verify both their identities and how they intended to use the information. 16

13 Privacy Rights Clearinghouse, Chronology of Data Breaches 2006: Analysis, 2007, www.privacyrights.org/ar/DataBreaches2006-Analysis.htm .
14 R. Hasan and W. Yurcik, "Beyond media hype: Empirical analysis of disclosed privacy breaches 2005 – 2006 and a dataset/database foundation for future work," Proceedings of the Workshop on the Economics of Securing the Information Infrastructure, Washington, 2006.
15 S. D. Scalet, "The five most shocking things about the ChoicePoint data security breach," CSO online, 2005, www.csoonline.com/article/220340 .
16 Federal Trade Commission (FTC), "ChoicePoint settles data security breach charges; to pay $10 million in civil penalties, $5 million for consumer redress," 2006, www.ftc.gov/opa/2006/01/choicepoint.shtm .

The opinion that threats due to hacking have been overhyped with respect to others is one shared by many in the security community.
In fact, it appears that root \ncauses of privacy breaches, physical thefts (of lap-\ntops, disks, and portable memories) and bad manage-\nment practices (sloppiness, incompetence, and scarce \nallocation of resources) need to be considered at least \nas serious as hacking. This is confirmed by the survey \n Enterprise@Risk: 2007 Privacy & Data Protection , 11 \nwhich concludes that most enterprise privacy programs \nare just in the early or middle stage of the maturity cycle. \nRequirements imposed by laws and regulations have the \n 13 Privacy Rights ClearingHouse, Chronology of Data Breaches \n2006: Analysis, 2007, www.privacyrights.org/ar/DataBreaches2006-\nAnalysis.htm . \n 14 R. Hasan, and W. Yurcik, “ Beyond media hype: Empirical \nanalysis of disclosed privacy breaches 2005 – 2006 and a dataset/\ndatabase foundation for future work, ” Proceedings of Workshop on the \nEconomics of Securing the Information Infrastructure , Washington, \n2006. \n TABLE 28.1 Root causes of data breaches, 2006 \n \n Private Sector \n(126 Incidents) \n Public Sector (Inc. \nMilitary; 114 Incidents) \n Higher Education \n(52 Incidents) \n Medical Centers \n(30 Incidents) \n Outside hackers \n 15% \n 13% \n 40% \n 3% \n Insider malfeasance \n 10% \n 5% \n 2% \n 20% \n Human/software incompetence \n 20% \n 44% \n 21% \n 20% \n Theft (non-laptop) \n 15% \n 17% \n 17% \n 17% \n Laptop theft \n 40% \n 21% \n 20% \n 40% \n Source: Privacy Rights Clearinghouse. \n 15 S. D. Scalet, “ The fi ve most shocking things about the \nChoicePoint data security breach, ” CSO online, 2005, www.csoonline.\ncom/article/220340 . \n 16 Federal Trade Commission (FTC), “ ChoicePoint settles data secu-\nrity breach charges; to pay $10 million in civil penalties, $5 million for \nconsumer redress, 2006, ” www.ftc.gov/opa/2006/01/choicepoint.shtm . \n" }, { "page_number": 507, "text": "PART | IV Privacy and Access Management\n474\n highest rates of implementation; operational processes, \nrisk assessment, and training programs are less adopted. \nIn addition, a minority of organizations seem able to \nimplement measurable controls, a deficiency that makes \nprivacy management intrinsically feeble. Training pro-\ngrams dedicated to privacy, security, and risk manage-\nment look at the weakest spot. Respondents report that \ntraining on privacy and security is offered just annually \n(about 28%), just once (about 36.5%), or never (about \n11%). Risk management is never the subject of training \nfor almost 28% of respondents. With such figures, it is \nno surprise if internal negligence due to unfamiliarity \nwith privacy problems or insufficient resources is such a \nrelevant root cause for privacy breaches. \n The ChoicePoint incident is paradigmatic of another \nimportant aspect that has been considered for analyzing \nprivacy issues. The breach involved 163,000 records and \nit was carried out with the explicit intention of unauthor-\nized parties to capture those records. However, actually, \nin just 800 cases (about 0.5%), that breach leads to iden-\ntity theft, a severe offense suffered by ChoicePoint cus-\ntomers. Some analysts have questioned the actual value \nof privacy, which leads us to discuss an important strand \nof research about economic aspects of privacy. \n 2. THE ECONOMICS OF PRIVACY \n The existence of strong economic factors that influence \nthe way privacy is managed, breached, or even traded \noff has long been recognized. 
17 , 18 However, it was with \nthe expansion of the online economy, in the 1990s and \n2000s, that privacy and economy become more and \nmore entangled. Many studies have been produced to \ninvestigate, from different perspectives and approaches, \nthe relation between the two. A comprehensive survey of \nworks that analyzed the economic aspects of privacy can \nbe found in 19 . \n Two issues among the many have gained most of the \nattention: assessing the value of privacy and examining \nto what extent privacy and business can coexist or are \ninevitably conflicting one with the other. For both issues \nthe debate is still open and no ultimate conclusion has \nbeen reached yet. \n The Value of Privacy \n To ascertain the value of privacy on the one hand, peo-\nple assign high value to their privacy when asked; on the \nother hand, privacy is more and more eroded and given \naway for small rewards. Several empirical studies have \ntested individuals ’ behavior when they are confronted \nwith the decision to trade off privacy for some rewards \nor incentives and when confronted with the decision \nto pay for protecting their personal information. The \napproaches to these studies vary, from investigating the \nactual economic factors that determine people’s choices \nto the psychological motivation and perception of risk or \nsafety. \n Syverson, 20 then Shostack and Syverson, 21 analyzed \nthe apparently irrational behavior of people who claim \nto highly value privacy and then, in practice, are keen \nto release personal information for small rewards. The \nusual conclusion is that people are not actually able to \nassess the value of privacy or that they are either irra-\ntional or unaware of the risks they are taking. Though \nthere is evidence that risks are often miscalculated or \njust unknown by most people, there are also some valid \nreasons that justify such paradoxical behavior. In par-\nticular, the analysis points to the cost of examining and \nunderstanding privacy policies and practices, which \noften make privacy a complex topic to manage. Another \nobservation regards the cost of protecting privacy, which \nis often inaccurately allocated. Better reallocation would \nalso provide government and business with incentives \nto increase rather than decrease protection of individual \nprivacy. \n One study that dates to 1999, by Culnan and \nArmstrong, 22 investigated how firms that demonstrate to \nadopt fair procedures and ethical behavior can mitigate \nconsumer concerns about privacy. Their finding was that \nconsumers who perceive that the collection of personal \ninformation is ruled by fair procedures are more willing \nto release their data for marketing use. This supported \nthe hypothesis that most privacy concerns are motivated \nby an unclear or distrustful stance toward privacy protec-\ntion that firms often exhibit. \n 17 J. Hirshleifer, “ The private and social value of information and the \nreward to inventive activity, ” American Economic Review , Vol. 61, pp. \n561 – 574, 1971. \n 18 R. A. Posner, “ The economics of privacy, ” American Economic \nReview , Vol. 71, No. 2, pp. 405 – 409, 1981. \n 19 K. L. Hui, and I. P. L. Png, “ Economics of privacy, ” in terrence \nhendershott (ed.), Handbooks in Information Systems, Vol. 1 , Elsevier, \npp. 471 – 497, 2006. \n 20 P. Syverson, “ The paradoxical value of privacy, ” Proceedings of the \n2nd Annual Workshop on Economics and Information Security (WEIS \n2003 ), 2003. \n 21 A. Shostack, and P. 
Syverson, “ What price privacy? (and why \nidentity theft is about neither identity nor theft), ” In Economics of \nInformation Security, Chapter 11, Kluwer Academic Publishers, 2004. \n 22 M. Culnan, and P. Armstrong, “ Information privacy concerns, \nprocedural fairness, and impersonal trust: an empirical evidence, ” \n Organization Science , Vol. 10, No. 1, pp. 104 – 1, 51999. \n" }, { "page_number": 508, "text": "Chapter | 28 Net Privacy\n475\n In 2007, Tsai et al. 23 published research that addresses \nmuch the same issue. The effect of privacy concerns \non online purchasing decisions has been tested and the \nresults are again that the role of incomplete information \nin privacy-relevant decisions is essential. Consumers \nare sensitive to the way privacy is managed and to \nwhat extent a merchant is trustful. However, in another \nstudy, Grosslack and Acquisti 24 found that individuals \nalmost always choose to sell their personal information \nwhen offered small compensation rather than keep it \nconfidential. \n Hann, Lee, Hui, and Png have carried out a more \nanalytic work in two studies about online information \nprivacy. This strand of research 25 estimated how much \nprivacy is worth for individuals and how economic \nincentives, such as monetary rewards and future conven-\nience, could influence such values. Their main findings \nare that individuals do not esteem privacy as an absolute \nvalue; rather, information is available to trade off for \neconomic benefits, and that improper access and second-\nary use of personal information are the most important \nclasses of privacy violation. In the second work, 26 the \nauthors considered firms that tried to mitigate privacy \nconcerns by offering privacy policies regarding the han-\ndling and use of personal information and by offering \nbenefits such as financial gains or convenience. These \nstrategies have been analyzed in the context of the infor-\nmation processing theory of motivation, which considers \nhow people form expectations and make decisions about \nwhat behavior to choose. Again, whether a firm may \noffer only partially complete privacy protection or some \nbenefits, economic rewards and convenience have been \nfound to be strong motivators for increasing individuals ’ \nwillingness to disclose personal information. \n Therefore, most works seems to converge to the same \nconclusion: Whether individuals react negatively when \nincomplete or distrustful information about privacy \nis presented, even a modest monetary reward is often \nsufficient for disclosing one’s personal information. \n Privacy and Business \n The relationship between privacy and business has been \nexamined from several angles by considering which \nincentives could be effective for integrating privacy with \nbusiness processes and, instead, which disincentives \nmake business motivations to prevail over privacy. \n Froomkin 27 analyzed what he called “ privacy-destroy-\ning technologies ” developed by governments and busi-\nnesses. Examples of such technologies are collections \nof transactional data, automated surveillance in public \nplaces, biometric technologies, and tracking mobile \ndevices and positioning systems. To further aggravate \nthe impact on privacy of each one of these technologies, \ntheir combination and integration result in a cumulative \nand reinforcing effect. On this premise, Froomkin intro-\nduces the role that legal responses may play to limit this \napparently unavoidable “ death of privacy. 
” \n Odlyzko 28 , 29 , 30 , 31 is a leading author that holds a pes-\nsimistic view of the future of privacy, calling “ unsolv-\nable ” the problem of granting privacy because of price \ndiscrimination pressures on the market. His argument \nis based on the observation that the markets as a whole, \nespecially Internet-based markets, have strong incentives \nto price discriminate, that is, to charge varying prices \nwhen there are no cost justifications for the differences. \nThis practice, which has its roots long before the advent \nof the Internet and the modern economy — one of the \nmost illustrative examples is 19 th -century railroad pric-\ning practices — provides relevant economic benefits to \nthe vendors and, from a mere economic viewpoint, to \nthe efficiency of the economy. In general, charging dif-\nferent prices to different segments of the customer base \n 23 J. Tsai, S. Egelman, L. Cranor, and A. Acquisti, “ The effect of \nonline privacy information on purchasing behavior: an experimental \nstudy, ” Workshop on Economics and Information Security (WEIS 2007), \n2007. \n 24 J. Grossklags, and A. Acquisti, “ When 25 cents is too much: an \nexperiment on willingness-to-sell and willingness-to-protect per-\nsonal information, ” Proceedings of Workshop on the Economics of \nInformation Security (WEIS) , Pittsburgh, 2007. \n 25 Il-H. Hann, K. L. Hui, T. S. Lee, and I. P. L. Png, “ Online infor-\nmation privacy: measuring the cost-benefi t trade-off, ” Proceedings, \n23rd International Conference on Information Systems , Barcelona, \nSpain, 2002. \n 26 Il-H. Hann, K. L. Hui, T. S. Lee, and I. P. L. Png, “ Analyzing \nonline information privacy concerns: an information processing theory \napproach, ” Journal of Management Information Systems , Vol. 24, No. 2, \npp. 13 – 42, 2007. \n 27 A. M. Froomkin, “ The death of privacy?, ” 52 Stanford Law Review , \npp. 1461 – 1469, 2000. \n 28 Odlyzko, A. M., “ Privacy, economics, and price discrimination \non the internet, ” Proceedings of the Fifth International Conference on \nElectronic Commerce (ICEC2003) , N. Sadeh (ed.), ACM, pp. 355 – 366, \n2003. \n 29 A. M. Odlyzko, “ The unsolvable privacy problem and its implica-\ntions for security technologies, ” Proceedings of the 8th Australasian \nConference on Information Security and Privacy (ACISP 2003) , \nR. Safavi-Naini and J. Seberry (eds.), Lecture Notes in Computer \nScience 2727, Springer, pp. 51 – 54, 2003. \n 30 A. M. Odlyzko, “ The evolution of price discrimination in trans-\nportation and its implications for the Internet, ” Review of Network \nEconomics , Vol. 3, No. 3, pp. 323 – 346, 2004. \n 31 A. M. Odlyzko, “ Privacy and the clandestine evolution of ecom-\nmerce, ” Proceedings of the Ninth International Conference on Electronic \nCommerce (ICEC2007) , ACM, 2007. \n" }, { "page_number": 509, "text": "PART | IV Privacy and Access Management\n476\n permits vendors to complete transactions that would \nnot take place otherwise. On the other hand, the public \nhas often contrasted plain price discrimination prac-\ntices since they perceive them as unfair. For this reason, \nmany less evident price discrimination practices are in \nplace today, among which bundling is one of the most \nrecurrent. Privacy of actual and prospective customers is \nthreatened by such economic pressures toward price dis-\ncrimination because the more the customer base can be \nsegmented — and thus known with greatest detail — the \nbetter efficiency is achieved for vendors. 
The Internet-\nbased market has provided a new boost to such prac-\ntices and to the acquisition of personal information and \nknowledge of customer habits. \n Empirical studies seem to confirm such pessimistic \nviews. A first review of the largest privately held compa-\nnies listed in the Forbes Private 50 32 and a second study \nof firms listed in the Fortune 500 33 demonstrate a poor \nstate of privacy policies adopted in such firms. In gen-\neral, privately held companies are more likely to lack \nprivacy policies than public companies and are more \nreluctant to publicly disclose their procedures relative \nto fair information practices. Even the larger set of the \nFortune 500 firms exhibited a large majority of firms \nthat are just mildly addressing privacy concerns. \n More pragmatically, some analyses have pointed out \nthat given the current privacy concerns, an explicitly fair \nmanagement of customers ’ privacy may become a positive \ncompetitive factor. 34 Similarly, Hui et al. 35 have identified \nseven types of benefits that Internet businesses can provide \nto consumers in exchange for their personal information. \n 3. PRIVACY-ENHANCING \nTECHNOLOGIES \n Technical improvements of Web and location tech-\nnologies have fostered the development of online \napplications that use the private information of users \n(including physical position of individuals) to offer \nenhanced services. The increasing amount of available \npersonal data and the decreasing cost of data storage and \nprocessing make it technically possible and economically \njustifiable to gather and analyze large amounts of data. \nAlso, information technology gives organizations the power \nto manage and disclose users ’ personal information with-\nout restrictions. In this context, users are much more con-\ncerned about their privacy, and privacy has been recognized \nas one of the main reasons that prevent users from using \nthe Internet for accessing online services. Today’s global \nnetworked infrastructure requires the ability for parties to \ncommunicate in a secure environment while preserving \ntheir privacy. Support for digital identities and definition \nof privacy-enhanced protocols and techniques for their \nmanagement and exchange then become fundamental \nrequirements. \n A number of useful privacy-enhancing technolo-\ngies (PETs) have been developed for dealing with pri-\nvacy issues, and previous works on privacy protection \nhave focused on a wide variety of topics. 36 , 37 , 38 , 39 , 40 , 41 In \nthis section, we discuss the privacy protection problem \nin three different contexts. We start by describing lan-\nguages for the specification of access control policies \nand privacy preferences. We then describe the problem \nof data privacy protection, giving a brief description of \nsome solutions. Finally, we analyze the problem of pro-\ntecting privacy in mobile and pervasive environments. \n Languages for Access Control and\nPrivacy Preferences \n Access control systems have been introduced for regu-\nlating and protecting access to resources and data owned \n 32 A. R. Peslak, “ Privacy policies of the largest privately held com-\npanies: a review and analysis of the Forbes private 50, ” Proceedings of \nthe ACM SIGMIS CPR Conference on Computer Personnel Research , \nAtlanta, 2005. \n 33 K. S. Schwaig, G. C. Kane, and V. C. Storey, “ Compliance to the \nfair information practices: how are the fortune 500 handling online pri-\nvacy disclosures?, ” Inf. Manage. , Vol. 43, No. 7, pp. 
805 – 820, 2006. \n 34 M. Brown, and R. Muchira, “ Investigating the relationship between \ninternet privacy concerns and online purchase behavior, ” Journal \nElectron. Commerce Res , Vol. 5, No. 1, pp. 62 – 70, 2004. \n 35 K. L. Hui, B. C. Y. Tan, and C. Y. Goh, “ Online information dis-\nclosure: Motivators and measurements, ” ACM Transaction on Internet \nTechnologies , Vol. 6, No. 4, pp. 415 – 441, 2006. \n 36 C. A. Ardagna, E. Damiani, S. De Capitani di Vimercati, and P. \nSamarati, “ Toward privacy-enhanced authorization policies and lan-\nguages, ” Proceedings of the 19th IFIP WG11.3 Working Conference on \nData and Application Security , Storrs, CT, pp. 16 – 27, 2005. \n 37 R. Chandramouli, “ Privacy protection of enterprise information \nthrough inference analysis, ” Proceedings of IEEE 6th International \nWorkshop on Policies for Distributed Systems and Networks (POLICY \n2005) , Stockholm, Sweden, pp. 47 – 56, 2005. \n 38 L. F. Cranor, Web Privacy with P3P , O’Reilly & Associates, 2002. \n 39 G. Karjoth, and M. Schunter, “ Privacy policy model for enter-\nprises, ” Proceedings of the 15th IEEE Computer Security Foundations \nWorkshop , Cape Breton, Nova Scotia, pp. 271 – 281, 2002. \n 40 B. Thuraisingham, “ Privacy constraint processing in a pri-\nvacy-enhanced database management system, ” Data & Knowledge \nEngineering , Vol. 55, No. 2, pp. 159 – 188, 2005. \n 41 M. Youssef, V. Atluri, and N. R. Adam, “ Preserving mobile cus-\ntomer privacy: An access control system for moving objects and cus-\ntomer profi les, ” Proceedings of the 6th International Conference on \nMobile Data Management (MDM 2005) , Ayia Napa, Cyprus, pp. 67 – 76, \n2005. \n" }, { "page_number": 510, "text": "Chapter | 28 Net Privacy\n477\n by parties. However, the importance gained by privacy \nrequirements has brought with it the definition of access \ncontrol models that are enriched with the ability of sup-\nporting privacy requirements. These enhanced access \ncontrol models encompass two aspects: to guarantee \nthe desired level of privacy of information exchanged \nbetween different parties by controlling the access to \nservices/resources, and to control all secondary uses of \ninformation disclosed for the purpose of access control \nenforcement. \n In this context, many languages for access con-\ntrol policies and privacy preferences specification have \nbeen defined, among which eXtensible Access Control \nMarkup Language (XACML), 42 Platform for Privacy \nPreferences Project (P3P), 38, 43 and Enterprise Privacy \nAuthorization Language (EPAL) 44 , 45 stand out. \n The eXtensible Access Control Markup Language \n(XACML), 42 which is the result of a standardization effort \nby OASIS, proposes an XML-based language to express \nand interchange access control policies. It is not specifi-\ncally designed for managing privacy, but it represents a \nrelevant innovation in the field of access control policies \nand has been used as the basis for following privacy-aware \nauthorization languages. 
Main features of XACML are: \n(1) policy combination, a method for combining policies \non the same resource independently specified by different \nentities; (2) combining algorithms, different algorithms \nrepresenting ways of combining multiple decisions into a \nsingle decision; (3) attribute-based restrictions, the defini-\ntion of policies based on properties associated with sub-\njects and resources rather than their identities; (4) multiple \nsubjects, the definition of more than one subject relevant \nto a decision request; (5) policy distribution, policies can \nbe defined by different parties and enforced at different \nenforcement points; (6) implementation independence, an \nabstraction layer that isolates the policy-writer from the \nimplementation details; and (7) obligations, 46 a method \nfor specifying the actions that must be fulfilled in con-\njunction with the policy enforcement. \n Platform for Privacy Preferences Project (P3P) 38,43 is \na World Wide Web Consortium (W3C) project aimed at \nprotecting the privacy of users by addressing their need \nto assess that the privacy practices adopted by a server \nprovider comply with users ’ privacy requirements. P3P \nprovides an XML-based language and a mechanism for \nensuring that users can be informed about privacy poli-\ncies of the server before the release of personal informa-\ntion. Therefore, P3P allows Web sites to declare their \nprivacy practices in a standard and machine-readable \nXML format known as P3P policy. A P3P policy contains \nthe specification of the data it protects, the data recipients \nallowed to access the private data, consequences of data \nrelease, purposes of data collection, data retention policy, \nand dispute resolution mechanisms. Supporting privacy \npreferences and policies in Web-based transactions allows \nusers to automatically understand and match server prac-\ntices against their privacy preferences. Thus, users need \nnot read the privacy policies at every site they interact \nwith, but they are always aware of the server practices in \ndata handling. In summary, the goal of P3P is twofold: \nIt allows Web sites to state their data-collection practices \nin a standardized, machine-readable way, and it provides \nusers with a solution to understand what data will be col-\nlected and how those data will be used. \n The corresponding language that would allow users \nto specify their preferences as a set of preference rules is \ncalled a P3P Preference Exchange Language (APPEL). 47 \nAPPEL can be used by users ’ agents to reach auto-\nmated or semiautomated decisions regarding the accept-\nability of privacy policies from P3P-enabled Web sites. \nUnfortunately, interactions between P3P and APPEL 48 \nhad shown that users can explicitly specify just what is \nunacceptable in a policy, whereas the APPEL syntax is \ncumbersome and error prone for users. \n Finally, Enterprise Privacy Authorization Language \n(EPAL) 44,45 is another XML-based language for speci-\nfying and enforcing enterprise-based privacy policies. \nEPAL is specifically designed to enable organizations to \ntranslate their privacy policies into IT control statements \nand to enforce policies that may be declared and com-\nmunicated according to P3P specifications. 
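To make the policy/preference matching idea more concrete, the following toy Python sketch mimics, with plain dictionaries rather than the actual P3P or APPEL XML vocabularies, how a user agent might compare a site's declared practices against a user's acceptable practices. All names and values here (site_policy, user_preferences, acceptable) are illustrative assumptions, not part of the standards.

```python
# Toy, simplified illustration of P3P-style policy/preference matching.
# The dictionaries below are NOT the real P3P or APPEL vocabularies; they
# only mimic matching a site's declared practices against user preferences.

site_policy = {
    "data": ["email", "purchase_history"],
    "purposes": {"order_processing", "marketing"},
    "recipients": {"ours", "third_parties"},
    "retention": "indefinitely",
}

user_preferences = {
    "allowed_purposes": {"order_processing"},
    "allowed_recipients": {"ours"},
    "allowed_retention": {"stated_purpose", "legal_requirement"},
}

def acceptable(policy, prefs):
    """Return (ok, reasons): ok is True only if every declared practice
    is covered by the user's stated preferences."""
    reasons = []
    extra_purposes = policy["purposes"] - prefs["allowed_purposes"]
    if extra_purposes:
        reasons.append(f"unacceptable purposes: {sorted(extra_purposes)}")
    extra_recipients = policy["recipients"] - prefs["allowed_recipients"]
    if extra_recipients:
        reasons.append(f"unacceptable recipients: {sorted(extra_recipients)}")
    if policy["retention"] not in prefs["allowed_retention"]:
        reasons.append(f"unacceptable retention: {policy['retention']}")
    return (not reasons), reasons

ok, reasons = acceptable(site_policy, user_preferences)
print("policy acceptable?", ok)
for r in reasons:
    print(" -", r)
```

In this toy example the agent would reject the policy, for the same reason an APPEL rule set would: the declared marketing purpose, third-party recipients, and indefinite retention are not among the practices the user accepts.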
\n In this scenario, the need for access control frame-\nworks that integrate policy evaluation and privacy \n 42 eXtensible Access Control Markup Language (XACML) Version \n2.0, \nFebruary \n2005, \n http://docs.oasis-open.org/xacml/2.0/access_\ncontrol-xacml-2.0-core-spec-os.pdf . \n 43 World Wide Web Consortium (W3C), Platform for privacy prefer-\nences (P3P) project, 2002, www.w3.org/TR/P3P/ . \n 44 P. Ashley, S. Hada, G. Karjoth, and M. Schunter, “ E-P3P privacy \npolicies and privacy authorization, ” Proceedings of the ACM Workshop \non Privacy in the Electronic Society (WPES 2002) , Washington, \npp. 103 – 109, 2002. \n 45 P. Ashley, S. Hada, G. Karjoth, C. Powers, and M. Schunter, \n “ Enterprise privacy authorization language (epal 1.1), ” 2003, www.\nzurich.ibm.com/security/enterprise-privacy/epal . \n 46 C. Bettini, S. Jajodia, X. S. Wang, and D. Wijesekera, “ Provisions \nand obligations in policy management and security applications, ” \n Proceedings of 28th Conference Very Large Data Bases (VLDB ’02) , \nHong Kong, pp. 502 – 513, 2002. \n 47 World Wide Web Consortium (W3C). A P3P Preference Exchange \nLanguage 1.0 (APPEL1.0), 2002, www.w3.org/TR/P3P-preferences/ . \n 48 R. Agrawal, J., Kiernan, R., Srikant, and Y. Xu, “ An XPath based \npreference language for P3P, ” Proceedings of the 12th International \nWorld Wide Web Conference , Budapest, Hungary, pp. 629 – 639, 2003. \n" }, { "page_number": 511, "text": "478\n functionalities arose. A first attempt to provide a uniform \nframework for regulating information release over the \nWeb has been presented by Bonatti and Samarati. 49 Later \na solution that introduced a privacy-aware access control \nframework was defined by Ardagna et al. 50 This framework \nallows the integration, evaluation, and enforcement of poli-\ncies regulating access to service/data and release of personal \nidentifiable information, respectively, and provides a mech-\nanism to define constraints on the secondary use of personal \ndata for the protection of users ’ privacy. In particular, the \nfollowing types of privacy policies have been specified: \n ● Access control policies. These govern access/release \nof data/services managed by the party (as in tradi-\ntional access control). \n ● Release policies. These govern release of properties/\ncredentials/personal identifiable information (PII) of \nthe party and specify under which conditions they \ncan be released. \n ● Data handling policies. These define how personal \ninformation will be (or should be) dealt with at the \nreceiving parties. 51 \n An important feature of this framework is to support \nrequests for certified data, issued and signed by trusted \nauthorities, and uncertified data, signed by the owner \nitself. It also allows defining conditions that can be sat-\nisfied by means of zero-knowledge proof 52 , 53 and based \non physical position of the users. 54 In the context of the \nPrivacy and Identity Management for Europe (PRIME), 55 \na European Union project for which the goal is the devel-\nopment of privacy-aware solutions and frameworks has \nbeen created . \n Data Privacy Protection \n The concept of anonymity was first introduced in the \ncontext of relational databases to avoid linking between \npublished data and users ’ identity. Usually, to protect user \nanonymity, data holders encrypt or remove explicit iden-\ntifiers such as name and Social Security number (SSN). \nHowever, data deidentification does not provide full \nanonymity. 
Released data can in fact be linked to other \npublicly available information to reidentify users and to \ninfer data that should not be available to the recipients. \nFor instance, a set of anonymized data could contain \nattributes that almost uniquely identify a user, such as, \nrace, date of birth, and ZIP code. Table 28.2A and Table \n28.2B show an example of where the anonymous medical \ndata contained in a table are linked with the census data \nto reidentify users. It is easy to see that in Table 28.2a \nthere is a unique tuple with a male born on 03/30/1938 \nand living in the area with ZIP code 10249 . As a con-\nsequence, if this combination of attributes is also unique \nin the census data in Table 28.2b , John Doe is identified, \nrevealing that he suffers from obesity. \n If in the past limited interconnectivity and limited com-\nputational power represented a form of protection against \ninference processes over large amounts of data, today, with \nthe advent of the Internet, such an assumption no longer \nholds. Information technology in fact gives organizations \nthe power to gather and manage vast amounts of personal \ninformation. \n 49 P. Bonatti, and P. Samarati, “ A unifi ed framework for regulat-\ning access and information release on the web, ” Journal of Computer \nSecurity , Vol. 10, No. 3, pp. 241 – 272, 2002. \n 50 C. A. Ardagna, M. Cremonini, S. De Capitani di Vimercati, and \nP. Samarati, “ A privacy-aware access control system, ” Journal of \nComputer Security , 2008. \n 51 C. A. Ardagna, S. De Capitani di Vimercati, and P. Samarati, \n “ Enhancing user privacy through data handling policies, ” Proceedings \nof the 20th Annual IFIP WG 11.3 Working Conference on Data and \nApplications Security , Sophia Antipolis, France, pp. 224 – 236, 2006. \n 52 J. Camenisch, and A. Lysyanskaya, “ An effi cient system for non-\ntransferable anonymous credentials with optional anonymity revo-\ncation, ” Proceedings of the International Conference on the Theory \nand Application of Cryptographic Techniques (EUROCRYPT 2001) , \nInnsbruck, Austria, pp. 93 – 118, 2001. \n 53 J. Camenisch, and E. Van Herreweghen, “ Design and implemen-\ntation of the idemix anonymous credential system, ” Proceedings of \nthe 9th ACM Conference on Computer and Communications Security \n(CCS 2002) , Washington, pp. 21 – 30, 2002. \n 54 C. A. Ardagna, M. Cremonini, E. Damiani, S. De Capitani di \nVimercati, and P. Samarati, “ Supporting location-based conditions \nin access control policies, ” Proceedings of the ACM Symposium on \nInformation, Computer and Communications Security (ASIACCS ’06) , \nTaipei, pp. 212 – 222, 2006. \n TABLE 28.2A Census Data \n SSN \n Name \n Address \n City \n Date of Birth \n ZIP \n … \n … \n … \n … \n … \n … \n … \n … \n … \n John Doe \n … \n New York \n 03/30/1938 \n 10249 \n … \n … \n … \n … \n … \n … \n … \n … \nPART | I Overview of System and Network Security: A Comprehensive Introduction\n 55 Privacy and Identity Management for Europe (PRIME), 2004, \n www.prime-project.eu.org/ . \n" }, { "page_number": 512, "text": "479\n To address the problem of protecting anonymity while \nreleasing microdata, the concept of k- anonymity has been \ndefined. K- anonymity means that the observed data cannot \nbe related to fewer than k respondents. 56 Key to achiev-\ning k- anonymity is the identification of a quasi-identifier , \nwhich is the set of attributes in a dataset that can be linked \nwith external information to reidentify the data owner. 
It follows that, for each release of data, every combination of values of the quasi-identifier must be indistinctly matched to at least k tuples.

Two approaches have been adopted to achieve k-anonymity: generalization and suppression. These approaches share the important feature that the truthfulness of the information is preserved; that is, no false information is released.

In more detail, the generalization process generalizes some of the values stored in the table. For instance, considering the ZIP code attribute in Table 28.2B and supposing for simplicity that it represents a quasi-identifier, the ZIP code can be generalized by dropping, at each step of generalization, the least significant digit. As another example, the date of birth can be generalized by first removing the day, then the month, and eventually by generalizing the year.

The suppression process, on the contrary, removes some tuples from the table. Again considering Table 28.2B, the ZIP codes, and a k-anonymity requirement of k = 2, it is clear that all tuples already satisfy the k = 2 requirement except for the last one. In this case, to preserve k = 2, the last tuple could be suppressed.

Research on k-anonymity has been particularly rich in recent years. Samarati 56 presented an algorithm based on generalization hierarchies and suppression that calculates the minimal generalization; the algorithm relies on a binary search on the domain generalization hierarchy to avoid an exhaustive visit of the whole generalization space. Bayardo and Agrawal 57 developed an optimal bottom-up algorithm that starts from a fully generalized table (with all tuples equal) and then specializes the dataset into a minimal k-anonymous table. LeFevre et al. 58 are the authors of Incognito, a framework for providing k-minimal generalization whose algorithm is based on a bottom-up aggregation along dimensional hierarchies and a priori aggregate computation. The same authors 59 also introduced Mondrian k-anonymity, which models the tuples as points in d-dimensional space and applies a generalization process that consists of finding the minimal multidimensional partitioning that satisfies the k-anonymity requirement.

TABLE 28.2B User reidentification: anonymous medical data (SSN and Name removed)
Columns: Date of Birth / Sex / ZIP / Marital Status / Disease
09/11/1984 / M / 10249 / Married / HIV
09/01/1978 / M / 10242 / Single / HIV
01/06/1959 / F / 10242 / Married / Obesity
01/23/1954 / M / 10249 / Single / Hypertension
03/15/1953 / F / 10212 / Divorced / Hypertension
03/30/1938 / M / 10249 / Single / Obesity
09/18/1935 / F / 10212 / Divorced / Obesity
03/15/1933 / F / 10252 / Divorced / HIV

56 P. Samarati, "Protecting respondents' identities in microdata release," IEEE Transactions on Knowledge and Data Engineering, Vol. 13, No. 6, pp. 1010 – 1027, 2001.
57 R. J. Bayardo and R. Agrawal, "Data privacy through optimal k-anonymization," Proceedings of the 21st International Conference on Data Engineering (ICDE'05), Tokyo, pp. 217 – 228, 2005.
58 K. LeFevre, D. J. DeWitt, and R. Ramakrishnan, "Incognito: Efficient full-domain k-anonymity," Proceedings of the 24th ACM SIGMOD International Conference on Management of Data, Baltimore, pp. 49 – 60, 2005.
59 K. LeFevre, D. J. DeWitt, and R. Ramakrishnan, "Mondrian multidimensional k-anonymity," Proceedings of the 22nd International Conference on Data Engineering (ICDE'06), Atlanta, 2006.
60 A. Machanavajjhala, J. Gehrke, D. Kifer, and M. Venkitasubramaniam, "l-diversity: Privacy beyond k-anonymity," Proceedings of the International Conference on Data Engineering (ICDE'06), Atlanta, 2006.

Although k-anonymity has clear advantages for protecting respondents' privacy, some weaknesses have been demonstrated. Machanavajjhala et al. 60 identified two successful attacks on k-anonymous tables: the homogeneity attack and the background knowledge attack. To explain the homogeneity attack, suppose that a k-anonymous table contains a single sensitive attribute and that all tuples with a given quasi-identifier value also share the same value for that sensitive attribute. As a consequence, if the attacker knows the quasi-identifier value of a respondent, the attacker is able to learn the value of the sensitive attribute associated with that respondent. For instance, consider the 2-anonymous table shown in Table 28.3 and assume that an attacker knows that Alice was born in 1966 and lives in the 10212 ZIP code. Since all tuples with quasi-identifier ⟨1966, F, 10212⟩ suffer from anorexia, the attacker can infer that Alice suffers from anorexia. In the background knowledge attack, the attacker exploits some a priori knowledge to infer personal information. For instance, suppose that an attacker knows that Bob has quasi-identifier ⟨1984, M, 10249⟩ and that Bob is overweight. In this case, from Table 28.3, the attacker can infer that Bob suffers from HIV.

To neutralize these attacks, the concept of l-diversity has been introduced. 60 In particular, a cluster of tuples with the same quasi-identifier is said to be l-diverse if it contains at least l different values for the sensitive attribute (disease, in the example in Table 28.3). If a k-anonymous table is l-diverse, the homogeneity attack is ineffective, since each block of tuples has at least l ≥ 2 distinct values for the sensitive attribute. Also, the background knowledge attack becomes more complex as l increases.

Although l-diversity protects data against attribute disclosure, it leaves space for more sophisticated attacks based on the distribution of values inside clusters of tuples with the same quasi-identifier. 61 To prevent this kind of attack, the t-closeness requirement has been defined. In particular, a cluster of tuples with the same quasi-identifier is said to satisfy t-closeness if the distance between the probability distribution of the sensitive attribute in the cluster and the distribution in the original table is lower than t. A table satisfies t-closeness if all its clusters satisfy t-closeness.

In the next section, where the problem of location privacy protection is analyzed, we also discuss how the k-anonymity principle has been adapted to a pervasive and distributed scenario in which users move in the field carrying mobile devices.
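As a concrete illustration of these definitions, the following minimal Python sketch (a toy example of our own, not any of the algorithms cited above) checks k-anonymity and l-diversity over records shaped like those in Table 28.3 and shows the ZIP-code generalization step discussed earlier. The record layout and function names are assumptions made for the example.

```python
from collections import defaultdict

# Toy records modeled after Table 28.3; the sensitive attribute is "disease".
records = [
    {"yob": 1984, "sex": "M", "zip": "10249", "disease": "HIV"},
    {"yob": 1984, "sex": "M", "zip": "10249", "disease": "Anorexia"},
    {"yob": 1984, "sex": "M", "zip": "10249", "disease": "HIV"},
    {"yob": 1966, "sex": "F", "zip": "10212", "disease": "Anorexia"},
    {"yob": 1966, "sex": "F", "zip": "10212", "disease": "Anorexia"},
]

QUASI_IDENTIFIER = ("yob", "sex", "zip")

def generalize_zip(zipcode, level=1):
    """One generalization step: drop the `level` least significant digits."""
    return zipcode[:len(zipcode) - level] + "*" * level

def clusters(rows):
    """Group rows by their quasi-identifier value."""
    groups = defaultdict(list)
    for row in rows:
        groups[tuple(row[a] for a in QUASI_IDENTIFIER)].append(row)
    return groups

def is_k_anonymous(rows, k):
    """Every quasi-identifier combination must occur at least k times."""
    return all(len(g) >= k for g in clusters(rows).values())

def is_l_diverse(rows, l, sensitive="disease"):
    """Every cluster must contain at least l distinct sensitive values."""
    return all(len({r[sensitive] for r in g}) >= l
               for g in clusters(rows).values())

print(generalize_zip("10249"))           # '1024*'
print(is_k_anonymous(records, k=2))      # True: both clusters have >= 2 tuples
print(is_l_diverse(records, l=2))        # False: the (1966, F, 10212) cluster
                                         # only contains "Anorexia" (homogeneity attack)
```

The failed l-diversity check corresponds exactly to the homogeneity attack against Alice described above: the table is 2-anonymous, yet one cluster still reveals its single sensitive value.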
\n Privacy for Mobile Environments \n The widespread diffusion of mobile devices and the \naccuracy and reliability achieved by positioning tech-\nniques make available a great amount of location infor-\nmation about users. Such information has been used for \ndeveloping novel location-based services. However, if \non one side such a pervasive environment provides many \nadvantages and useful services to the users, on the other \nside privacy concerns arise, since users could be the tar-\nget of fraudulent location-based attacks. The most pes-\nsimistic have even predicted that the unrestricted and \nunregulated availability of location technologies and \ninformation could lead to a “ Big Brother ” society domi-\nnated by total surveillance of individuals. \n The concept of location privacy can be defined as the \nright of individuals to decide how, when, and for which \npurposes their location information could be released \nto other parties. The lack of location privacy protection \ncould be exploited by adversaries to perform various \nattacks. 62 \n ● Unsolicited advertising, when the location of a user \ncould be exploited, without her consent, to provide \nadvertisements of products and services available \nnearby the user position \n ● Physical attacks or harassment , when the location \nof a user could allow criminals to carry out physical \nassaults on specific individuals \n ● User profiling , when the location of a user could \nbe used to infer other sensitive information, such \nas state of health, personal habits, or professional \nduties, by correlating visited places or paths \n ● Denial of service , when the location of a user could \nmotivate an access denial to services under some \ncircumstances \n 61 N. Li, T. Li, and S. Venkatasubramanian, “ t-closeness: Privacy \nbeyond k-anonymity and l-diversity, ” Proceedings of the 23nd \nInternational Conference on Data Engineering , Istanbul, Turkey, \npp. 106 – 115, 2007. \n 62 M. Duckham, and L. Kulik, “ Location privacy and location-aware \ncomputing, ” Dynamic & Mobile GIS: Investigating Change in Space \nand Time , pp. 34 – 51, Taylor & Francis, 2006. \n TABLE 28.3 An example of a 2-Anonymous table \n Year of Birth \n Sex \n ZIP \n Disease \n 1984 \n M \n 10249 \n HIV \n 1984 \n M \n 10249 \n Anorexia \n 1984 \n M \n 10249 \n HIV \n 1966 \n F \n 10212 \n Anorexia \n 1966 \n F \n 10212 \n Anorexia \n … \n … \n … \n … \n" }, { "page_number": 514, "text": "Chapter | 28 Net Privacy\n481\n A further complicating factor is that location privacy \ncan assume several meanings and introduce different \nrequirements, depending on the scenario in which the \nusers are moving and on the services the users are inter-\nacting with. The following categories of location privacy \ncan then be identified: \n ● Identity privacy protects the identities of the users \nassociated with or inferable from location \ninformation. To this purpose, protection techniques \naim at minimizing the disclosure of data that can let \nan attacker infer a user identity. Identity privacy is \nsuitable in application contexts that do not require \nthe identification of the users for providing a service. \n ● Position privacy protects the position information \nof individual users by perturbing corresponding \ninformation and decreasing the accuracy of location \ninformation. Position privacy is suitable for \nenvironments where users ’ identities are required for \na successful service provisioning. 
A technique that \nmost solutions exploit, either explicitly or implicitly, \nconsists of reducing the accuracy by scaling a \nlocation to a coarser granularity (from meters to \nhundreds of meters, from a city block to the whole \ntown, and so on). \n ● Path privacy protects the privacy of information \nassociated with individuals movements, such as the \npath followed while travelling or walking in an urban \narea. Several location-based services (personal navi-\ngation systems) could be exploited to subvert path \nprivacy or to illicitly track users. \n Since location privacy definition and requirements dif-\nfer depending on the scenario, no single technique is able \nto address the requirements of all the location privacy cat-\negories. Therefore, in the past, the research community \nfocusing on providing solutions for the protection of loca-\ntion privacy of users has defined techniques that can be \ndivided into three main classes: anonymity-based , obfus-\ncation-based , and policy-based techniques. These classes \nof techniques are partially overlapped in scope and could \nbe potentially suitable to cover requirements coming from \none or more of the categories of location privacy. It is easy \nto see that anonymity-based and obfuscation-based tech-\nniques can be considered dual categories. Anonymity-\nbased techniques have been primarily defined to protect \nidentity privacy and are not suitable for protecting position \nprivacy, whereas obfuscation-based techniques are well \nsuited for position protection and not appropriate for iden-\ntity protection. Anonymity-based and obfuscation-based \ntechniques could also be exploited for protecting path pri-\nvacy. Policy-based techniques are in general suitable for \nall the location privacy categories, although they are often \ndifficult for end users to understand and manage. \n Among the class of techniques just introduced, current \nresearch on location privacy has mainly focused on sup-\nporting anonymity and partial identities. Beresford and \nStajano 63 , 64 proposed a method, called mix zones , which \nuses an anonymity service based on an infrastructure that \ndelays and reorders messages from subscribers. Within a \nmix zone (i.e., an area where a user cannot be tracked), \na user is anonymous in the sense that the identities of all \nusers coexisting in the same zone are mixed and become \nindiscernible. Other works are based on the concept of \n k- anonymity. Bettini et al. 65 designed a framework able to \nevaluate the risk of sensitive location-based information \ndissemination. Their proposal puts forward the idea that the \ngeo-localized history of the requests submitted by a user \ncan be considered as a quasi-identifier that can be used \nto discover sensitive information about the user. Gruteser \nand Grunwald 66 developed a middleware architecture \nand an adaptive algorithm to adjust location information \nresolution, in spatial or temporal dimensions, to com-\nply with users ’ anonymity requirements. To this purpose, \nthe authors introduced the concepts of spatial cloaking. \nSpatial cloaking guarantees the k- anonymity by enlarg-\ning the area where a user is located to an area containing k \nindistinguishable users. Gedik and Liu 67 described another \n k- anonymity model aimed at protecting location privacy \nagainst various privacy threats. 
In their proposal, each user \nis able to define the minimum level of anonymity and the \nmaximum acceptable temporal and spatial resolution for \nher location measurement. Mokbel et al. 68 designed a \nframework, named Casper , aimed at enhancing traditional \nlocation-based servers and query processors with anony-\nmous services, which satisfies both k -anonymity and spatial \n 63 A. R. Beresford, and F. Stajano, “ Location privacy in pervasive \ncomputing, ” IEEE Pervasive Computing , vol. 2, no. 1, pp. 46 – 55, 2003. \n 64 A. R. Beresford, and F. Stajano, “ Mix zones: User privacy in \nlocation-aware services, ” Proceedings of the 2nd IEEE Annual \nConference on Pervasive Computing and Communications Workshops \n(PERCOMW04) , Orlando, pp. 127 – 131, 2004. \n 65 C. Bettini, X. S. Wang, and S. Jajodia, “ Protecting privacy against \nlocation-based personal identifi cation, ” Proceedings of the 2nd VLDB \nWorkshop on Secure Data Management (SDM’05) , Trondheim, \nNorway, pp. 185 – 199, 2005. \n 66 M. Gruteser, and D. Grunwald, “ Anonymous usage of location-\nbased services through spatial and temporal cloaking, ” Proceedings of \nthe 1st International Conference on Mobile Systems, Applications, and \nServices (MobiSys) , San Francisco, pp. 31 – 42, 2003. \n 67 B. Gedik, and L. Liu, “ Protecting location privacy with personal-\nized k-anonymity: Architecture and algorithms, ” IEEE Transactions on \nMobile Computing , vol. 7, no. 1, pp. 1 – 18, 2008. \n 68 M. F. Mokbel, C. Y. Chow, and W. G. Aref, “ The new Casper: \nQuery processing for location services without compromising privacy, ” \n Proceedings of the 32nd International Conference on Very Large Data \nBases (VLDB 2006) , Seoul, South Korea, pp. 763 – 774, 2006. \n" }, { "page_number": 515, "text": "PART | IV Privacy and Access Management\n482\n user preferences in terms of the smallest location area that \ncan be released. Ghinita et al. 69 proposed PRIVE , a decen-\ntralized architecture for preserving query anonymization, \nwhich is based on the definition of k- anonymous areas \nobtained exploiting the Hilbert space-filling curve. Finally, \nanonymity has been exploited to protect the path privacy \nof the users 70 , 71 , 72 Although interesting, these solutions are \nstill at an early stage of development. \n Alternatively, when the users ’ identity is required \nfor location-based service provision, obfuscation-based \ntechniques has been deployed. The first work providing \nan obfuscation-based technique for protecting location \nprivacy was by Duckham and Kulik. 62 In particular, their \nframework provides a mechanism for balancing individual \nneeds for high-quality information services and for loca-\ntion privacy. The idea is to degrade location information \nquality by adding n fake positions to the real user posi-\ntion. Ardagna et al. 73 defined different obfuscation-based \ntechniques aimed at preserving location privacy by artifi-\ncially perturbing location information. These techniques \ndegrade the location information accuracy by (1) enlarg-\ning the radius of the measured location, (2) reducing the \nradius, and (3) shifting the center. In addition, a metric \ncalled relevance is used to evaluate the level of location \nprivacy and balance it with the accuracy needed for the \nprovision of reliable location-based services. \n Finally, policy-based techniques are based on the \nnotion of privacy policies and are suitable for all the cat-\negories of location privacy. 
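Before turning to policy-based approaches, a minimal sketch may help fix the two perturbation ideas just described: k-anonymous spatial cloaking (enlarging the reported area until it covers at least k users) and obfuscation by radius enlargement, where the ratio between the original and the reported area is used as a rough, relevance-like accuracy measure. The code below is an illustrative toy under these simplifying assumptions, not an implementation of the systems cited above; coordinates are plain (x, y) pairs and all names are hypothetical.

```python
import math
import random

def cloak(user_pos, other_positions, k, step=100.0, max_half=10_000.0):
    """Grow a square box centered on the user until it contains at least k
    users (the user plus k-1 others), then report only the box."""
    half = step
    while half <= max_half:
        inside = sum(1 for (x, y) in other_positions
                     if abs(x - user_pos[0]) <= half and abs(y - user_pos[1]) <= half)
        if inside + 1 >= k:          # +1 counts the user herself
            return (user_pos[0] - half, user_pos[1] - half,
                    user_pos[0] + half, user_pos[1] + half)
        half += step                 # coarser granularity, more privacy
    return None                      # cloaking not achievable in this area

def obfuscate_radius(pos, measured_radius, accuracy):
    """Enlarge the measured radius so that only a fraction `accuracy`
    (0 < accuracy <= 1) of the reported area corresponds to the original one."""
    return pos, measured_radius / math.sqrt(accuracy)

random.seed(1)
others = [(random.uniform(0, 2000), random.uniform(0, 2000)) for _ in range(50)]
print(cloak((1000.0, 1000.0), others, k=10))
print(obfuscate_radius((1000.0, 1000.0), measured_radius=50.0, accuracy=0.25))
```

The trade-off made explicit by both helpers is the one discussed throughout this section: a larger box or radius means more location privacy but a less accurate, and therefore less useful, location-based service.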
In particular, privacy policies \ndefine restrictions that must be enforced when location of \nusers is used by or released to external parties. The IETF \nGeopriv working group 74 addresses privacy and security \nissues related to the disclosure of location information over \nthe Internet. The main goal is to define an environment \nsupporting both location information and policy data. \n 4. NETWORK ANONYMITY \n The wide diffusion of the Internet for many daily activities \nhas enormously increased interest in security and privacy \nissues. In particular, in such a distributed environment, \nprivacy should also imply anonymity: a person shopping \nonline may not want her visits to be tracked, the sending \nof email should keep the identities of the sender and the \nrecipient hidden from observers, and so on. That is, when \nsurfing the Web, users want to keep secret not only the \ninformation they exchange but also the fact that they are \nexchanging information and with whom. Such a problem \nhas to do with traffic analysis, and it requires ad hoc solu-\ntions. Traffic analysis is the process of intercepting and \nexamining messages to deduce information from patterns \nin communication. It can be performed even when the \nmessages are encrypted and cannot be decrypted. In gen-\neral, the greater the number of messages observed or even \nintercepted and stored, the more can be inferred from the \ntraffic. It cannot be solved just by encrypting the header \nof a packet or the payload: In the first case, the packet \ncould still be tracked as it moves through the network; the \nsecond case is ineffective as well since it would still be \npossible to identify who is talking to whom. \n In this section, we first describe the onion routing pro-\ntocol, 75 , 76 , 77 one of the better-known approaches that is not \napplication-oriented. Then we provide an overview of other \ntechniques for assuring anonymity and privacy over net-\nworks. The key approaches we discuss are mix networks, 78 , 79 \nthe Crowds system, 80 and the Freedom network. 81 \n 69 G. Ghinita, P. Kalnis, and S. Skiadopoulos, “ PRIVE: Anonymous \nlocation-based queries in distributed mobile systems, ” Proceedings of \nthe International World Wide Web Conference (WWW 2007) , Banff, \nCanada, pp. 371 – 380, 2007. \n 70 M. Gruteser, J. Bredin, and D. Grunwald, “ Path privacy in location-\naware computing, ” Proceedings of the Second International Conference \non Mobile Systems, Application and Services (MobiSys2004) , Boston, \n2004. \n 71 M. Gruteser, and X. Liu, “ Protecting privacy in continuous \nlocation-tracking applications, ” IEEE Security & Privacy Magazine , \nVol. 2, No. 2, pp. 28 – 34, 2004. \n 72 B. Ho, and M. Gruteser, “ Protecting location privacy through path \nconfusion, ” Proceedings of IEEE/CreateNet International Conference \non Security and Privacy for Emerging Areas in Communication \nNetworks (SecureComm) , Athens, Greece, pp. 194 – 205, 2005. \n 73 C.A. Ardagna, M. Cremonini, E. Damiani, S. De Capitani di \nVimercati, and P. Samarati, “ Location privacy protection through \nobfuscation-based techniques, ” Proceedings of the 21st Annual IFIP \nWG 11.3 Working Conference on Data and Applications Security , \nRedondo Beach, pp. 47 – 60, 2007. \n 74 Geographic \nLocation/Privacy \n(geopriv), \nSeptember \n2006, \n www.ietf.org/html.charters/geopriv-charter.html . \n 75 D. Goldschlag, M. Reed, and P. Syverson, “ Hiding routing infor-\nmation, ” In R. 
Anderson (ed.), Information Hiding: First International \nWorkshop , Volume 1174 of Lecture Notes in Computer Science, \nSpringer-Verlag, pp. 137 – 150, 1999. \n 76 D. Goldschlag, M. Reed, and P. Syverson, “ Onion routing for \nanonymous and private internet connections, ” Communication of the \nACM , Vol. 42, No. 2, pp. 39 – 41, 1999. \n 77 M. Reed, P. Syverson, and D. Goldschlag, “ Anonymous con-\nnections and onion routing, ” IEEE Journal on Selected Areas in \nCommunications , Vol. 16, No. 4, pp. 482 – 494, 1998. \n 78 D. Chaum, “ Untraceable electronic mail, return address, and \ndigital pseudonyms, ” Communications of the ACM , Vol. 24, No. 2, \npp. 84 – 88, 1981. \n 79 O. Berthold, H. Federrath, and S. Kopsell, “ Web MIXes: \nA system for anonymous and unobservable internet access, ” in \nH. Federrath (ed.), Anonymity 2000 , Volume 2009 of Lecture Notes in \nComputer Science, Springer-Verlag, pp. 115 – 129, 2000. \n 80 M. Reiter, and A. Rubin, “ Anonymous web transactions with \ncrowds, ” Communications of the ACM , Vol. 42, No. 2, pp. 32 – 48, 1999. \n 81 P. Boucher, A. Shostack, and I. Goldberg, Freedom Systems 2.0 \nArchitecture, \n2000, \n www.freedom.net/info/whitepapers/Freedom_\nSystem_2_Architecture.pdf . \n" }, { "page_number": 516, "text": "Chapter | 28 Net Privacy\n483\n Onion Routing \n Onion routing is intended to provide real-time bidirec-\ntional anonymous connections that are resistant to both \neavesdropping and traffic analysis in a way that’s trans-\nparent to applications. That is, if Alice and Bob commu-\nnicate over a public network by means of onion routing, \nthey are guaranteed that the content of the message \nremains confidential and no external observer or internal \nnode is able to infer that they are communicating. \n Onion routing works beneath the application layer, \nreplacing socket connections with anonymous connec-\ntions and without requiring any change to proxy-aware \nInternet services or applications. It was originally imple-\nmented on Sun Solaris 2.4 in 1997, including proxies \nfor Web browsing (HTTP), remote logins (rlogin), email \n(SMTP), and file transfer (FTP). The Tor 82 generation \n2 onion routing implementation runs on most common \noperating systems. It consists of a fixed infrastructure \nof onion routers, where each router has a longstanding \nsocket connection to a set of neighboring ones. Only \na few routers, called onion router proxies , know the \nwhole infrastructure topology. In onion routing, instead \nof making socket connections directly to a responding \nmachine, initiating applications make a socket connec-\ntion to an onion routing proxy that builds an anony-\nmous connection through several other onion routers \nto the destination. In this way, the onion routing net-\nwork allows the connection between the initiator and \nresponder to remain anonymous. Although the protocol \nis called onion routing, the routing that occurs during \nthe anonymous connection is at the application layer \nof the protocol stack, not at the IP layer. However, the \nunderlying IP network determines the route that data \nactually travels between individual onion routers. 
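The layered encryption at the heart of the protocol, described in detail next, can be sketched in a few lines. The toy below uses the third-party Python cryptography package (Fernet symmetric encryption) and omits everything a real onion carries besides the payload, such as next-hop identities, expiration times, key seeds, and fixed-size padding; it only illustrates how each router on the path removes exactly one layer.

```python
# Toy sketch of layered ("onion") encryption; NOT the real onion routing or
# Tor message format. Requires: pip install cryptography
from cryptography.fernet import Fernet

# One symmetric key per onion router on the chosen path W -> X -> Y -> Z
# (W acts as the proxy and holds all the keys).
path = ["X", "Y", "Z"]
keys = {router: Fernet.generate_key() for router in path}

def build_onion(message: bytes, path, keys) -> bytes:
    """Wrap the message in one encryption layer per router; the innermost
    layer is intended for the last router on the path."""
    onion = message
    for router in reversed(path):
        onion = Fernet(keys[router]).encrypt(onion)
    return onion

def peel(onion: bytes, router: str, keys) -> bytes:
    """Each router removes exactly one layer and learns nothing about the
    layers underneath."""
    return Fernet(keys[router]).decrypt(onion)

onion = build_onion(b"GET /index.html", path, keys)
for router in path:              # the onion travels X -> Y -> Z
    onion = peel(onion, router, keys)
print(onion)                     # b'GET /index.html' at the last router
```

Even in this stripped-down form, the key property is visible: an intermediate router can decrypt only the layer addressed to it, so no single router (other than the proxy that built the onion) sees both the initiator and the final destination.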
Given \nthe onion router infrastructure, the onion routing protocol \nworks in three phases: \n ● Anonymous connection setup \n ● Communication through the anonymous connection \n ● Anonymous connection destruction \n During the first phase, the initiator application, instead \nof connecting directly with the destination machine, opens \na socket connection with an onion routing proxy (which \nmay reside in the same machine, in a remote machine, or \nin a firewall machine). The proxy first establishes a path \nto the destination in the onion router infrastructure, then \nsends an onion to the first router of the path. The onion is \na layered data structure in which each layer of the onion \n(public-key encrypted) is intended for a particular onion \nrouter and contains (1) the identity of the next onion router \nin the path to be followed by the anonymous connection; \n(2) the expiration time of the onion; and (3) a key seed \nto be used to generate the keys to encode the data sent \nthrough the anonymous connection in both directions. The \nonion is sent through the path established by the proxy: \nan onion router that receives an onion peels off its layer, \nidentifies the next hop, records on a table the key seed, the \nexpiration time and the identifiers of incoming and out-\ngoing connections and the keys that are to be applied, \npads the onion and sends it to the next onion router. Since \nthe most internal layer contains the name of the desti-\nnation machine, the last router of the path will act as the \ndestination proxy and open a socket connection with the \ndestination machine. Note that only the intended onion \nrouter is able to peel off the layer intended to it. In this way, \neach intermediate onion router knows (and can commu-\nnicate with) only the previous and the next-hop router. \nMoreover, it is not capable of understanding the content \nof the following layers of the onion. The router, and any \nexternal observer, cannot know a priori the length of \nthe path since the onion size is kept constant by the fact \nthat each intermediate router is obliged to add padding \nto the onion corresponding to the fixed-size layer that it \nremoved. \n Figure 28.2 shows an onion for an anonymous con-\nnection following route WXYZ ; the router infrastructure \nis as depicted in Figure 28.3 , with W the onion router \nproxy. \n Once the anonymous connection is established, data \ncan be sent in both directions. The onion proxy receives \ndata from the initiator application, breaks it into fixed-\nsize packets, and adds a layer of encryption for each \nonion router in the path using the keys specified in the \nonion. As data packets travel through the anonymous \nconnection, each intermediate onion router removes one \nlayer of encryption. The last router in the path sends the \nplaintext to the destination through the socket connec-\ntion that was opened during the setup phase. This encryp-\ntion layering occurs in the reverse order when data is \nY, ExpirationTimeX, KeySeedX\nZ, ExpirationTimeY, KeySeedY\ndest, ExpirationTimeX, KeySeedX\n FIGURE 28.2 Onion message. \n 82 R. Dingledine N. Mathewson and P. Syverson “ Tor: The second-\ngeneration onion router, ” Proceedings of the 13th USENIX Security \nSymposium , San Diego, 2004. \n" }, { "page_number": 517, "text": "PART | IV Privacy and Access Management\n484\n sent backward from the destination machine to the ini-\ntiator application. 
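As a rough illustration of the layered onion just described, the following sketch builds one layer per router (next hop, expiration time, key seed) and then peels the layers hop by hop. It is a toy: a keyed XOR stream stands in for the per-router public-key encryption and for the data keys derived from the seeds, the constant-size padding step is omitted, and all function names are ours rather than those of any real implementation.

```python
import hashlib
import json
import os
from itertools import cycle

# Toy sketch of the layered onion described above. Each layer names the next
# hop and carries an expiration time and a key seed; only the intended router
# can remove its layer. A keyed XOR stream stands in for the real public-key
# encryption, and the padding that keeps the onion a constant size is omitted.

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Symmetric toy cipher (NOT secure): XOR with a key-derived stream.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ k for b, k in zip(data, cycle(stream)))

def build_onion(route, destination, router_keys, expiry="2009-12-31"):
    """Wrap one layer per router; the innermost layer names the destination."""
    onion = b""
    for i, router in reversed(list(enumerate(route))):
        next_hop = route[i + 1] if i + 1 < len(route) else destination
        layer = {"next": next_hop, "expires": expiry,
                 "seed": os.urandom(8).hex(), "inner": onion.hex()}
        onion = xor_crypt(json.dumps(layer).encode(), router_keys[router])
    return onion

def peel_layer(onion, my_key):
    """A router removes only its own layer and learns just the next hop."""
    layer = json.loads(xor_crypt(onion, my_key))
    return layer["next"], layer["seed"], bytes.fromhex(layer["inner"])

route = ["W", "X", "Y", "Z"]
keys = {r: r.encode() * 4 for r in route}       # stand-in for per-router keys
onion = build_onion(route, "destination", keys)
hop = "W"
while hop != "destination":
    hop_next, seed, onion = peel_layer(onion, keys[hop])
    print(f"{hop} forwards to {hop_next}, recording key seed {seed}")
    hop = hop_next
```

In the actual protocol each router also records the expiration time and the identifiers of the incoming and outgoing connections, and pads the onion back to its fixed size before forwarding it; the same layering idea is then applied, with the keys generated from the seeds, to the data packets sent in both directions.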
In this case, the initiator proxy, which \nknows both the keys and the path, will decrypt each layer \nand send the plaintext to the application using its socket \nconnection with the application. As for the onion, data \npassed along the anonymous connection appears different \nto each intermediate router and external observer, so it \ncannot be tracked. Moreover, compromised onion routers \ncannot cooperate to correlate the data stream they see. \n When the initiator application decides to close the \nsocket connection with the proxy, the proxy sends a destroy \nmessage along the anonymous connection and each router \nremoves the entry of the table relative to that connection. \n There are several advantages in the onion routing pro-\ntocol. First, the most trusted element of the onion routing \ninfrastructure is the initiator proxy, which knows the net-\nwork topology and decides the path used by the anony-\nmous connection. If the proxy is moved in the initiator \nmachine, the trusted part is under the full control of the \ninitiator. Second, the total cryptographic overhead is the \nsame as for link encryption but, whereas in link encryp-\ntion one corrupted router is enough to disclose all the \ndata, in onion routing routers cannot cooperate to cor-\nrelate the little they know and disclose the information. \nThird, since an onion has an expiration time, replay \nattacks are not possible. Finally, if anonymity is also \ndesired, then all identifying information must be addition-\nally removed from the data stream before being sent over \nthe anonymous connection. However, onion routing is not \ncompletely invulnerable to traffic analysis attacks: if a \nhuge number of messages between routers is recorded and \nusage patterns analyzed, it would be possible to make a \nclose guess about the routing, that is, also about the initia-\ntor and the responder. Moreover, the topology of the onion \nrouter infrastructure must be static and known a priori by \nat least one onion router proxy, which make the protocol \nlittle adaptive to node/router failures. \n Tor 82 generation 2 onion routing addresses some of the \nlimitations highlighted earlier, providing a reasonable trade-\noff among anonymity, usability, and efficiency. In particular, \nit provides perfect forward secrecy and it does not require a \nproxy for each supported application protocol. \n Anonymity Services \n Some other approaches offer some possibilities for \nproviding anonymity and privacy, but they are still vul-\nnerable to some types of attacks. For instance, many \nof these approaches are designed for World Wide Web \naccess only; being protocol-specific, these approaches \nmay require further development to be used with other \napplications or Internet services, depending on the com-\nmunication protocols used in those systems. \n David Chaum 78,79 introduced the idea of mix networks \nin 1981 to enable unobservable communication between \nusers of the Internet. Mixes are intermediate nodes that \nmay reorder, delay, and pad incoming messages to com-\nplicate traffic analysis. A mix node stores a certain number \nof incoming messages that it receives and sends them to \nthe next mix node in a random order. Thus, messages \nare modified and reordered in such a way that it is nearly \nimpossible to correlate an incoming message with an out-\ngoing message. Messages are sent through a series of mix \nnodes and encrypted with mix keys. 
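The batching behavior just described can be sketched as follows. This is a minimal illustration with invented names; the per-hop decryption with the mix's key and the padding of messages are left out, and the batch size is arbitrary.

```python
import random

# Minimal sketch of a Chaum-style mix node: it buffers a batch of incoming
# messages and flushes them in a random order, so that an observer cannot
# pair an output with the input that produced it. Names are illustrative.

class MixNode:
    def __init__(self, batch_size=3):
        self.batch_size = batch_size
        self.pool = []

    def receive(self, message, forward):
        """Buffer a message; flush the whole batch, shuffled, once full."""
        self.pool.append(message)
        if len(self.pool) >= self.batch_size:
            batch, self.pool = self.pool, []
            random.shuffle(batch)          # break the input/output ordering
            for m in batch:
                forward(m)                 # hand each message to the next mix

mix = MixNode(batch_size=3)
for msg in ["a", "b", "c"]:
    mix.receive(msg, forward=lambda m: print("forwarding", m))
```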
If participants exclu-\nsively use mixes for sending messages to each other, their \ncommunication relations will be unobservable, even if the \nattacker records all network connections. Also, without \nadditional information, the receiver does not have any clue \nabout the identity of the message’s sender. As in onion \nrouting, each mix node knows only the previous and \nResponder\nInitiator\nSocket connection\nLongstanding socket connection between onion routers \nPart of anonymous connection WXYZ\nOnion router\nOnion router proxy\nA\nB\nC\nZ\nY\nX\nW\n FIGURE 28.3 Onion routing network infrastructure. \n" }, { "page_number": 518, "text": "Chapter | 28 Net Privacy\n485\n next node in a received message’s route. Hence, unless \nthe route only goes through a single node, compromising \na mix node does not enable an attacker to violate either \nthe sender nor the recipient privacy. Mix networks are \nnot really efficient, since a mix needs to receive a large \ngroup of messages before forwarding them, thus delay-\ning network traffic. However, onion routing has many \nanalogies with this approach and an onion router can be \nseen as a real-time Chaum mix. \n Reiter and Rubin 80 proposed an alternative to mixes, \ncalled crowds , a system to make only browsing anonymous, \nhiding from Web servers and other parties information \nabout either the user or the information she retrieves. This is \nobtained by preventing a Web server from learning any infor-\nmation linked to the user, such as the IP address or domain \nname, the page that referred the user to its site, or the user’s \ncomputing platform. The approach is based on the idea of \n “ blending into a crowd, ” that is, hiding one’s actions within \nthe actions of many others. Before making any request, a \nuser joins a crowd of other users. Then, when the user sub-\nmits a request, it is forwarded to the final destination with \nprobability p and to some other member of the crowd with \nprobability 1-p . When the request is eventually submitted, \nthe end server cannot identify its true initiator. Even crowd \nmembers cannot identify the initiator of the request, since \nthe initiator is indistinguishable from a member of the crowd \nthat simply passed on a request from another. \n Freedom network 81 is an overlay network that runs \non top of the Internet, that is, on top of the application \nlayer. The network is composed of a set of nodes called \n anonymous Internet proxies , which run on top of the \nexisting infrastructure. As for onion routing and mix net-\nworks, the Freedom network is used to set up a commu-\nnication channel between the initiator and the responder, \nbut it uses different techniques to encrypt the messages \nsent along the channel. \n 5. CONCLUSION \n In this chapter we discussed net privacy from different \nviewpoints, from historical to technological. The very \nnature of the concept of privacy requires such an enlarged \nperspective because it often appears indefinite, being con-\nstrained into the tradeoff between the undeniable need of \nprotecting personal information and the evident utility, in \nmany contexts, of the availability of the same informa-\ntion. The digital society and the global interconnected \ninfrastructure eased accessing and spreading of personal \ninformation; therefore, developing technical means and \ndefining norms and fair usage procedures for privacy pro-\ntection are now more demanding than in the past. 
\n Economic aspects have been introduced since they \nare likely to strongly influence the way privacy is actu-\nally managed and protected. In this area, research has \nprovided useful insights about the incentive and disin-\ncentives toward better privacy. \n We presented some of the more advanced solutions \nthat research has developed to date, either for anonymiz-\ning stored data, hiding sensitive information in artifi-\ncially inaccurate clusters, or introducing third parties \nand middleware in charge of managing online transac-\ntions and services in a privacy-aware fashion. Location \nprivacy is a topic that has gained importance in recent \nyears with the advent of mobile devices and that is worth \na specific consideration. \n Furthermore, the important issue of anonymity over \nthe Net has been investigated. To let individuals surf the \nWeb, access online services, and interact with remote \nparties in an anonymous way has been the goal of many \nefforts for years. Some important technologies and tools \nare available and are gaining popularity. \n To conclude, whereas privacy over the Net and in \nthe digital society does not look to be in good shape, the \naugmented sensibility of individuals to its erosion, the \nmany scientific and technological efforts to introduce \nnovel solutions, and a better knowledge of the problem \nwith the help of fresh data contribute to stimulating the \nneed for better protection and fairer use of personal \ninformation. For this reason, it is likely that Net privacy \nwill remain an important topic in the years to come and \nmore innovations toward better management of privacy \nissues will emerge. \n" }, { "page_number": 519, "text": "This page intentionally left blank\n" }, { "page_number": 520, "text": "487\nComputer and Information Security Handbook\n© 2009, The Crown in right of Canada.\n Personal Privacy Policies 1 \n Dr. George Yee \n National Research Council of Canada, Ottawa \n Larry Korba \n National Research Council of Canada, Ottawa \n Chapter 29 \n The rapid growth of the Internet has been accompanied by \na similar growth in the availability of Internet e-services \n(such as online booksellers and stockbrokers). This prolif-\neration of e-services has in turn fueled the need to protect \nthe personal privacy of e-service users or consumers. This \nchapter proposes the use of personal privacy policies to \nprotect privacy. It is evident that the content must match the \nuser’s privacy preferences as well as privacy legislation. It is \nalso evident that the construction of a personal privacy pol-\nicy must be as easy as possible for the consumer. Further, \nthe content and construction must not result in negative \nunexpected outcomes (an unexpected outcome that harms \nthe user in some manner). The chapter begins with the deri-\nvation of policy content based on privacy legislation, fol-\nlowed by a description of how a personal privacy policy \nmay be constructed semiautomatically. It then shows how \nto additionally specify policies so that negative unexpected \noutcomes can be avoided. Finally, it describes our Privacy \nManagement Model that explains how to use personal pri-\nvacy policies to protect privacy, including what is meant by \na “ match ” of consumer and service provider policies and \nhow nonmatches can be resolved through negotiation. \n 1. INTRODUCTION \n The rapid growth of the Internet has been accompanied \nby a similar rapid growth in Internet based e-services \ntargeting consumers. 
E-services are available for bank-\ning, shopping, stock investing, and healthcare, to name \na few areas. However, each of these services requires a \nconsumer’s personal information in one form or another. \nThis leads to concerns over privacy. \n For e-services to be successful, privacy must \nbe protected. In a recent U.S. study by MasterCard \nInternational, 60% of respondents were concerned with \nthe privacy of transmitted data. 2 An effective and flexible \nway of protecting privacy is to manage it using privacy \npolicies. In this approach, each provider of an e-service \nhas a privacy policy specifying the private information \nrequired for that e-service. Similarly, each consumer of \nan e-service has a privacy policy specifying the private \ninformation she is willing to share for the e-service. \nPrior to the activation of an e-service, the consumer \nand provider of the e-service exchange privacy policies. \nThe service is only activated if the policies are compat-\nible (we define what “ compatible ” means in a moment). \nWhere the personal privacy policy of an e-service con-\nsumer conflicts with the privacy policy of an e-service \nprovider, we have advocated a negotiations approach to \nresolve the conflict. 3 , 4 \n In our approach, the provider requires private infor-\nmation from the consumer for use in its e-service and so \nreduces the consumer’s privacy by requesting such infor-\nmation. This reduction in consumer privacy is represented \nby the requirements for consumer private information in \nthe provider’s privacy policy. The consumer, on the other \nhand, would rather keep her private information to herself, \nso she tries to resist the provider’s attempt to reduce her \n 4 G. Yee, and L. Korba, “ The negotiation of privacy policies in dis-\ntance education, ” Proceedings, 14th IRMA International Conference , \nPhiladelphia, May 2003. \n 3 G. Yee, and L. Korba, “ Bilateral e-services negotiation under \nuncertainty, ” Proceedings, The 2003 International Symposium on \nApplications and the Internet (SAINT 2003) , Orlando, Jan 2003. \n 2 T. Greer, and M. Murtaza, “ E-commerce security and privacy: \nManagerial vs. technical perspectives, ” Proceedings, 15th IRMA \nInternational Conference (IRMA 2004), New Orleans, May 2004. \n 1 NRC Paper number: NRC 50334 \n" }, { "page_number": 521, "text": "PART | IV Privacy and Access Management\n488\nprivacy. This means that the consumer would only be will-\ning to have her privacy reduced by a certain amount, as \nrepresented by the privacy provisions in her privacy policy. \nThere is a match between a provider’s privacy policy and \nthe corresponding consumer’s policy where the amount of \nprivacy reduction allowed by the consumer’s policy is at \nleast as great as the amount of privacy reduction required \nby the provider’s policy (more details on policy match-\ning follow). Otherwise, there is a mismatch . Where time \nis involved, a private item held for less time is considered \nless private. A privacy policy is considered upgraded if the \nnew version represents more privacy than the prior version. \nSimilarly, a privacy policy is considered downgraded if the \nnew version represents less privacy than the prior version. \n So far so good, but what should go into a personal pri-\nvacy policy? How are these policies constructed? Moreover, \nwhat can be done to construct policies that do not lead to \nnegative unexpected outcomes (an outcome that is harm-\nful to the user in some manner)? 
Consumers need help \nin formulating personal privacy policies. The creation of \nsuch policies needs to be as easy as possible or consumers \nwould simply avoid using them. Existing privacy specifi-\ncation languages such as P3P and APPE 5 , 6 that are XML-\nbased are far too complicated for the average Internet user \nto understand. Understanding or changing a privacy policy \nexpressed in these languages effectively requires knowing \nhow to program. What is needed is an easy, semiautomated \nway of deriving a personal privacy policy. \n In this chapter, we present two semiautomated \napproaches for obtaining personal privacy policies for \nconsumers. We also show how these policies should \nbe specified to avoid negative unexpected outcomes. \nFinally, we describe our Privacy Management Model that \nexplains how personal privacy policies are used to protect \nconsumer privacy, including how policies may be negoti-\nated between e-service consumer and e-service provider. \n The “ Content of Personal Privacy Policies ” section \nexamines the content of personal privacy policies by \nidentifying some attributes of private information collec-\ntion. The “ Semiautomated Derivation of Personal Privacy \nPolicies ” section shows how personal privacy policies \ncan be semiautomatically generated. The “ Specifying \nWell-Formed Personal Privacy Policies ” section explains \nhow to ensure that personal privacy policies do not \nlead to negative unexpected outcomes. “ The Privacy \nManagement Model ” section presents our Privacy \nManagement Model, which explains how personal pri-\nvacy policies can be used to protect consumer privacy, \nincluding how they may be negotiated between e-serv-\nice consumer and e-service provider. The “ Discussion \nand Related Work ” section discusses our approaches and \npresents related work. The chapter ends with conclusions \nand a description of possible future work in these areas. \n 2. CONTENT OF PERSONAL PRIVACY \nPOLICIES \n In Canada, privacy legislation is enacted in the Personal \nInformation Protection and Electronic Documents Act \n(PIPEDA) 7 and is based on the Canadian Standards \nAssociation’s Model Code for the Protection of Personal \nInformation, 8 recognized as a national standard in 1996. \nThis code consists of ten Privacy Principles that for \nconvenience, we label CSAPP. \n Privacy Legislation and Directives \n Data privacy in the European Union is governed by a \nvery comprehensive set of regulations called the Data \nProtection Directive. 9 In the United States, privacy pro-\ntection is achieved through a patchwork of legislation at \nthe federal and state levels. Privacy legislation is largely \nsector-based. 10 \n Requirements from Privacy Principles \n In this section, we identify some attributes of private \ninformation collection or personally identifiable infor-\nmation (PII) collection using CSAPP as a guide. We then \napply the attributes to the specification of privacy policy \ncontents. Note that we use the terms private informa-\ntion and PII interchangeably. We use CSAPP because it \nis representative of privacy legislation in other countries \n(e.g., European Union, Australia) and has withstood the \n 10 Banisar, D., “ Privacy and data protection around the world, ” \nProceedings, 21 st International Conference on Privacy and Personal \nData Protection , September 13, 1999. 
\n 9 European Union, “ Directive 95/46/EC of the European Parliament \nand of the Council of 24 October 1995 on the protection of individuals \nwith regard to the processing of personal data and on the free move-\nment of such data, ” unoffi cial text retrieved Sept. 5, 2003, from http://\naspe.hhs.gov/datacncl/eudirect.htm . \n 8 Offi ce of the Privacy Commissioner of Canada, “ The personal infor-\nmation protection and electronic documents act, ” retrieved May 1, \n2008, from www.privcom.gc.ca/legislation/02_06_01_e.asp . \n 7 Canadian Standards Association, “ Model code for the protection of \npersonal information, ” retrieved Sept. 5, 2007, from www.csa.ca/stan-\ndards/privacy/code/Default.asp?articleID \u0003 5286 & language \u0003 English . \n 6 W3C APPEL, “ A P3P preference exchange language 1.0 \n(APPEL1.0), ” W3C Working Draft 15, April 2002, retrieved Sept. 2, \n2002, from: http://www.w3.org/TR/P3P-preferences/ . \n 5 W3C Platform, “ The platform for privacy preferences, ” retrieved \nSept. 2, 2002, from www.w3.org/P3P/ . \n" }, { "page_number": 522, "text": "Chapter | 29 Personal Privacy Policies\n489\ntest of time, originating from 1996. In addition, CSAPP \nis representative of the Fair Information Practices, a set \nof standards balancing the information needs of the busi-\nness with the privacy needs of the individual. 11 Table \n29.1 shows CSAPP. \n In Table 29.1 , we interpret organization as “ provider ” \nand individual as “ consumer. ” In the following, we use \nCSAPP. n to denote Principle n of CSAPP. Principle \nCSAPP.2 implies that there could be different provid-\ners requesting the information, thus implying a collec-\ntor attribute. Principle CSAPP.4 implies that there is a \n what attribute, that is, what private information is being \ncollected? Principles CSAPP.2, CSAPP.4, and CSAPP.5 \nstate that there are purposes for which the private \ninformation is being collected. Principles CSAPP.3, \nCSAPP.5, and CSAPP.9 imply that the private informa-\ntion can be disclosed to other parties, giving a disclose-to \nattribute. Principle CSAPP.5 implies a retention time \nattribute for the retention of private information. Thus, \nfrom the CSAPP we derive five attributes of private \ninformation collection: collector , what , purposes , reten-\ntion time , and disclose-to . \n The Privacy Principles also prescribe certain opera-\ntional requirements that must be satisfied between \nprovider and consumer, such as identifying purpose \nand consent. Our service model and the exchange of \nprivacy policies automatically satisfy some of these \nrequirements, namely Principles CSAPP.2, CSAPP.3, \nand CSAPP.8. The satisfaction of the remaining opera-\ntional requirements depends on compliance mechanisms \n(Principles CSAPP.1, CSAPP.4, CSAPP.5, CSAPP.6, \nCSAPP.9, and CSAPP.10) and security mechanisms \n(Principle CSAPP.7). \n TABLE 29.1 CSAPP: The Ten Privacy Principles from the Canadian Standards Association \n Principle \n Description \n 1. Accountability \n An organization is responsible for personal information under its control and \nshall designate an individual or individuals accountable for the organization’s \ncompliance with the privacy principles. \n 2. Identifying Purposes \n The purposes for which personal information is collected shall be identified by \nthe organization at or before the time the information is collected. \n 3. 
Consent \n The knowledge and consent of the individual are required for the collection, \nuse, or disclosure of personal information, except when inappropriate. \n 4. Limiting Collection \n The collection of personal information shall be limited to that which is \nnecessary for the purposes identified by the organization. Information shall be \ncollected by fair and lawful means. \n 5. Limiting Use, Disclosure, and Retention \n Personal information shall not be used or disclosed for purposes other than \nthose for which it was collected, except with the consent of the individual or as \nrequired by the law. In addition, personal information shall be retained only as \nlong as necessary for fulfillment of those purposes. \n 6. Accuracy \n Personal information shall be as accurate, complete, and up to date as is \nnecessary for the purposes for which it is to be used. \n 7. Safeguards \n Security safeguards appropriate to the sensitivity of the information shall be \nused to protect personal information. \n 8. Openness \n An organization shall make readily available to individuals specific information \nabout its policies and practices relating to the management of personal \ninformation. \n 9. Individual Access \n Upon request, an individual shall be informed of the existence, use and \ndisclosure of his or her personal information and shall be given access to \nthat information. An individual shall be able to challenge the accuracy and \ncompleteness of the information and have it amended as appropriate. \n 10. Challenging Compliance \n An individual shall be able to address a challenge concerning compliance with \nthe above principles to the designated individual or individuals accountable for \nthe organization’s compliance. \n 11 K. S. Schwaig, G. C. Kane, and V. C. Storey, “ Privacy, fair infor-\nmation practices and the fortune 500: the virtual reality of compli-\nance, ” The DATA BASE for Advances in Information Systems , 36(1), \npp. 49 – 63, 2005. \n" }, { "page_number": 523, "text": "PART | IV Privacy and Access Management\n490\n Privacy Policy Specification \n Based on these explorations, the contents of a pri-\nvacy policy should, for each item of PII, identify (1) \nthe collector — the person who wants to collect the \ninformation, (2) what — the nature of the information, \n(3) purposes — the purposes for which the informa-\ntion is being collected, (4) retention time — the amount \nof time for the provider to keep the information, and \n(5) disclose-to — the parties to whom the information \nwill be disclosed. Figure 29.1 gives three examples of \nconsumer personal privacy policies for use with an e-\nlearning provider, an online bookseller, and an online \nmedical help clinic. The policy use field indicates the \ntype of online service for which the policy will be used. \nSince a privacy policy may change over time, we have \na valid field to hold the time period during which the \npolicy is valid. Figure 29.2 gives examples of provider \nprivacy policies corresponding to the personal privacy \npolicies of Figure 29.1 . 
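As a simple illustration of this content, the following sketch (our own; the dataclass names are invented and no particular policy language such as APPEL is implied) encodes the header fields and the five rule attributes, using Alice's e-learning policy of Figure 29.1 as the example.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative representation of the policy content described above: a header
# (policy use, owner, valid) plus one privacy rule per item of PII, each rule
# carrying collector, what, purposes, retention time, and disclose-to.

@dataclass
class PrivacyRule:
    collector: str        # who may collect the item ("any" = anyone)
    what: str             # the item of private information
    purposes: str         # why it is collected
    retention_time: str   # how long the provider may keep it
    disclose_to: str      # who it may be passed on to ("none" = nobody)

@dataclass
class PrivacyPolicy:
    policy_use: str       # type of e-service the policy is written for
    owner: str
    valid: str            # period during which the policy applies
    rules: List[PrivacyRule] = field(default_factory=list)

# Alice's e-learning policy from Figure 29.1, expressed in this form.
alice_elearning = PrivacyPolicy(
    policy_use="E-learning", owner="Alice Consumer", valid="unlimited",
    rules=[
        PrivacyRule("any", "name, address, tel", "identification",
                    "unlimited", "none"),
        PrivacyRule("any", "course marks", "records", "2 years", "none"),
    ],
)
```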
\nPolicy Use: E-learning\nOwner: Alice Consumer\nValid: unlimited\nCollector: any\nWhat: name, address, tel\nPurposes: identification\nRetention Time: unlimited\nDisclose-To: none\nCollector: any\nWhat: course marks\nPurposes: records\nRetention Time: 2 years\nDisclose-To: none\nPolicy Use: Bookseller\nOwner: Alice Consumer\nValid: June 2009\nPolicy Use: Medical Help\nOwner: Alice Consumer\nValid: July 2009\nCollector: any\nWhat: name, address, tel\nPurposes: identification\nRetention Time: unlimited\nDisclose-To: none\nCollector: any\nWhat: name, address, tel\nPurposes: contact\nRetention Time: unlimited\nDisclose-To: pharmacy\nCollector: Dr. A. Smith\nWhat: medical condition\nPurposes: treatment\nRetention Time: unlimited\nDisclose-To: pharmacy\n FIGURE 29.1 Example of consumer personal privacy policies. \nPolicy Use: E-learning\nOwner: E-learning Unlimited\nValid: unlimited\nCollector: E-learning Unlimited\nWhat: name, address, tel\nPurposes: identification\nRetention Time: unlimited\nDisclose-To: none\nCollector: E-learning Unlimited\nWhat: course marks\nPurposes: records\nRetention Time: 1 years\nDisclose-To: none\nPolicy Use: Bookseller\nOwner: All Books Online\nValid: unlimited\nPolicy Use: Medical Help\nOwner: Medics Online\nValid: unlimited\nCollector: All Books Online\nWhat: name, address, tel\nPurposes: identification\nRetention Time: unlimited\nDisclose-To: none\nCollector: All Books Online\nWhat: credit card\nPurposes: payment\nRetention Time: until paid\nDisclose-To: none\nCollector: Medics Online\nWhat: name, address, tel\nPurposes: contact\nRetention Time: unlimited\nDisclose-To: pharmacy\nCollector: Medics Online\nWhat: medical condition\nPurposes: treatment\nRetention Time: 1 year\nDisclose-To: pharmacy\n FIGURE 29.2 Example of corresponding provider privacy policies. \n A privacy policy thus consists of “ header ” informa-\ntion ( policy use , owner , valid ) together with one or more \n5-tuples, or privacy rules: \n \n \n\u0004\n\u0005\ncollector what purposes retention time\ndisclose-to\n,\n,\n, \n \n, \n \n \n where each 5-tuple or rule represents an item of private \ninformation and the conditions under which the infor-\nmation may be shared. For example, in Figure 29.1 , the \npersonal policy for e-learning has a header (top portion) \nplus two rules (bottom portion); the personal policy for a \nbookseller has only one rule. \n 3. SEMIAUTOMATED DERIVATION OF \nPERSONAL PRIVACY POLICIES \n A semiautomated derivation of a personal privacy policy \nis the use of mechanisms (described in a moment) that \nmay be semiautomated to obtain a set of privacy rules \n" }, { "page_number": 524, "text": "Chapter | 29 Personal Privacy Policies\n491\nfor a particular policy use. We present two approaches \nfor such derivations. The first approach relies on third-\nparty surveys (see sidebar, “ Derivation Through Third \nParty Surveys ” ) of user perceptions of data privacy \n( Figure 29.3 ). The second approach is based on retrieval \nfrom a community of peers.\nSurveys,\nStudies\nPSL\nConsolidation\nPrivacy Rules,\nSlider Levels\nRetrieve\nRules\nInternet Users\nPrivacy\nSlider\nConsumer\n1\n5\nProvider\nPolicies\nProvider\nPolicies\nPersonal\nPolicies\nWPR,\nPSL Scale\n FIGURE 29.3 Derivation of personal privacy policies from surveys. 
\n Derivation Through Third-Party Surveys \n (a) A policy provider makes use of third-party surveys per-\nformed on a regular basis, as well as those published in \nresearch literature, to obtain user privacy sensitivity lev-\nels (PSLs) or perceptions of the level of privacy for vari-\nous combinations of \u0004 what, purposes, retention time \u0005 \nin provider policy rules. We call \u0004 what, purposes, \nretention time \u0005 WPR , for short. This gives a range of \nPSLs for different WPRs in different provider policies. \nFormally: \n Let p i represent a WPR from a provider policy, I repre-\nsent the set of p i over all provider policies, f k,i represent the \nprivacy sensitivity function of person k to sharing p i with \na service provider. We restrict f k,i to an integer value in a \nstandard interval [ M,N ], that is, M \b f k,i \b N for integers M, \nN (e.g., M \u0003 1, N \u0003 5). Then the PSLs s k,i are obtained as \n \n s\nf\np\nk K\ni I\nk,i\nk,i\ni\n\u0003\n(\n) \n, \n∀∈\n∈ \n \n where K is the set of consumers interested in the pro-\nviders ’ services. This equation models a person making a \nchoice of what PSL to assign a particular p i . \n (b) Corresponding to a service provider’s privacy policy (which \nspecifies the privacy rules required), a policy provider \n(or a software application used by the policy provider) \nconsolidates the PSLs from (a) such that the WPRs are \nselectable by a single value privacy level from a “ privacy \nslider ” for each service provider policy. There are differ-\nent ways to do this consolidation. One way is to assign \na WPR the median of its PSL range as its privacy level \n(illustrated in a moment). The outcome of this process is \na set of consumer privacy rules (expressed using a policy \nlanguage such as APPEL) ranked by privacy level for dif-\nferent providers, and with the collector and disclose-to \nfields as “ any ” and “ none, ” respectively. (The consumer \ncan change these fields later if desired.) Formally, using \nthe notation introduced in (a): \n Let P represent a provider’s privacy policy. Then for each \nWPR p i \u0002 P, we have from (a) a set of PSLs: S i ( P ) \u0003 { s k,i | \n p i \u0002 P , ∀ k \u0002 K } . Our goal is to map S i (P) to a single privacy \nlevel from a privacy slider. Let g be such a mapping. Then \nthis step performs the mapping g(S i (P)) \u0003 n , where n is the \nprivacy slider value. For example, the mapping g can be \n “ take the median of ” (illustrated below) or “ take the aver-\nage of. ” We have assumed that the range of slider values \nis the same as [ M,N ] in (a). If this is not the case, g would \nneed to incorporate normalization to the range of slider \nvalues. \n (c) Consumers obtain online from the policy provider the \nprivacy rules that make up whole policies. They do \n" }, { "page_number": 525, "text": "PART | IV Privacy and Access Management\n492\nthis by first specifying the provider for which a con-\nsumer privacy policy is required. The consumer is then \nprompted to enter the privacy level using the privacy \nslider for each WPR from the service provider’s policy. \nThe selected rules would then automatically populate \nthe consumer’s policy. The consumer then completes \nhis privacy policy by adding the header information \n(i.e., policy use , owner , valid ) and, if desired, add spe-\ncific names to collector and disclose-to for all rules. This \ncan be done through a human-computer interface that \nshelters the user from the complexity of the policy lan-\nguage. 
In this way, large populations of consumers may \nquickly obtain privacy policies for many service provid-\ners that reflect the privacy sensitivities of the communi-\nties surveyed. \nPrivacy Sensitivity Levels\nNew Provider Policy\nPolicy\nInterpreter\nPolicy Search\nPersonal Policy\nAmendments\nConsumer\nUser\nInteraction\nPersonal\nPolicies\n FIGURE 29.4 Adapting an existing personal privacy policy to a new provider. \n Consumers may interactively adapt their existing \nprivacy policies for new service provider policies based \non the PSLs of the WPRs and the new provider policies, \nas illustrated in Figure 29.4 . In Figure 29.4 , the Policy \nInterpreter interactively allows the user to establish \n(using a privacy slider) the privacy levels of required \nrules based on the new provider policy and the PSLs \nfrom a policy provider. Policy Search then retrieves the \nuser policy that most closely matches the user’s privacy-\nestablished rules. This policy may then be further \namended interactively via the Policy Interpreter to obtain \nthe required personal privacy policy. This assumes the \navailability of an easy-to-understand interface for the user \ninteraction as well as software to automatically take care \nof any needed conversions of rules back into the policy \nlanguage (APPEL). \n An Example \n Suppose a consumer wants to generate a personal pri-\nvacy policy for a company called E-learning Unlimited. \nFor simplicity, suppose the privacy policy of E-learning \nUnlimited has only one WPR, namely \u0004 course marks, \nrecords, 12 months \u0005 . The steps are implemented as \nfollows: \n 1. The third-party survey generates the following \nresults for the WPR (the lowest privacy sensitivity \nlevel is M \u0003 1 , the highest is N \u0003 5). \n \n WPR ( p i ) \n PSL ( s k,i ) \n \n \u0004 course marks, records, 6 months \u0005 3 \n \n \u0004 course marks, records, 6 months \u0005 4 \n \n \u0004 course marks, records, 6 months \u0005 4 \n \n \u0004 course marks, records, 6 months \u0005 5 \n \n \u0004 course marks, records, 12 months \u0005 1 \n \n \u0004 course marks, records, 12 months \u0005 1 \n \n \u0004 course marks, records, 12 months \u0005 2 \n \n \u0004 course marks, records, 12 months \u0005 3 \n \n Note that the higher the number of months the marks \nare retained, the lower the PSL (the lower the privacy \nperceived by the consumer). The different PSLs \nobtained constitute one part of the privacy sensitivity \nscale. \n 2. In this step, the policy provider consolidates the PSL \nin Step 1 using the median value from the corre-\nsponding PSL range. Thus for the four course-mark \n" }, { "page_number": 526, "text": "Chapter | 29 Personal Privacy Policies\n493\nv\nC\nB\nA\nF\nE\nD\nC\nB\nA\nF\nE\nD\n(a) New consumer “A” broadcasts request for privacy rules to the community\n(b) Consumers B and D answer A’s request\n FIGURE 29.5 Retrieval of private policy rules from a community of peers. \nretention times of 6 months, the lowest value is 3, the \nhighest value is 5, and the median is 4. Therefore the \nrule \u0004 any, course marks, records, 6 months, none \u0005 \nis ranked with privacy level 4. Similarly, the rule \n \u0004 any, course marks, records, 12 months, none \u0005 is \nranked with privacy level 2. \n 3. To obtain her privacy rules, the consumer speci-\nfies the service provider as E-learning Unlimited \nand a privacy slider value of 4 (for example) when \nprompted. 
She then obtains the rule: \n \n \u0004\n\u0005\nany, course marks, records, 6 months, none\n \n \n \n and proceeds to complete the policy by adding the \nheader values and if desired, specific names for \ncollector and disclose-to . \n Retrieval from a Community of Peers \n This approach assumes an existing community of peers \nalready possessing specific use privacy policies with rules \naccording to desired levels of privacy. A new consumer \njoining the community searches for personal privacy \nrules. The existing personal privacy policies may have \nbeen derived using the third-party surveys, as previously. \nEach privacy policy rule is stored along with its privacy \nlevel so that it may be selected according to this level and \n purpose . Where a rule has been adapted or modified by \nthe owner, it is the owner’s responsibility to ensure that \nthe slider privacy value of the modified rule is consistent \nwith the privacy sensitivity scale from surveys. \n ● All online users are peers and everyone has a privacy \nslider. The new consumer broadcasts a request for \nprivacy rules to the community (see Figure 29.5a ), \nspecifying purpose and slider value. This is essen-\ntially a peer-to-peer search over all peers. \n ● The community responds by forwarding matching \n(in terms of purpose and slider value) rules to the \nconsumer (see Figure 29.5b ). This matching may \nalso be fuzzy. \n ● The consumer compares the rules and selects them \naccording to what , possibly popularity (those that \nare from the greater number of peers), and best \nfit in terms of privacy. After obtaining the rules, \nthe consumer completes the privacy policies by \n" }, { "page_number": 527, "text": "PART | IV Privacy and Access Management\n494\nprivacy, that is, require less privacy reduction. This could \nmean that the provider is requiring less information that \nis private. In this case, the provider or consumer may not \nrealize the extra costs that may result from not having \naccess to the private information item or items that were \neliminated through upgrading. For example, leaving out \nthe social security number may lead to more costly means \nof consumer identification for the provider. As another \nexample, consider the provider and consumer policies \nof Figure 29.6 . In this figure, suppose All Books Online \nupgraded its privacy policy by eliminating the credit-card \nrequirement. This would lead to a match with Alice’s pri-\nvacy policy, but it could cost Alice longer waiting time to \nget her order, since she may be forced into an alternate \nand slower means of making payment (e.g., mailing a \ncheck) if payment is required prior to shipping. \n Policy Downgrades \n Since the consumer resists the provider’s privacy reduc-\ntion, it is possible that the policy nonmatch was due to \nthe consumer’s policy allowing too little privacy reduc-\ntion. Suppose then that the match occurred after the con-\nsumer downgraded her privacy policy to represent less \nprivacy, that is, to allow for more privacy reduction. This \ncould mean that the consumer is willing to provide more \ninformation that is private. Then the provider or con-\nsumer may not realize the extra costs that result from \nhaving to safeguard the additional private information \nitem or items that were added through downgrading. 
For \nexample, the additional information might be a critical \nhealth condition that the consumer does not want any-\none else to know, especially her employer, which could \nresult in loss of her employment. The provider had better \nadd sufficient (costly) safeguards to make sure that the \nPolicy Use: Book Seller\nOwner: All Books Online\nValid: unlimited\nPolicy Use: Book Seller\nOwner: Alice Consumer\nValid: December 2009\nCollector: All Books Online\nWhat: name, address, tel\nPurposes: identification\nRetention Time: unlimited\nDisclose-To: none\nCollector: All Books Online\nWhat: credit card\nPurposes: payment\nRetention Time: until\n \npayment complete\nDisclose-To: none\nCollector: any\nWhat: name, address, tel\nPurposes: identification\nRetention Time: unlimited\nDisclose-To: none\n FIGURE 29.6 Example online bookseller provider (left) and con-\nsumer privacy policies (right). \ncompleting the headers and possibly changing \nthe collector and disclose-to as in the preceding \nderivation from surveys approach. \n ● The consumer adapts a privacy policy to the service \nprovider’s policy, as in the derivation by surveys \napproach ( Figure 29.4 ), to try to fulfill provider \nrequirements. \n 4. SPECIFYING WELL-FORMED \nPERSONAL PRIVACY POLICIES \n This section explains how unexpected outcomes may \narise from badly specified personal privacy policies. It \nthen gives guidelines for specifying “ well-formed ” poli-\ncies that avoid unexpected outcomes. \n Unexpected Outcomes \n We are interested in unexpected outcomes that result \nfrom the matching of consumer and provider poli-\ncies. Unexpected outcomes result from (1) the way the \nmatching policy was obtained, and (2) the content of the \nmatching policy itself. We examine each of these sources \nin turn. \n Outcomes From the Way the Matching \nPolicy Was Obtained \n The matching policy can be obtained through policy \nupgrades or downgrades. These policy changes can \noccur while the policy is being formulated for the first \ntime or after a mismatch occurs, in an attempt to obtain \na match (e.g., during policy negotiation 12 , 13 ). Recall \nfrom the “ Introduction ” section that an upgraded policy \nreflects a higher level of privacy. On the other hand, a \ndowngraded policy reflects a lower level of privacy. \n Policy Upgrades \n Given that a provider always tries to reduce a consum-\ner’s privacy, it is possible that the policy nonmatch was \ndue to the provider’s policy requiring too much privacy \nreduction. Suppose then that the match occurred after the \nprovider upgraded its privacy policy to represent more \n 13 G. Yee, and L. Korba, “ The negotiation of privacy policies in dis-\ntance education, ” Proceedings, 14th IRMA International Conference , \nPhiladelphia, May 2003. \n 12 G. Yee, and L. Korba, “ Bilateral e-services negotiation under \nuncertainty, ” Proceedings, The 2003 International Symposium on \nApplications and the Internet (SAINT 2003) , Orlando, Jan 2003. \n" }, { "page_number": 528, "text": "Chapter | 29 Personal Privacy Policies\n495\nsituation (one way is simply to have an overriding condi-\ntion that in an emergency, Alice must give her condition \nto any doctor or nurse on staff), but our point still holds: \nAn improperly specified collector attribute can lead to \nunexpected serious consequences. \n Retention Time \n Care must also be taken to specify the appropriate reten-\ntion time for a particular information item. 
The respon-\nsibility for setting an appropriate retention time lies with \nboth the provider and the consumer. For example, con-\nsider once again the policies of Figure 29.7 . Suppose \nAlice changes her privacy rule for medical condition \nfrom Dr. A. Smith to any and from unlimited to 2 years . \nThen the policies match and the service can proceed. \nAt the end of two years, the provider complies with the \nconsumer’s privacy policy and discards the informa-\ntion it has on Alice’s medical condition. But suppose \nthat after the two years, medical research discovers that \nAlice’s condition is terminal unless treated with a certain \nnew drug. Then Nursing Online cannot contact Alice to \nwarn her, since it no longer knows that Alice has that \ncondition. Poor Alice! Clearly, both the provider and \nthe consumer are responsible for setting the appropriate \nretention time. One could conclude that in the case of a \nmedical condition, the retention time should be unlim-\nited . However, unlimited can also have its risks, such \nas retaining information beyond the point at which it \nno longer applies. For example, Alice could one day be \ncured of her condition. Then retention of Alice’s condi-\ntion could unjustly penalize Alice if it somehow leaked \nout when it is no longer true. \n Disclose-To Field \n If the disclose-to attribute of the consumer’s privacy \npolicy is not specified or improperly specified, providers \ncan share the consumer’s private information with other \nproviders or consumers with resulting loss of privacy. \nConsider the following examples. \n Suppose Alice has a critical health condition and she \ndoes not want her employer to know for fear of losing her \njob (the employer might dismiss her to save on sick leave \nor other benefits — this really happened! 14 ). Suppose that \nshe is able to subscribe to Nursing Online as in the pre-\nceding examples. Then through the execution of the serv-\nice, Nursing Online shares her condition with a pharmacy \n condition is kept confidential. The provider may not have \nfully realized the sensitivity of the extra information. \n Outcomes from the Content of the Matching \nPolicy \n We give here some example unexpected outcomes due to \nthe content of the matching policy. We examine the con-\ntent of the header and privacy rules in turn, as follows. \n Valid Field \n If the valid field of the consumer’s policy is not care-\nfully specified, the provider may become confused \nupon expiry if there is not another consumer policy that \nbecomes the new policy. In this state of confusion the \nprovider could inadvertently disclose the consumer’s \nprivate information to a party that the consumer does not \nwant to receive the information. \n Collector Field \n Specification of who is to collect the consumer’s pri-\nvate information needs to consider what happens if the \ncollector is unavailable to receive the information. For \nexample, consider the privacy policies of Figure 29.7 . \nThe policies are not compatible, since Alice will reveal \nher medical condition only to Dr. Smith, whereas the \nprovider would like any doctor or nurse on staff to take \nthe information. Suppose the provider upgrades its pol-\nicy to satisfy Alice by allowing only Dr. Smith to receive \ninformation on Alice’s condition. Then an unexpected \noutcome is that Alice cannot receive help from Nursing \nOnline because Dr. 
Smith is not available (he might have \nbeen seriously injured in an accident), even though the \npolicies would match and the service could theoretically \nproceed. There are various ways to solve this particular \nPolicy Use: Medical Help\nOwner: Nursing Online\nValid: unlimited\nPolicy Use: Medical Help\nOwner: Alice Consumer\nValid: December 2009\nCollector: Nursing Online\nWhat: name, address, tel\nPurposes: contact\nRetention Time: unlimited\nDisclose-To: pharmacy\nCollector: Nursing Online\nWhat: medical condition\nPurposes: treatment\nRetention Time: 1 year\nDisclose-To: pharmacy\nCollector: any\nWhat: name, address, tel\nPurposes: contact\nRetention Time: unlimited\nDisclose-To: pharmacy\nCollector: Dr. A. Smith\nWhat: medical condition\nPurposes: treatment\nRetention Time: unlimited\nDisclose-To: pharmacy\n FIGURE 29.7 Example of medical help provider (left) and consumer \nprivacy policies (right). \n 14 J. K. Kumekawa, “ Health information privacy protection: crisis or \ncommon sense? ” , retrieved Sept. 7, 2003, from www.nursingworld.\norg/ojin/topic16/tpc16_2.htm . \n" }, { "page_number": 529, "text": "PART | IV Privacy and Access Management\n496\nto fill her prescription. Suppose the company that Alice \nworks for is a pharmaceutical supplier and needs to know \ncontact information of patients in the area where Alice \nlives so that the company can directly advertise to them \nabout new drugs effective for Alice’s condition. Suppose \nfurther that the pharmacy with which Nursing Online \nshared Alice’s condition is a consumer of the pharmaceu-\ntical supplier and the pharmacy’s privacy policy does not \nrestrict the sharing of patient information that it receives \nsecondhand. Then the pharmaceutical supplier, Alice’s \nemployer, can learn of her health condition from the \npharmacy, and Alice could lose her job — an unexpected \noutcome with serious consequences. A possible solution \nto this situation is for Alice to specify pharmacy, no fur-\nther for disclose-to . Then to comply with Alice’s policy, \nNursing Online, as a consumer of the pharmacy, in its pri-\nvacy policy with the pharmacy would specify none for the \n disclose-to corresponding to Alice’s condition, thus pre-\nventing Alice’s employer from learning of her condition \nand so preserving her privacy. \n As another example, suppose Alice, as a consumer, \nuses graphics services from company A and company B. \nHer privacy policy with these companies stipulates that \nthe rates she pays them is private and not to be disclosed \nto any other party. Suppose she pays company A a higher \nrate than company B. Now suppose companies A and B \nare both consumers of company C, which provides data \non rates paid for graphics services. To use company C’s \nservices, companies A and B must provide company C \nwith deidentified information regarding rates they are \npaid. This does not violate the privacy policies of con-\nsumers of companies A and B, because the information \nis deidentified. However, company B now learns of the \nhigher rate paid company A and seeks a higher rate from \nAlice. There does not appear to be any solution to this \nsituation, since Alice has already specified disclose-to as \n none . This example shows that there can be unexpected \noutcomes that may not be preventable. \n We have presented a number of unexpected outcomes \narising from the way the policy match was obtained and \nhow the content of the policy was specified. 
Our outcomes \nare all negative ones because they are the ones we need \nto be concerned about. There are also, of course, positive \nunexpected outcomes, but they are outside the scope of \nthis chapter. \n 5. PREVENTING UNEXPECTED NEGATIVE \nOUTCOMES \n The problem at hand is how to detect and prevent the \nunexpected outcomes that are negative or dangerous. \nSince all unexpected outcomes derive from the personal \nprivacy policy (at least in this work), it is necessary to \nensure “ well-formed ” policies that can avoid unexpected \nnegative outcomes. Further, if a non-well-formed policy \nmatches the first time and leads to negative outcomes, it \nis too late to do anything about it. Based on the discus-\nsion of the preceding section, let’s define our terms. \n Definition 1 \n An unexpected negative outcome is an outcome of the \nuse of privacy policies such that (1) the outcome is unex-\npected by both the provider and the consumer, and (2) \nthe outcome leads to either the provider or the consumer \nor both experiencing some loss, which could be private \ninformation, money, time, convenience, job, and so on, \neven losses that are safety and health related. \n Definition 2 \n A well-formed (WF) privacy policy (for either consumer \nor provider) is one that does not lead to unexpected \nnegative outcomes. A near well-formed (NWF) privacy \npolicy is one in which the attributes valid , collector , \n retention time , and disclose-to have each been consid-\nered against all known misspecifications that can lead to \nunexpected negative outcomes. \n In Definition 2, the misspecifications can be accu-\nmulated as a result of experience (e.g., trial and error) \nor by scenario exploration (as earlier). We have already \npresented a number of them in the preceding section. An \nNWF privacy policy is the best that we can achieve at this \ntime. Clearly, such a policy does not guarantee that unex-\npected negative outcomes will not occur; it just reduces \nthe probability of an unexpected negative outcome. \n Rules for Specifying Near Well-Formed \nPrivacy Policies \n Let’s consider once more the content of a personal pri-\nvacy policy by looking at the header and the privacy \nrules. \n The header ( Figure 29.1 ) consists of policy use , owner , \nand valid. Policy use and owner serve only to identify the \npolicy and, assuming they are accurately specified, they \nare unlikely to lead to unexpected negative outcomes. \nThat leaves valid . As discussed, valid must be specified \nso that it is never the case that the provider is in posses-\nsion of the consumer’s private information without a \ncorresponding valid consumer policy (i.e., with the pol-\nicy expired). Another way to look at this is that it must \n" }, { "page_number": 530, "text": "Chapter | 29 Personal Privacy Policies\n497\nbe true that the provider is no longer in possession of the \nconsumer’s information at the point of policy expiration. \nHence we can construct a rule for specifying valid . \n Rule for Specifying Valid \n The time period specified for valid must be at least as \nlong as the longest retention time in the privacy policy. \nThis rule ensures that if the provider is in possession of \nthe consumer’s private information, there is always a \ncorresponding consumer privacy policy that governs the \ninformation, which is what is needed to avoid the unex-\npected outcomes from an improperly specified valid . \n Let’s now consider the content of a privacy rule. 
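Before doing so, a minimal sketch of the valid rule just stated may be helpful. The duration handling below ("unlimited", "N years", "N months") is a simplifying convention of ours; a calendar-date valid field would first have to be converted into a period measured from the time of collection.

```python
import math

# Sketch of the "Rule for Specifying Valid": the valid period must be at
# least as long as the longest retention time appearing in the policy.
# The duration format handled here is deliberately simplified.

def months(duration: str) -> float:
    duration = duration.strip().lower()
    if duration == "unlimited":
        return math.inf
    amount, unit = duration.split()
    return float(amount) * (12 if unit.startswith("year") else 1)

def valid_is_well_specified(valid: str, retention_times) -> bool:
    """True if 'valid' covers every retention time in the policy's rules."""
    return months(valid) >= max(months(r) for r in retention_times)

# Alice's e-learning policy: valid "unlimited", retention times up to 2 years.
print(valid_is_well_specified("unlimited", ["unlimited", "2 years"]))   # True
print(valid_is_well_specified("1 year", ["2 years"]))                   # False
```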
The \nprivacy rule consists of the attributes collector , what , \n purposes , retention time , and disclose-to ( Figure 29.1 ). \n What and purposes serve only to identify the informa-\ntion and the purposes for which the information will be \nput to use. Assuming they are accurately specified, they \nare unlikely to lead to unexpected negative outcomes. \nThat leaves collector , retention-time , and disclose-to , \nwhich we discussed. Based on this discussion, we can \nformulate specification rules for these attributes. \n Rule for Specifying Collector \n When specifying an individual for collector , the conse-\nquences of the unavailability of the individual to receive \nthe information must be considered. If the consequences \ndo not lead to unexpected negative outcomes (as far as \ncan be determined), proceed to specify the individual. \nOtherwise, or if there is doubt, specify the name of the \nprovider (meaning anyone in the provider’s organization). \n Rule for Specifying Retention Time \n When specifying retention time , the consequences of the \nexpiration of the retention time (provider destroys corre-\nsponding information) must be considered. If the conse-\nquences do not lead to unexpected negative outcomes (as \nfar as can be determined), proceed to specify the desired \ntime. Otherwise, or if there is doubt, specify the length of \ntime the service will be used. \n Rule for Specifying Disclose-To \n When specifying disclose-to , the consequences of succes-\nsive propagation of your information starting with the \nfirst party mentioned in the disclose-to must be consid-\nered. If the consequences do not lead to unexpected neg-\native outcomes (as far as can be determined), proceed \nwith the specification of the disclose-to party or parties. \nOtherwise, or if there is doubt, specify none or name of \nreceiving party, no further. \n These rules address the problems discussed in the \nsection “ Outcomes from the Content of the Matching \nPolicy ” that lead to unexpected negative outcomes. \nExcept for valid , in each case we require the consumer \nor provider to consider the consequences of the intended \nspecification, and propose specification alternatives, \nwhere the consequences lead to unexpected negative \noutcomes or there is doubt. By definition, application of \nthese rules to the specification of a privacy policy will \nresult in a near well-formed policy. Undoubtedly, math-\nematical modeling of the processes at play together with \nstate exploration tools can help to determine whether \nor not a particular specification will lead to unexpected \nnegative outcomes. Such modeling and use of tools is \npart of future research. \n Approach for Obtaining Near Well-Formed \nPrivacy Policies \n We propose that these rules for obtaining near well-formed \npolicies be incorporated during initial policy specification. \nThis is best achieved using an automatic or semiautomatic \nmethod for specifying privacy policies, such as the meth-\nods in the section “ Semiautomated Derivation of Personal \nPrivacy Policies. ” The rule for valid is easy to implement. \nImplementation of the remaining rules may employ a \ncombination of artificial intelligence and human-computer \ninterface techniques to assist the human specifier to rea-\nson out the consequences. Alternatively, the rules may be \napplied during manual policy specification in conjunction \nwith a tool for determining possible consequences of a \nparticular specification. \n 6. 
THE PRIVACY MANAGEMENT MODEL

In this part of the chapter, we explain how our Privacy Management Model works to protect a consumer's privacy through the use of personal privacy policies.

How Privacy Policies Are Used

An e-service provider has a privacy policy stating what PII it requires from a consumer and how the information will be used. A consumer has a privacy policy stating what PII the consumer is willing to share, with whom it may be shared, and under what circumstances it may be shared. An entity that is both a provider and a consumer has separate privacy policies for these two roles. A privacy policy is attached to a software agent, one that acts for the consumer and another that acts for the provider. Prior to the activation of a particular service, the agent for the consumer and the agent for the provider undergo a privacy policy exchange, in which the policies are examined for compatibility (see Figure 29.8). The service is only activated if the policies are compatible, in which case we say that there is a "match" between the two policies.

FIGURE 29.8 Exchange of privacy policies (PP) between consumer agent (CA) and provider agent (PA).

The Matching of Privacy Policies

We define here the meaning of a match between a consumer personal privacy policy and a service provider privacy policy. Such matching is the comparison of corresponding rules that have the same purposes and similar what. Let I represent the set of rules in a consumer privacy policy and let J represent the set of rules in a service provider privacy policy. Let p_i,c, i ∈ I, and p_j,p, j ∈ J, represent corresponding WPRs of the consumer policy and the provider policy, respectively, that have the same purposes and similar what. We want to ascribe a function pr that returns a numerical level of privacy from the consumer's point of view when applied to p_i,c and p_j,p. A high pr means a high degree of privacy; a low pr means a low degree of privacy, from the consumer's point of view. It is difficult to define pr universally because privacy is a subjective notion, and one consumer's view of degree of privacy may be different from another consumer's view. However, the privacy rules from the policy provider have corresponding privacy slider values and they are just what we need. We simply look up p_i,c and p_j,p in the policy provider database and assign the corresponding privacy slider values to them. This look-up and assignment can be done automatically.

Definition 3 (Matching Collector and Disclose-To)

The collector parameter from a consumer policy matches the collector parameter from a provider policy if and only if they are textually the same or the collector parameter from the consumer policy has the value any. The matching of disclose-to parameters is defined in the same way.

Definition 4 (Matching Rules)

There is a match between a rule in a consumer privacy policy and the corresponding (same purposes and similar what) rule in the provider policy if and only if

    pr(p_i,c) ≤ pr(p_j,p),  i ∈ I, j ∈ J,

and the corresponding collector and disclose-to parameters match.
If there is no corresponding rule in the pro-\nvider policy, we say that the consumer rule corresponds \nto the null rule in the provider policy (called consumer \nn- correspondence ), in which case the rules automati-\ncally match. If there is no corresponding rule in the con-\nsumer policy, we say that the provider rule corresponds \nto the null rule in the consumer policy (called provider \nn- correspondence ), in which case the rules automatically \nmismatch. \n In Definition 4, a match means that the level of privacy \nin the provider’s rule is greater than the level of pri-\nvacy in the consumer’s rule (the provider is demanding \nless information than the consumer is willing to offer). \nSimilarly, a consumer rule automatically matches a \nprovider null rule because it means that the provider is \nnot even asking for the information represented by the \nconsumer’s rule (ultimate rule privacy). A provider rule \nautomatically mismatches a consumer null rule because \nit means the provider is asking for information the con-\nsumer is not willing to share, whatever the conditions \n(ultimate rule lack of privacy). \n Definition 5 (Matching Privacy Policies) \n A consumer privacy policy matches a service provider \nprivacy policy if and only if all corresponding (same \n purposes and similar what ) rules match and there are no \ncases of provider n -correspondence, although there may \nbe cases of consumer n -correspondence. \n Definition 6 (Upgrade and Downgrade of \nRules and Policies) \n A privacy rule or policy is considered upgraded if \nthe new version represents more privacy than the prior \n" }, { "page_number": 532, "text": "Chapter | 29 Personal Privacy Policies\n499\nwith each other to try to achieve a match. 15 , 16 Consider \nthe following negotiation to produce a privacy policy \nfor an employee taking a course from an e-learning pro-\nvider. Suppose the item for negotiation is the privacy of \nexamination results. The employer would like to know \nhow well the employee performed on a course in order \nto assign him appropriate tasks at work. Moreover, man-\nagement (Bob, David and Suzanne) would like to share \nthe results with management of other divisions, in case \nthey could use the person’s newly acquired skills. The \nnegotiation dialogue can be expressed in terms of offers, \ncounter-offers, and choices, as follows in Table 29.2 \n(read from left to right and down). \n As shown in this example, negotiation is a pro cess \nbetween two parties, wherein each party presents the \nother with offers and counter-offers until either an \nagreement is reached or no agreement is possible. Each \nparty chooses to make a particular offer based on the \nvalue that the choice represents to that party. Each party \nchooses a particular offer because that offer represents \nthe maximum value among the alternatives. \n Each party in a negotiation shares a list of items to \nbe negotiated. For each party and each item to be nego-\ntiated, there is a set of alternative positions with corre-\nsponding values. This set of alternatives is explored as \nnew alternatives are considered at each step of the nego-\ntiation. Similarly, the values can change (or become \napparent), based on these new alternatives and the other \nparty’s last offer. \n TABLE 29.2 Example Negotiation Dialogue Showing \nOffers and Counteroffers \n Provider \n Employee \n Okay for your exam results to \nbe seen by your management? \n Yes, but only David and \nSuzanne can see them. 
\n Okay if only David and Bob \nsee them? \n No, only David and \nSuzanne can see them. \n Can management from \nDivisions B and C also see \nyour exam results? \n Okay for management \nfrom Division C but not \nDivision B. \n How about letting Divisions \nC and D see your results? \n That is acceptable. \nversion. A privacy rule or policy is considered down-\ngraded if the new version represents less privacy than \nthe prior version. \n In comparing policies, it is not always necessary to \ncarry out the comparison of each and every privacy rule \nas required by Definitions 4 and 5. We mention three \nshortcuts here. \n Shortcut 1 \n Both policies are the same except one policy has fewer \nrules than the other policy. According to Definitions \n4 and 5, there is a match if the policy with fewer rules \nbelongs to the provider. There is a mismatch if this pol-\nicy belongs to the consumer. \n Shortcut 2 \n Both policies are the same except one policy has one or \nmore rules with less retention time than the other pol-\nicy. According to Definitions 4 and 5, there is a match \nif the policy with one or more rules with less retention \ntime belongs to the provider. There is a mismatch if this \npolicy belongs to the consumer. \n Shortcut 3 \n Both policies are the same except one policy has one \nor more rules that clearly represent higher levels of pri-\nvacy than the corresponding rules in the other policy. \nAccording to Definitions 4 and 5, there is a match if the \npolicy with rules representing higher levels of privacy \nbelongs to the provider. There is a mismatch if this pol-\nicy belongs to the consumer. \n Thus, in our example policies ( Figures 29.1 and 29.2 ), \nthere is a match for e-learning according to Shortcut 2, \nsince the policy with lower retention time belongs to the \nprovider. There is a mismatch for bookseller according \nto Shortcut 1, since the policy with fewer rules belongs \nto the consumer. There is a mismatch for medical help \naccording to Shortcut 3, since the policy with the rule \nrepresenting a higher level of privacy is the one speci-\nfying a particular collector (Dr. Smith), and this policy \nbelongs to the consumer. \n Personal Privacy Policy Negotiation \n Where there is no match between a consumer’s per-\nsonal privacy policy and the privacy policy of the serv-\nice provider, the consumer and provider may negotiate \n 16 G. Yee, and L. Korba, “ The negotiation of privacy policies in dis-\ntance education, ” Proceedings, 14th IRMA International Conference , \nPhiladelphia, May 2003. \n 15 G. Yee, and L. Korba, “ Bilateral e-services negotiation under \nuncertainty, ” Proceedings, The 2003 International Symposium on \nApplications and the Internet (SAINT 2003) , Orlando, Jan 2003. \n" }, { "page_number": 533, "text": "PART | IV Privacy and Access Management\n500\n Let R be the set of items r i to be negotiated, R \u0003 { r 1 , \nr 2 , … ,r n } . Let A 1 ,r,k be the set of alternatives for party \n1 and negotiation item r at step k, k \u0003 0,1,2, … , in the \nnegotiation. A 1 ,r,0 is party 1’s possible opening positions. \nLet O 1 ,r,k be the alternative a \u0002 A 1 ,r,k that party 1 chooses \nto offer party 2 at step k. O 1 ,r,0 is party 1’s chosen open-\ning position. For example, for the first negotiation, the \nprovider’s opening position is exam results can be seen \nby management . 
Then for each alternative a ∈ A_1,r,k, V_k(a) is the value function of alternative a for party 1 at step k, k > 0, and

    V_k(a) = f(I, O_1,r,k−1, O_2,r,k−1, …)

where I is the common interest or purpose of the negotiation (e.g., negotiating the privacy policy for "Psychology 101"), O_1,r,k−1 is the offer of party 1 at step k − 1, O_2,r,k−1 is the offer of party 2 at step k − 1, plus other factors which could include available alternatives, culture, sex, age, income level, and so on. These other factors are not required here, but their existence is without doubt since how an individual derives value can be very complex. Let a_m ∈ A_1,r,k such that V_k(a_m) = max{V_k(a), a ∈ A_1,r,k}. Then at step k, k > 0, in the negotiation process, party 1 makes party 2 an offer O_1,r,k where

    O_1,r,k = a_m          if V_k(a_m) > V_k(O_2,r,k−1),    (1)
            = O_2,r,k−1    if V_k(a_m) ≤ V_k(O_2,r,k−1).    (2)

Equation 1 represents the case where party 1 makes a counter-offer to party 2's offer. Equation 2 represents the case where party 1 accepts party 2's offer and agreement is reached! A similar development can be done for party 2. Thus, there is a negotiation tree corresponding to each item r to be negotiated, with two main branches extending from r at the root (see Figure 29.9). The two main branches correspond to the two negotiating parties. Each main branch has leaves representing the alternatives at each step. At each step, including the opening positions at step 0, each party's offer is visible to the other for comparison. As negotiation proceeds, each party does a traversal of its corresponding main branch. If the negotiation is successful, the traversals converge at the successful alternative (one of the parties adopts the other's offer as his own, Equation 2) and the negotiation tree is said to be complete. Each party may choose to terminate the negotiation if the party feels no progress is being made; the negotiation tree is then said to be incomplete.

FIGURE 29.9 Negotiation tree for a policy negotiation.

In Figure 29.9, the influences arrows show that a particular alternative offered by the other party at step k will influence the alternatives of the first party at step k + 1. Figure 29.10 illustrates the negotiation tree using the above privacy of examination results negotiation.

Personal privacy policy negotiation may be used to avoid negative unexpected outcomes, even if the privacy policies involved are near well formed, since near well-formed policies may still not match. At a policies mismatch, the consumer or the provider upgrades or downgrades the individual policy to try to get a match.
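Returning briefly to the formal model, the offer rule of Equations 1 and 2 is straightforward to express in code. The sketch below is a minimal illustration under stated assumptions: the value function V_k is supplied by the caller (here a toy lookup table), and alternatives and offers are plain strings; a real agent would derive values from the common interest, the previous offers, and the other factors mentioned above.

```python
# Sketch of one negotiation step for party 1 (Equations 1 and 2).
# `value` stands in for the value function V_k; `alternatives` is A_1,r,k
# and `their_last_offer` is O_2,r,k-1. All names here are illustrative.

def next_offer(alternatives, their_last_offer, value):
    """Return (offer, accepted): party 1's offer at step k, and whether it
    simply adopts party 2's last offer (agreement reached, Equation 2)."""
    best = max(alternatives, key=value)            # a_m, the maximum-value alternative
    if value(best) > value(their_last_offer):      # Equation 1: make a counter-offer
        return best, False
    return their_last_offer, True                  # Equation 2: accept, agreement reached

# Toy values for the exam-results negotiation (purely illustrative):
values = {
    "seen by management": 1,
    "seen by David and Bob": 2,
    "seen only by David and Suzanne": 5,
}
offer, accepted = next_offer(
    alternatives=["seen by management", "seen only by David and Suzanne"],
    their_last_offer="seen by David and Bob",
    value=lambda alt: values[alt],
)
print(offer, accepted)  # counter-offers the higher-valued alternative
```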
In upgrading or downgrading, each party could inadvertently introduce new values into the policy, or remove values from the policy, that result in negative unexpected outcomes or loss of NWF-ness. We propose the use of privacy policy negotiation between consumer and provider agents to guide the policy upgrading or downgrading to avoid undoing the values already put in place for NWF-ness in the initial specification. Alternatively, negotiation may expose a needed application of the above rules for a policy to be near well formed. This is also a consequences exploration, but here both provider and consumer do the exploration while negotiating in real time.

For example, in the All Books Online example of the "Outcomes From How the Matching Policy Was Obtained" section, where Alice does not need to provide her credit card, negotiation between Alice and All Books Online could have identified the consequence that Alice would need to wait longer for her order and direct her to another, more viable alternative, such as agreeing to provide her credit card. Similarly, in the example (same section) where the provider has to introduce more costly safeguards to protect the consumer's added highly sensitive information, negotiation could have uncovered the high sensitivity of the new information and possibly result in a different, less costly alternative being chosen (e.g., the new information may not be needed after all).

Table 29.3 illustrates how negotiation can detect and prevent the unexpected negative outcome of Alice having no access to medical service when it is needed (read from left to right and down). The result of this negotiation is that Nursing Online will be able to provide Alice with nursing service whenever Alice requires it, once she makes the change in her privacy policy reflecting the results of negotiation. If this negotiation had failed (Alice did not agree), Alice will at least be alerted to the possibility of a bad outcome, and may take other measures to avoid it. This example shows how negotiation

FIGURE 29.10 Negotiation tree for the first part of our negotiation.

TABLE 29.3 Preventing Unexpected Negative Outcomes: Nursing Online

Nursing Online (Provider)
Alice (Consumer)
Okay if a nurse on our staff is told your medical condition?
No, only Dr. Alexander Smith can be told my medical condition.
We cannot provide you with any nursing service unless we know your medical condition.
Okay, I'll see Dr. Smith instead.
You are putting yourself at risk. What if you need emergency medical help for your condition and Dr. Smith is not available?
You are right. Do you have any doctors on staff?
Yes, we always have doctors on call. Okay to allow them to know your medical condition?
That is acceptable.
I will modify my privacy policy to share my \nmedical condition with your doctors on call. \n" }, { "page_number": 535, "text": "PART | IV Privacy and Access Management\n502\nmay persuade the consumer to resolve a mismatch by \napplying the above rule for specifying collector . \n Table 29.4 gives another example of negotiation at \nwork using Alice’s bookseller policy from Figure 29.1 . \nThis policy mismatched because Alice did not want \nto provide her credit-card information. At the end of \nthe negotiation, Alice modifies her privacy policy and \nreceives service from All Books Online. \n Personal Privacy Policy Compliance \n As we have seen, the above Privacy Principles require \na provider to be accountable for complying with the \nPrivacy Principles (CSAPP.1) and the privacy wishes \nof the consumer. In practice, a provider is required to \nappoint someone in its organization to be accountable \nfor its compliance to Privacy Principles (CSAPP.1). This \nperson is usually called the chief privacy officer (CPO). \nAn important responsibility of the CPO is to put in place \na procedure for receiving and responding to complaints \nor inquiries about the privacy policy and the practice of \nhandling personal information. This procedure should \nbe easily accessible and simple to use. The procedure \nshould also refer to the dispute resolution process that \nthe organization has adopted. Other responsibilities of \nthe CPO include auditing the current privacy practices of \nthe organization, formulating the organization’s privacy \npolicy, and implementing and maintaining this policy. \n We propose that the CPO’s duties be extended to include \nauditing the provider’s compliance to the consumer’s \npersonal privacy policy. \n Further discussion of personal privacy policy compli-\nance is beyond the scope of this chapter. We mention in \npassing that an alternative method of ensuring compli-\nance is the use of a Privacy Policy Compliance System \n(PPCS) as presented in. 17 \n TABLE 29.4 Preventing Unexpected Negative Outcomes: All Books Online \n All Books Online (Provider) \n Alice (Consumer) \n Okay if you provide your credit-card information? \n No, I do not want to risk my credit-card number \ngetting stolen. \n If you do not provide your credit-card information, you will \nneed to send us a certified check before we can ship your \norder. This will delay your order for up to three weeks. \n I still don’t want to risk my credit-card number \ngetting stolen. \n Your credit-card information will be encrypted during \ntransmission and we keep your information in secure storage \nonce we receive it. You need not worry. \n Okay, I will modify my privacy policy to share my \ncredit-card information. \n 7. DISCUSSION AND RELATED WORK \n We have presented methods for the semiautomatic deri-\nvation of personal privacy policies. Given our Privacy \nManagement Model, there needs to be a way for con-\nsumers to derive their personal privacy policies easily or \nconsumers will simply not use the approach. The only \n “ alternative ” that we can see to semiautomated deriva-\ntion is for the consumer to create his/her personal privacy \npolicy manually. This can be done by a knowledgeable \nand technically inclined consumer, but would require \na substantially larger effort (and correspondingly less \nlikely to be used) than the semiautomated approaches. In \naddition to ease of use, our approaches ensure consist-\nency of privacy rules by community consensus. 
This has \nthe added benefit of facilitating provider compliance, \nsince it is undoubtedly easier for a provider to comply to \nprivacy rules that reflect the community consensus than \nrules that only reflect the feelings of a few. \n We believe our approaches for the semiautomatic \nderivation of personal privacy policies are quite fea-\nsible, even taking account of the possible weaknesses \ndescribed in this chapter. In the surveys approach, the \nconsumer has merely to select the privacy level when \nprompted for the rules from the provider’s policy. In \nfact, the user is already familiar with the use of a privacy \nslider to set the privacy level for Internet browsers (e.g., \nMicrosoft Internet Explorer, under Internet Options). \nWe have implemented the surveys approach in a pro-\ntotype that we created for negotiating privacy policies. \nWe plan to conduct experiments with this implementa-\ntion, using volunteers to confirm the usability of the sur-\nveys approach. In the retrieval approach, the consumer \nis asked to do a little bit more — compare and select \nthe rules received — but this should be no more com-\nplex than today’s ecommerce transactions or the use of \na word-processing program. Likewise, adapting a per-\nsonal policy to a provider policy should be of the same \nlevel of complexity. Like anything else, the widespread \n 17 G. Yee, and L. Korba, “ Privacy policy compliance for web ser-\nvices, ” Proceedings, 2004 IEEE International Conference on Web \nServices (ICWS 2004) , San Diego, July 2004. \n" }, { "page_number": 536, "text": "Chapter | 29 Personal Privacy Policies\n503\nuse of these methods will take a little time to achieve, \nbut people will come around to using them, just as they \nare using ecommerce; it is merely a matter of education \nand experience. Further, consumers are becoming more \nand more aware of their privacy rights as diverse juris-\ndictions enact privacy legislation to protect consumer \nprivacy. In Canada, the Personal Information Protection \nand Electronic Documents Act has been in effect across \nall retail outlets since January 1, 2004, and consumers \nare reminded of their privacy rights every time they visit \ntheir optician, dentist, or other business requiring their \nprivate information. \n We now discuss some possible weaknesses in our \napproaches. The surveys approach requires trust in the \npolicy provider. Effectively, the policy provider becomes \na trusted third party. Clearly, the notion of a trusted third \nparty as a personal policy provider may be controversial to \nsome. Any error made by the policy provider could affect \nPII for many hundreds or thousands of people. A certifica-\ntion process for the policy provider is probably required. \nFor instance, in Canada, the offices for the provincial and \nfederal privacy commissioners could be this certification \nbody. They could also be policy providers themselves. \nHaving privacy commissioners ’ offices take on the role of \npolicy providers seems to be a natural fit, given their man-\ndate as privacy watchdogs for the consumer. However, the \nprocess would have a cost. Costs could be recovered via \nmicro-charges to the consumer, or the service provider \nfor the policies provided. Aggregated information from \nthe PII surveys could be sold to service providers who \ncould use them to formulate privacy policies that are more \nacceptable to consumers. \n There is a challenge in the retrieval approach regard-\ning how to carry it out in a timely fashion. 
Efficient peer-\nto-peer search techniques will collect the rules in a timely \nmanner, but the amount of information collected by the \nrequester may be quite large. Furthermore, since the vari-\nous rules collected will probably differ from one another, \nthe requestor will have to compare them to determine \nwhich ones to select. Quick comparisons to reduce the \namount of data collected could be done through a peer-to-\npeer rules search that employs a rules hash array contain-\ning hashed values for different portions of the rule. \n In the section on policy compliance, a weakness of \nhaving the CPO be responsible for protecting consumer \nprivacy is that the CPO belongs to the provider’s organi-\nzation. Will he be truly diligent about his task to protect \nthe consumer’s privacy? To get around this question, the \nCPO can use secure logs to answer any challenges doubt-\ning his organization’s compliance. Secure logs automati-\ncally record all the organization’s use of the consumer’s \nprivate information, both during and after the data collec-\ntion. Cryptographic techniques 18 provide assurance that \nany modification of the secure log is detectable. In addi-\ntion, database technology such as Oracle9i can tag the \ndata with its privacy policy to evaluate the policy every \ntime data is accessed. 19 The system can be set up so that \nany policy violation can trigger a warning to the CPO. \n An objection could be raised that our approaches are \nnot general, having been based on privacy policy con-\ntent that was derived from Canadian privacy legislation. \nWe have several answers for this objection. First, as we \npointed out, Canadian privacy legislation is representa-\ntive of privacy legislation in many countries. Therefore \nthe content of privacy policies derived from Canadian pri-\nvacy legislation is applicable in many countries. Second, \nas we also pointed out, Canadian privacy legislation \nis also representative of the Fair Information Practices \nstandards, which have universal applicability. Third, all \nprivacy policies regardless of their content will have to \nconverge on the content that we presented, since such \ncontent is required by legislation. Finally, our approaches \ncan be customized to any form of privacy policy, regard-\nless of the content. We next discuss related work. \n The use of privacy policies as a means of safeguard-\ning privacy for ebusiness is relatively new. There is \nrelatively little research on the use of personal privacy \npolicies. For these reasons, we have not been able to find \nmany authors who have written on the derivation of per-\nsonal privacy policies. Dreyer and Olivier 20 worked on \ngenerating and analyzing privacy policies for computer \nsystems, to regulate private information flows within the \nsystem, rather than generating personal privacy policies. \nBrodie et al. 21 describe a privacy management work-\nbench for use by organizations. Within this workbench \nare tools to allow organizational users tasked with the \nresponsibility to author organizational privacy policies. \nThe tools allow these users to use natural language for \npolicy creation and to visualize the results to ensure that \nthey accomplished their intended goals. These authors \n 20 L. C. J. Dreyer, and M. S. Olivier, “ A workbench for privacy poli-\ncies, ” Proceedings, The Twenty-Second Annual International Computer \nSoftware and Applications Conference (COMPSAC ’ 98) , pp. 350-355, \nAugust 19-21, 1998. \n 19 G. Yee, K. El-Khatib, L. Korba, A. 
Patrick, R. Song, and Y. Xu, \n “ Privacy and trust in e-government, ” chapter in Electronic Government \nStrategies and Implementation , Idea Group Inc., 2004. \n 18 B. Schneier, and J. Kelsey, “ Secure audit logs to support computer \nforensics, ” ACM Transactions on Information and System Security , \n2(2), pp. 159-176, ACM, May 1999. \n 21 C. Brodie, C.-M. Karat, J. Karat, and J. Feng, “ Usable security \nand privacy: a case study of developing privacy management tools, ” \n Symposium On Usable Privacy and Security (SOUPS) 2005 , July 6-8, \nPittsburgh, 2005. \n" }, { "page_number": 537, "text": "PART | IV Privacy and Access Management\n504\ndo not address the creation of personal privacy policies. \nIrwin and Yu 22 present an approach for dynamically ask-\ning the user suitable questions to elicit the user’s privacy \npreferences. They present a framework for determining \nwhich questions are suitable. However, there is no use of \ncommunity consensus which means the resulting poli-\ncies could be highly subjective. This means that provid-\ners would find it more difficult to use such policies to \nhelp them formulate provider privacy policies that would \nbe acceptable to the majority of consumers. \n Other work that uses privacy policies is primarily rep-\nresented by the W3C Platform for Privacy Preferences. 23 \nThis provides a Web site operator with a way of express-\ning the site’s privacy policy using a P3P standard format \nand to have that policy automatically retrieved and inter-\npreted by user agents (e.g., browser plug-in). The user can \nexpress rudimentary privacy preferences and have those \npreferences automatically checked against the web site’s \npolicy before proceeding. However, P3P cannot be used to \nfulfill the requirements of privacy legislation, has no com-\npliance mechanism, and represents a “ take it or leave it ” \nview to privacy — if you don’t like the privacy policy of the \nWeb site, you leave it. There is no provision for negotia-\ntion. In addition, Jensen and Potts 24 evaluated the usability \nof 64 online privacy policies and the practice of posting \nthem and determined that significant changes needed to \nbe made to the practice in order to meet usability and \nregulatory requirements. Finally, Stufflebeam et al. 25 pre-\nsented a case study in which they used P3P and Enterprise \nPrivacy Authorization Language (EPAL) to formulate two \nhealthcare Web site privacy policies and described the \nshortcomings they found with using these languages. \n Negative outcomes arising from privacy poli-\ncies may be regarded as a feature interaction problem, \nwhere policies “ interact ” and produce unexpected out-\ncomes. 26 Traditionally, feature interactions have been \nconsidered mainly in the telephony or communication \nservices domains. 27 More recent papers, however, have \nfocused on other domains such as the Internet, multi-\nmedia systems, mobile systems, 28 and Internet personal \nappliances. 29 In this work, we have chosen not to frame \nnegative outcomes from privacy policies as a feature \ninteraction problem. In so doing, we have obtained new \ninsights and results. Apart from feature interactions, \nother possible related work has to do with resolving con-\nflicts in access control and mobile computing (e.g. 30 , 31 ). 
\nHowever, it is believed that these methods and simi-\nlar methods in other domains will not work for privacy \ndue to the subjective nature of privacy, that is, personal \ninvolvement to consider each privacy rule is necessary. \n Most negotiation research is on negotiation via auton-\nomous software agents, focusing on methods or models \nfor agent negotiation 32 and can incorporate techniques \nfrom other scientific areas such as game theory (e.g. 33 ), \nfuzzy logic (e.g. 34 ) and genetic algorithms (e.g. 35 ). The \nresearch also extends to autonomous agent negotiation \nfor specific application areas, such as ecommerce 36 and \nservice-level agreements for the Internet. 37 Apart from \n 30 T. Jaeger, R. Sailer, and X. Zhang, “ Resolving constraint confl icts ” , \n Proceedings of the Ninth ACM Symposium on Access Control Models \nand Technologies , June 2004. \n 27 D. Keck, and P. Kuehn, “ The feature and service interaction prob-\nlem in telecommunications systems: a survey, ” in IEEE Transactions \non Software Engineering , Vol. 24, No. 10, October 1998. \n 26 G. Yee, and L. Korba, “ Feature interactions in policy driven pri-\nvacy management ” , Proceedings, Seventh International Workshop on \nFeature Interactions in Telecommunications and Software Systems , \nOttawa, Ontario, Canada, June 2003. \n 25 W. Stuffl ebeam, A. Anton, Q. He, and N. Jain, “ Specifying privacy \npolicies with P3P and EPAL: lessons learned, ” Proceedings of the 2004 \nACM Workshop on Privacy in the Electronic Society (WPES 2004) , \nOctober 28, Washington, D.C., 2004. \n 24 Jensen, C., and Potts, C., “ Privacy policies as decision-making \ntools: an evaluation of online privacy notices, ” Proceedings of the 2004 \nConference on Human Factors in Computing Systems (CHI 2004) , \nApril 24-29, Vienna, 2004. \n 23 W3C Platform, “ The platform for privacy preferences, ” retrieved \nSept. 2, 2002, from www.w3.org/P3P/ . \n 22 K. Irwin, and T. Yu, “ Determining user privacy preferences by \nasking the right questions: an automated approach, ” Proceedings of \nthe 2005 ACM Workshop on Privacy in the Electronic Society (WPES \n2005) , November 7, Alexandria, 2005. \n 28 L. Blair, and J. Pang, “ Feature interactions: life beyond traditional \ntelephony, ” Distributed Multimedia Research Group, Computing Dept., \nLancaster University, UK. \n 29 M. Kolberg, E. Magill, D. Marples, and S. Tsang, “ Feature interac-\ntions in services for internet personal appliances, ” Proceedings, IEEE \nInternational Conference on Communications (ICC 2002) , Vol. 4, \npp. 2613-2618, 2002. \n 37 T. Nguyen, N. Boukhatem, Y. Doudane, and G. Pujolle, “ COPS-\nSLS: A service level negotiation protocol for the internet, ” IEEE \nCommunications Magazine , Vol. 40, Issue 5, May 2002. \n 36 M. Chung, and V. Honavar, “ A Negotiation Model in Agent-mediated \nElectronic Commerce, ” Proceedings, International Symposium on \nMultimedia Software Engineering , 2000. \n 35 M. Tu, E. Wolff, and W. Lamersdorf, “ Genetic algorithms for auto-\nmated negotiations: a FSM-based application approach, ” Proceedings, \n 11th International Workshop on Database and Expert Systems \nApplications , 2000. \n 34 R. Lai, and M. Lin, “ Agent negotiation as fuzzy constraint process-\ning, ” Proceedings of the 2002 IEEE International Conference on Fuzzy \nSystems (FUZZ-IEEE’02) , Vol. 2, 2002. \n 33 Y. Murakami, H. Sato, and A. 
Namatame, “ Co-evolution in nego-\ntiation games, ” Proceedings, Fourth International Conference on \nComputational Intelligence and Multimedia Applications , 2001. \n 32 P. Huang, and K. Sycara, “ A Computational model for online agent \nnegotiation ” , Proceedings of the 35 th Annual Hawaii International \nConference on System Sciences , 2002. \n 31 L. Capra, W. Emmerich, and C. Mascolo, “ A micro-economic \napproach to confl ict resolution in mobile computing, ” Proceedings \nof the 10th ACM SIGSOFT Symposium on Foundations of Software \nEngineering , November 2002. \n" }, { "page_number": 538, "text": "Chapter | 29 Personal Privacy Policies\n505\nnegotiation by autonomous software agents, research \nhas also been carried out on support tools for negotia-\ntion (e.g. 38 ), which typically provide support in position \ncommunication, voting, documentation communication, \nand big picture negotiation visualization and navigation. \n 8. CONCLUSIONS AND FUTURE WORK \n The protection of personal privacy is paramount if \ne-services are to be successful. A personal privacy policy \napproach to privacy protection seems best. However, for \nthis approach to work, consumers must be able to derive \ntheir personal privacy policies easily. To describe semi-\nautomated approaches to derive personal privacy policies, \nwe first defined the content of a personal privacy policy \nusing the Canadian Privacy Principles. We then presented \ntwo semiautomated approaches for obtaining the policies: \none based on third-party surveys of consumer percep-\ntions of privacy, the other based on retrieval from a peer \ncommunity. Both approaches reflect the privacy sensitivi-\nties of the community, giving the consumer confidence \nthat his/her privacy preferences are interpreted with the \nbest information available. We then explained how per-\nsonal privacy policies can lead to negative unexpected \noutcomes if not properly specified. We proposed specifi-\ncation rules that can be applied in conjunction with semi-\nautomated policy derivation to result in near well-formed \npolicies that can avoid the negative unexpected outcomes. \nWe closed with our Privacy Management Model, which \nexplains how privacy policies are used and the meaning \nof privacy policy matching. We described policy nego-\ntiation, not only for resolving policies that do not match, \nbut also as an effective means for avoiding negative unex-\npected outcomes. Finally, we suggested how consumers \ncould be assured that providers will comply with per-\nsonal privacy policies. \n We have based our work on our particular formula-\ntion of a privacy policy. An obvious question is whether \nour approaches apply to other formulations of privacy \npolicies. We believe the answer is yes, for the following \nreasons: (1) privacy policy formulations (i.e., contents) \ncannot differ too much from one another since they must \nall conform to privacy legislation and our policy is a \nminimal policy that so conforms, and (2) if necessary, \nwe can fit our approaches to any formulation by apply-\ning the same logic we used in this work. 
\n Possible topics for future work include (1) looking \nat other methods for easily deriving personal privacy \npolicies, (2) conducting experiments with volunteers, \nusing the implementation of the surveys approach in our \nprivacy policy negotiation prototype to confirm usabil-\nity and resolve any scalability/performance issues, (3) \ninvestigating other possible unexpected outcomes from \nthe interaction of privacy policies, (4) designing tools for \noutcomes exploration to identify the seriousness of each \nconsequence, (5) exploring other methods for avoid-\ning or mitigating negative unexpected outcomes from \nthe interaction of privacy policies, and (6) investigating \nways to facilitate personal privacy policy negotiation, \nsuch as improving trust, usability and response times. \n \n 38 D. Druckman, R. Harris, and B. Ramberg, “ Artifi cial computer-\nassisted international negotiation: a tool for research and practice, ” \nProceedings of the 35th Annual Hawaii International Conference on \nSystem Sciences , 2002. \n" }, { "page_number": 539, "text": "This page intentionally left blank\n" }, { "page_number": 540, "text": "507\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Virtual Private Networks \n Jim Harmening \n Computer Bits, Inc. \n Joe Wright \n Computer Bits, Inc. \n Chapter 30 \n With the incredible advance of the Internet, it has \nbecome more and more popular to set up virtual private \nnetworks (VPNs) within organizations. VPNs have been \naround for many years and have branched out into more \nand more varieties. (See Figure 30.1 for a high-level \nview of a VPN.) Once only the largest of organiza-\ntions would utilize VPN technology to connect multiple \nnetworks over the Internet “ public networks, ” but now \nVPNs are being used by many small businesses as a way \nto allow remote users access to their business networks \nfrom home or while traveling. \n Consultants have changed their recommendations \nfrom dial-in systems and leased lines to VPNs for several \nreasons. Security concerns were once insurmountable, \nforcing the consultants to set up direct dial-in lines. Not \nthat the public telephone system was much more secure, \nbut it gave the feeling of security and with the right setup, \ndial-in systems approach secure settings. Sometimes \nthey utilized automatic callback options and had their \nown encryption. Now, with advanced security, including \nrandom-number generator logins, a network administra-\ntor is far more likely to allow access to his network via a \nVPN. High-speed Internet access is now the rule instead \nof the exception. Costs have plummeted for the hardware \nand software to make the VPN connection as well. The \nproliferation of vendors, standardization of Internet \nProtocol (IP) networks, and ease of setup all played a \nrole in the increasingly wide use and acceptance of VPN. \n The key to this technology is the ability to route \ncommunications over a public network to allow access \nto office servers, printers, or data warehouses in an inex-\npensive manner. As high-speed Internet has grown and \nbecome prevalent throughout the world, VPNs over \nthe public Internet have become common. Even inex-\npensive hotels are offering free Internet access to their \ncustomers. This is usually done through Wi-Fi connec-\ntions, thus causing some concern for privacy, but the \nconnections are out there. 
Moreover, the iPhone, Treo, \nand other multifunction Web-enabled phones are giv-\ning mobile users access to the Internet via their phones. \nFinally, one of the best ways to access the Internet is via \na wireless USB or PCMCIA card from the major phone \ncompanies. These dedicated modem cards allow users to \nsurf the Internet as long as they are in contact with the \ncell towers of their subscribing company. \n One of the sources of our information on the over-\nview of VPNs is James Yonan’s talk at Linux Fest \nNorthwest in 2004. You can read the entire presentation \nonline at openvpn.net. \n “ Fundamentally, a VPN is a set of tools which allow \nnetworks at different locations to be securely connected, \nusing a public network as the transport layer. ” 1 This \nquote from Yonan’s talk states the basic premise incred-\nibly well. Getting two computers to work together over \nthe Internet is a difficult feat, but making two different \ncomputer networks be securely connected together via \nthe public Internet is pure genius. By connecting differ-\nent locations over the Internet, many companies cut out \nthe cost of dedicated circuits from the phone companies. \nSome companies have saved thousands of dollars by \ngetting rid of their Integrated Services Digital Network \n(ISDN) lines, too. Once thought of as the high-speed \n(128,000 bits per second, or 128kbs) Holy Grail, it is \nnow utilized mainly by videoconferencing systems that \nrequire a direct sustained connection, but the two end-\npoints aren’t usually known too much prior to the con-\nnection requirement. The ISDN lines are often referred \n 1 www.openvpn.net/papers/BLUG-talk/2.html , copyright James Yonan \n2003 \n" }, { "page_number": 541, "text": "PART | IV Privacy and Access Management\n508\nto as glorified fast dial-up connections. Some companies \nutilize multiple ISDN connections to get higher-quality \nvoice or video. \n Not all VPNs had security in the early days. Packets \nof information were transmitted as cleartext and could \nbe easily seen. To keep the network systems secure, the \ninformation must be encrypted. Throughout the past 20 \nyears, different encryption schemes have gained and lost \nfavor. Some are too easy to break with the advanced speed \nof current computers; others require too much processing \npower at the router level, thus making their implementa-\ntion expensive. This is one of those areas where an early \ntechnology seemed too expensive, but through time and \ntechnological advancements, the hardware processing \npower has caught up with requirements of the software. \nEncryption that seems secure in our current environments \nis often insecure as time passes. With supercomputers \ndoing trillions of computations a second, we are required \nto make sure that the technology employed in our net-\nworks is up to the task. There are many different types of \nencryption, as we discuss later in the chapter. \n Early in the VPN life-cycle, the goal for organizations \nwas to connect different places or offices to remote com-\nputer systems. This was usually done with a dedicated \npiece of hardware at each site. This “ point-to-point ” \nsetup allowed for a secure transmission between two \nsites, allowing users access to computer resources, data, \nand communications systems. Many of these sites were \ntoo expensive to access, so the advent of the point-to-\npoint systems allowed access where none existed. 
Now \nmultinational companies set up VPNs to access their \nmanufacturing plants all over the world. \n Accounting, order entry, and personnel databases were \nthe big driving forces for disparate locations to be con-\nnected. Our desire to have more and more information \nand faster and faster access to information has driven this \ntrend. Now individuals at home are connecting their com-\nputers into the corporate network either through VPN con-\nnections or SSL-VPN Web connections. This proliferation \nis pushing vendors to make better security, especially \nin unsecure or minimally secure environments. Giving \nremote access to some, unfortunately, makes for a target \nto hackers and crackers. \n 1. HISTORY \n Like many innovations in the network arena, the tel-\nephone companies first created VPNs. ATT, with it’s \nfamiliar “ Bell logo ” (see Figure 30.2 ), was one of the \nleading providers of Centrex systems. The goal was to \ntake advantage of different telephone enhancements for \nconferencing and dialing extensions within a company to \nconnect to employees. Many people are familiar with the \nCentrex systems that the phone companies offered for \nmany years. \n FIGURE 30.2 ATT logo; the company was often referred to as Ma Bell. \nPublic \nInternet\nRouter/Modem\nRemote\nComputer \nVPN Client\nVPN Router\nFirewall\nServer\nWorkstations\nSwitch\n FIGURE 30.1 A high-level view of a VPN. \n" }, { "page_number": 542, "text": "Chapter | 30 Virtual Private Networks\n509\n With Centrex the phone company did not require you \nto have a costly private branch exchange (PBX) switching \nsystem onsite. These PBXs were big, needed power, and \ncost a bundle of money. By eliminating the PBX and using \nthe Centrex system, an organization could keep costs down \nyet have enhanced service and flexibility of the advanced \nphone services through the telephone company PBX. \n The primary business of the phone companies was \nto provide voice service, but they also wanted to provide \ndata services. Lines from the phone company from one \ncompany location to another (called leased lines ) offered \nremote data access from one part of a company to another. \n Many companies started utilizing different types of \nsoftware to better utilize their leased lines. In the early \ndays, the main equipment was located centrally, and all \nthe offices connected to the “ hub ” (see Figure 30.3 ). This \nwas a good system and many companies still prefer this \nnetwork topography, but times are changing. Instead of \nhaving a hub-and-spoke design, some companies opted \nto daisy-chain their organization together, thus trying to \nlimit the distance they would have to pay for their leased \nlines. So a company would have a leased line from New \nYork to Washington, D.C., another from D.C. to Atlanta, \nand a third from Atlanta to Miami. This would cut costs \nover the typical hub-and-spoke system of having all the \nlines go through one central location (see Figure 30.4 ). \n With the proliferation of the Internet and additional \ncosts for Internet connections and leased-line connections, \nthe companies pushed the software vendors to make cheap \nconnections via the Internet. VPNs solved their problems \nand had a great return on investment (ROI). Within a year, \nthe cost of the VPN equipment paid for itself through \neliminating the leased lines. Though this technology has \nbeen around for years, some organi zations still rely on \nleased lines due to a lack of high-speed Internet in remote \nareas. 
\n In 1995, IPsec was one of the first attempts at bringing \nencryption to the VPN arena. One of the early downfalls \nof this technology was its complexity and requirement \nfor fast processors on the routers to keep up with the high \nbandwidths. In 1995, according to ietf.org, “ at least one \nhardware implementation can encrypt or decrypt at about \n1 Gbps. ” 2 Yes, one installation that cost thousands of \ndollars. \n Fortunately, Moore’s Law has been at work, and \ntoday our processing speeds are high enough to get IPsec \nworking on even small routers. Moore’s Law is named \nafter Intel cofounder Gordon Moore, who wrote in 1965: \n “ the number of transistors on a chip will double about \n 2 http://tools.ietf.org/html/draft-ietf-ipsec-esp-des-cbc-03 “ The ESP \nDES-CBC Transform ” \nVPN Client\nVPN Client\nVPN Client\nVPN Client\nVPN Server\nHUB\nHub and Spoke\nVirtual Private\nNetwork\n FIGURE 30.3 The hub in the early days. \n" }, { "page_number": 543, "text": "PART | IV Privacy and Access Management\n510\nevery two years. ” 3 This doubling of chip capacity allows \nfor more and more computing to be done. \n Another issue with early IPsec is that it is fairly \ninflexible, with differing IP addresses. Many home and \nhome office computers utilize dynamic IP addresses. \nYou may get a different IP address each time you turn \non your computer and connect to the Internet. The IPsec \nconnection will have to be reestablished and may cause \na hiccup in your transmissions or the requirement that a \npassword be reentered. This seems unreasonable to most \nusers. \n Another difficulty is the use of Network Address \nTranslation (NAT) for some networks. Each computer on \nthe network has the same IP address as far as the greater \nInternet is concerned. This is in part because of the short-\nage of legal IP addresses available in the IPv4 address \nspace. As we move closer and closer to the IPv6 address \nspace model, some of these issues will be moot, for a \nwhile. Soon, every device we own, including our refrig-\nerators, radios, and heating systems, will have a static IP \naddress. Maybe even our kitchen sinks will. Big Brother \nis coming, but won’t it be cool to see what your refrig-\nerator is up to? \n In the late 1990s Linux began to take shape as a great \ntest environment for networking. A technology called \n tun , short for tunnel , allows data to be siphoned through \nthe data stream to create virtual hardware. From the \noperating system perspective it looks like point-to-point \nnetwork hardware, even though it is virtual hardware. \nAnother technology, called tap , looks like Ethernet traf-\nfic but also uses virtual hardware to fool the operating \nsystem into thinking it is real hardware. \n These technologies utilize a program running in \nthe user area of the operating system software in order \nto look like a file. They can read and write IP packets \ndirectly to and from this virtual hardware, even though \nthe systems are connected via the public Internet and \ncould be on the other side of the world. Security is an \nissue with the tun/tap method. One way to build in secu-\nrity is to utilize the Secure Shell protocol (SSH) and \ntransport the data via a User Datagram Protocol (UDP) \nor Transmission Control Protocol (TCP) packet sent over \nthe network. \n It is important to remember that IP is an unreliable \nprotocol. 
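As an aside, the tun idea just described can be made concrete in a few lines of user-space code. The sketch below assumes a Linux system with tun/tap support and root privileges; the interface name tun0 is only an example. A VPN built this way would go on to encrypt each packet and forward it to the peer, and it must also cope with the unreliability of IP, which is discussed next.

```python
# Sketch: opening a Linux tun device from user space and reading raw IP packets.
# Requires root and a kernel with tun/tap support; "tun0" is an arbitrary name.
import fcntl
import os
import struct

TUNSETIFF = 0x400454ca   # ioctl request to configure the device
IFF_TUN = 0x0001         # layer-3 (IP) mode; IFF_TAP = 0x0002 would give Ethernet frames
IFF_NO_PI = 0x1000       # do not prepend packet-information bytes

tun = os.open("/dev/net/tun", os.O_RDWR)
ifr = struct.pack("16sH", b"tun0", IFF_TUN | IFF_NO_PI)
fcntl.ioctl(tun, TUNSETIFF, ifr)

# Each read returns one IP packet that the kernel routed to tun0; a VPN
# program would encrypt it and forward it (for example inside a UDP datagram)
# to the peer, which writes the decrypted packet into its own tun device.
packet = os.read(tun, 2048)
print(len(packet), "byte packet read from tun0")
```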
There are collisions on all IP networks; high \ntraffic times give high collisions and lost packets, but the \nprotocol is good at resending the packets so that eventu-\nally all the data will get to its destination. On the other \nhand, TCP is a reliable protocol. So, like military and \nintelligence, we have the added problem of a reliable \ntransportation protocol (TCP) using an unreliable trans-\nportation (IP) method. \nVPN Client\nVPN Client\nVPN Client\nVPN Server\nHUB\nDaisy Chain\nVirtual Private\nNetwork\nVPN Client\n FIGURE 30.4 One central location for the hub-and-spoke system. \n 3 www.intel.com/technology/mooreslaw/ Gordon Moore 1965. \n" }, { "page_number": 544, "text": "Chapter | 30 Virtual Private Networks\n511\n So, how does it work if it is unreliable? Well, eventu-\nally all the packets get there; TCP makes sure of that, and \nthey are put in order and delivered to the other side. Some \nmay have to be retransmitted, but most won’t and the sys-\ntem should work relatively quickly. \n One way that we can gain some throughput and \nadded security is by utilizing encapsulation protocols. \nEncapsulation allows you to stuff one kind of protocol \ninside another type of protocol. The idea is to encapsu-\nlate a TCP packet inside a UDP packet. This forces the \napplication to worry about dropped packets and reliabil-\nity instead of the TCP network layer, since UDP packets \nare not a reliable packet protocol. This really increases \nspeed, especially during peak network usage times. \n So, follow this logic: The IP packets are encrypted, \nthen encapsulated and stored for transport via UDP \nover the Internet. On the receiving end the host system \nreceives, decrypts, and authenticates the packets and then \nsends them to the tap or tun virtual adapter at the other \nend, thus giving a secure connection between the two \nsides, with the operating system not really knowing or \ncaring about the encryption or transport methods. From \nthe OS point of view, it is like a data file being transmit-\nted; the OS doesn’t have to know that the hardware is vir-\ntual. It is just as happy thinking that the virtual data file is \nreal — and it processes it just like it processes a physical \nfile locally stored on a hard drive. \n OpenVPN is just one of many Open Source VPNs in \nuse today. Google it and you will see a great example of \na VPN system that employs IPsec. \n IPsec is another way to ensure security on your VPN \nconnection. IPsec took the approach that it needed to \nreplace the IP stack and do it securely. IPsec looks to \ndo its work in the background, without utilizing oper-\nating system CPU cycles. This is wonderful for its \nnon-impact on servers, but it then relies heavily on the \nhardware. \n A faster-growing encryption scheme involves Secure \nSocket Layer (SSL) VPN, as we talk about later in this \nchapter. This scheme gives the user access to resources \nlike a VPN but through a Web browser. The end user only \nneeds to install the browser plug-ins to get this VPN up \nand working, for remote access on the fly. One exam-\nple of this SSL type of VPN is LogMeIn Rescue. 4 It sets \nup a remote control session within the SSL layer of the \nbrowser. It can also extend resources out to the remote \nuser without initiating a remote-control session. \n With all these schemes and more, we should take \na look at who is in charge of helping to standardize \nthe hardware and software requirements of the VPN \nworld. \n 2. WHO IS IN CHARGE? 
\n For all this interconnectivity to actually work, there are \nseveral organizations that publish standards and work for \ncooperation among vendors in the sea of computer net-\nworking change. In addition to these public groups, there \nare also private companies that are working toward new \nprotocols to improve speed and efficiency in the VPN \narena. \n The two biggest public groups are the Internet \nEngineering Task Force ( www.ietf.org ; see Figure 30.5 ) \nand the Institute of Electrical and Electronic Engineers \n( www.IEEE.org ; see Figure 30.6 ). Each group has its \nown way of doing business and publishes its recommen-\ndations and standards. \n As the IEEE Web site proclaims, the group’s “ core \npurpose is to foster technological innovation and excel-\nlence for the benefit of humanity. ” This is a wonderful \nand noble purpose. Sometimes they get it right and some-\ntimes input and interference from vendors get in the way \nof moving technology forward — or worse yet, vendors \ngo out and put up systems that come out before the spec-\nifications get published, leaving humanity with different \nstandards. This has happened several times on the wire-\nless networking standards group. Companies release their \nimplementation of a standard prior to final agreement \nby the standards boards. \n FIGURE 30.5 Logo for IETF \n FIGURE 30.6 Logo for IEEE. \n 4 www.logmein.com \n" }, { "page_number": 545, "text": "PART | IV Privacy and Access Management\n512\n The group’s vision is stated thus: “ IEEE will be \nessential to the global technical community and to \ntechnical professionals everywhere, and be universally \nrecognized for the contributions of technology and of \ntechnical professionals in improving global conditions. ” 5 \n The Internet Engineering Task Force (IETF) is a \nlarge, open international community of network design-\ners, operators, vendors, and researchers concerned with \nthe evolution of the Internet architecture and the smooth \noperation of the Internet. It is open to any interested \nindividual. The IETF Mission Statement is documented \nin RFC 3935. According to the group’s mission state-\nment, “ The goal of the IETF is to make the Internet \nwork better. ” 6 \n Finally, we can’t get away from acronyms unless we \ninclude the United States Government. An organization \ncalled the American National Standards Institute (ANSI; \n www.ansi.org ) is an 85-year-old organization with respon-\nsibilities that include writing voluntary standards for the \nmarketplace to have somewhere to turn for standardizing \nefforts to improve efficiencies and interoperability (see \n Figure 30.7 ). \n There are many standards for many physical things, \nsuch as the size of a light bulb socket or the size of an \noutlet on your wall. These groups help set standards for \nnetworking. Two international groups that are repre-\nsented by ANSI are the International Organization for \nStandardization (ISO) and International Electrotechnical \nCommission (IEC). \n These organizations have ongoing workgroups and \nprojects that are tackling the various standards that will \nbe in use in the future releases of the VPN standard. They \nalso have the standards written for current interoperabil-\nity. However, this does not require a vendor to follow the \nstandards. Each vendor can and will implement parts and \npieces of standards, but unless they meet all the require-\nments of a specification, they will not get to call their \nsystem compatible. 
There are several IEEE projects relating to networking and the advancement of interconnectivity. If you are interested in networking, the 802 family of workgroups is your best bet. Check out www.ieee802.org.

3. VPN TYPES

As we talked about earlier, IPsec, the encryption standard most widely used for VPN access, is heavy on the hardware, which does the work of encrypting and decrypting the packets. The protocol operates at Layer 3 of the Open Systems Interconnection (OSI) model, which was published by the International Organization for Standardization and dates to 1982. 7 IPsec is still used by many vendors for their VPN hardware.

IPsec

One of the weaknesses of VPNs we mentioned earlier is also a strength: because the majority of the processing work is done by the interconnecting hardware, the application doesn't have to know anything about IPsec at all.

There are two modes for IPsec. The first, transport mode, secures the information all the way to each device trying to make the connection. The second, tunnel mode, is used for network-to-network communications. The latest standard for IPsec came out in 2005.

L2TP

Layer 2 Tunneling Protocol (L2TP) was released in 1999; it was created to improve on the reliability and security of the Point-to-Point Tunneling Protocol (PPTP). It is really a layer 5 protocol because it uses the session layer of the OSI model.

L2TP is more cumbersome than PPTP and forces the endpoints to authenticate with one another. Its own security is weak, so most implementations of L2TP rely on IPsec to enhance it.

Each packet carries a header that includes flags, a version field, a Tunnel ID, and a Session ID, along with room for the packet length (a simplified parsing sketch appears after the L2TPv3 discussion below).

Because L2TP on its own is a very weak protocol, some vendors combined it with IPsec to form L2TP/IPsec. In this implementation the strong, secure nature of IPsec provides the secure channel, and L2TP acts as the tunnel.

This protocol follows a server/client setup: one part of the software acts as the server and waits for the client side of the software to make contact. Because the protocol can handle many users or clients at a time, some Asymmetric Digital Subscriber Line (ADSL) providers use L2TP to share resources at the telephone central office. Their modem/routers use L2TP to phone home to the central office and share a higher-capacity line out to the Internet.

For more information, see the IETF.org publication RFC 2661. There you can delve deeper into the failover mode of L2TP or get far more detail on the standard.

FIGURE 30.7 The ANSI logo.

5 www.ieee.org/web/aboutus/visionmission.html Copyright 2008 IEEE
6 www.ietf.org/rfc/rfc3935.txt Copyright The Internet Society 2004
7 http://en.wikipedia.org/wiki/Open_Systems_Interconnection

L2TPv3

L2TPv3 is the draft advancement of L2TP for large, carrier-level information transmissions. The draft protocol was released in 2005; it provides additional security features, improved encapsulation, and the ability to carry data links other than simply PPP over an IP network (e.g., Frame Relay, Ethernet, ATM). Figure 30.8 shows the operation for tunneling Ethernet using the L2TPv3 protocol and was taken from Pseudo-wire Services and L2TPv3 from KPN/Quest.
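As noted in the L2TP discussion above, here is a simplified look at the header fields. The fixed 8-byte layout is an assumption made for readability; RFC 2661 makes several of these fields optional and controls them with the flag bits.

    import struct

    def parse_l2tp_header(data: bytes) -> dict:
        # Simplified view: flags/version, length, Tunnel ID, Session ID.
        flags_ver, length, tunnel_id, session_id = struct.unpack("!HHHH", data[:8])
        return {
            "is_control": bool(flags_ver & 0x8000),    # T bit: control vs. data message
            "has_length": bool(flags_ver & 0x4000),    # L bit: length field is present
            "version":    flags_ver & 0x000F,          # 2 for L2TP, 3 for L2TPv3
            "length":     length,
            "tunnel_id":  tunnel_id,
            "session_id": session_id,
        }

    # Example: a control message, 64 bytes long, for tunnel 5, session 9.
    sample = struct.pack("!HHHH", 0xC002, 64, 5, 9)
    print(parse_l2tp_header(sample))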
8 \n L2F \n Cisco’s Layer 2 Forwarding protocol is used for tun-\nneling the link layer (layer 2 in the OSI model). This \nprotocol allows for virtual dial-up that allows for the \nsharing of modems, ISDN routers, servers, and other \nhardware. \n This protocol was popular in the mid- to late-1990s \nand was utilized by Shiva’s products to share a bank \nof modems to a network of personal computers. This \nwas a fantastic cost savings for network administra-\ntors wanting to share a small number of modems and \nmodem lines to a large user group. Instead of having 50 \nmodems hooked up to individual PCs, you could have a \nbank of eight modems that could be used during the day \nto dial out and connect to external resources, becom-\ning available at night for workers to dial back into the \ncomputer system for remote access to corporate data \nresources. \n For those long-distance calls to remote computer \nsystems, an employee could dial into the office network. \nFor security and billing reasons, the office computer sys-\ntem would dial back to the home user. The home user \nwould access a second modem line to dial out to a long \ndistance computer system. This would eliminate all \ncharges for the home user except for the initial call to get \nconnected. RFC 2341 on IETF.org gives you the detailed \nstandard. 9 \n PPTP VPN \n Point-to-Point Tunneling Protocol was created in the \n1990s by Microsoft, Ascend, 3COM and a few other \nvendors to try and serve the user community. This VPN \nprotocol allowed for easy implementation with Windows \nmachines because it was included in Windows. It made \nfor fairly secure transmissions, though not as secure as \nIPsec. Although Microsoft has a great deal of influence \nRB removes the\nIP/L2TPv3 header and\nforwards it to B\nRB\nRA\nETHERNET\nETHERNET\nIP BACKBONE\nOperation for Tunneling Ethernet\nLAN 1\nLAN 2\nIGP routes the \nL2TPv3 \npacket to \ndestination\nB receives the packet\nA sends a packet for B\nTU2\nA\nB\nStep 4\nStep 2\nStep 1\nStep 5\nStep 3\nRA encapsulates the Ethernet\nframe with a L2TPv3 tunnel\nheader and an IPv4 delivery\nheader\n FIGURE 30.8 The operation for tunneling Ethernet using the L2TPv3 protocol. \n 8 www.ripe.net/ripe/meetings/ripe-42/presentations/ripe42-eof-\npseudowires2/index.html KPN/Quest Pseudo-wire Services and L2TPv3 \npresentation 5/14/2002 \n 9 www.ietf.org/rfc/rfc2341.txt \n" }, { "page_number": 547, "text": "PART | IV Privacy and Access Management\n514\nin the computing arena, the IPsec and L2TP protocols \nare the standards-based protocols that most vendors use \nfor VPNs. \n Under PPTP, Microsoft has implemented MPPE —\n Microsoft Point-to-Point Encryption Protocol, which \nallows encryption keys of 40 to 128 bits. The latest \nupdates were done in 2003 to strengthen the security of \nthis protocol. A great excerpt from Microsoft Technet for \nWindows NT 4.0 Server explains the process of PPTP \nextremely well; check out http://technet.microsoft.com/\nen-us/library/cc768084.aspx for more information. \n MPLS \n MPLS is another system for large telephone compa-\nnies or huge enterprises to get great response times for \nVPN with huge amounts of data. MPLS stands for \nMultiProtocol Label Switching. This protocol operates \nbetween layer 2 and layer 3 of the OSI model we have \nmentioned before. The Internet Engineering Task Force \nWeb page with the specifications for the label switching \narchitecture can be found at www.ietf.org/rfc/rfc3031.txt . 
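As a rough sketch of the label switching architecture that the specification above describes, the 32-bit label entry a switch acts on can be unpacked in a few lines. The field layout follows RFC 3032 (the label stack encoding), and the sample value is invented for illustration.

    import struct

    def parse_label_entry(entry: bytes) -> dict:
        (word,) = struct.unpack("!I", entry)
        return {
            "label":           (word >> 12) & 0xFFFFF,   # 20-bit label the switch looks up
            "traffic_class":   (word >> 9) & 0x7,        # 3 bits, formerly the EXP field
            "bottom_of_stack": bool((word >> 8) & 0x1),  # S bit: last entry in the label stack
            "ttl":             word & 0xFF,              # 8-bit time to live
        }

    sample = struct.pack("!I", (18000 << 12) | (0 << 9) | (1 << 8) | 64)
    print(parse_label_entry(sample))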
\n This protocol has big advantages over Asynchronous \nTransfer Mode (ATM) and Frame Relay. The overhead \nis lower and with the ability to have variable length \ndata packets, audio and video will be transmitted much \nmore efficiently. Another big advantage over ATM is the \nATM requirement that the two endpoints have to hand-\nshake and make a connection before any data is ever \ntransmitted. \n The name MultiProtocol Label Switching came \nfrom the underlying way in which the endpoints find \neach other. The switches using this technology find \ntheir destination through the lookup of a label instead \nof the lookup of an IP address. Label Edge Routers are \nthe beginning and ending points of an MPLS network. \nThe big competitor for future network expansion is \nL2TPv3. \n MPVPN ™ \n Ragula Systems Development Company created Multi \nPath Virtual Private Network (MPVPN) to enhance the \nquality of service of VPNs. The basic concept is to allow \nmultiple connections to the Internet at both endpoints \nand use the combination of connections to create a faster \nconnection. So if you have a T1 line and a DS3 at your \noffice, you can aggregate both lines through the MPVPN \ndevice to increase your response times. The data traffic \nwill be load balanced and will increase your throughput. \n SSH \n This protocol lets network traffic run over a secured \nchannel between devices. SSH uses public-key cryptog-\nraphy. Tatu Yl ö nen from Finland created the first version \nof SSH in 1995 to thwart password thieves at his uni-\nversity network. The company he created is called SSH \nCommunications Security and can be reached at www.\nssh.com (see Figure 30.9 ). \n Utilizing public-key cryptography is a double-edged \nsword. If an inside user authenticates an attacker’s public \nkey, you have just let them into the system, where they \ncan deploy man-in-the-middle hacks. Also, the intent \nfor this security system was to keep out the bad guys at \nthe gate. Once a person is authenticated, she is in and a \nregular user and can deploy software that would allow \na remote VPN to be set up through the SSH protocol. \nFuture versions of SSH may prevent these abuses. \n SSL-VPN \n Secure Socket Layer (SSL) VPN isn’t really VPN at all. \nIt’s more of an interface that gives users the services that \nlook like VPN through their Web browsers. There are \nmany remote-control applications that take advantage \nof this layer in the Web browser to gain access to users ’ \nresources. \n TLS \n Transport Layer Security, the successor to SSL, is used \nto prevent eavesdropping on information being sent \nbetween computers. When using strong encryption \nalgorithms, the security of your transmission is almost \nguaranteed. \n Both SSL and TLS work very much the same way. \nFirst, the sessions at each endpoint contact each other \nfor information about what encryption method is going \nto be employed. Second, the keys are exchanged. These \ncould be RSA, Diffie-Hellman, ECDH, SRP, or PSK. \n FIGURE 30.9 SSH Communications Security logo. \n" }, { "page_number": 548, "text": "Chapter | 30 Virtual Private Networks\n515\n Finally, the messages are encrypted and authenticated, \nsometimes using Certificate of Authorities Public Key \nlist. When you utilize SSL and TLS you may run into a \nsituation where the server certificate does not match the \ninformation held in the Certificate of Authorities Public \nKey list. 
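The negotiation, key exchange, and certificate check just described can be observed with Python's standard ssl module. This is a minimal sketch; the host name is a placeholder and not a site discussed in this chapter.

    import socket
    import ssl

    context = ssl.create_default_context()   # loads the platform's trusted certificate authority list

    def probe_tls(host: str, port: int = 443):
        with socket.create_connection((host, port)) as raw:
            # wrap_socket performs the handshake: cipher negotiation, key exchange,
            # and verification of the server certificate against the trusted list.
            with context.wrap_socket(raw, server_hostname=host) as tls:
                return tls.version(), tls.cipher(), tls.getpeercert()["subject"]

    print(probe_tls("example.org"))

When the presented certificate cannot be validated against the trusted list, the handshake raises ssl.SSLCertVerificationError rather than completing silently, which is the programmatic counterpart of the browser warning described here.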
If this is the case, the user may override the \nerror message or may choose not to trust the site and end \nthe connection. \n The whole public key/private key encryption is \nable to take place behind the scenes for a few reasons. \nDuring the beginning phase of the connection, the server \nand requesting computer generate a random number. \nRandom numbers are combined and encrypted using \nthe private keys. Only the owner of the public key can \nunencrypt the random number that is sent using their pri-\nvate key. \n 4. AUTHENTICATION METHODS \n Currently, usernames and passwords are the most com-\nmon authentication method employed in the VPN arena. \nWe may transport the data through an SSL channel or \nvia a secured and encrypted transport model, but when \nit comes to gaining access to the system resources, most \noften you will have to log into the VPN with a username \nand password. \n As we talked about earlier, there are some edge-based \nsystems that require a dongle, and a random number is \ngenerated on gaining access to the login screen. These \ntiered layers of security can be a great wall that will \nthwart a hacker’s attempt to gain access to your network \nsystem in favor of going after easier networks. \n Not all authentication methods are the same. We will \ntalk about a few different types of protection schemes \nand point out weaknesses and strengths. \n With each type of encryption we are concerned with \nits voracity along with concerns over the verification and \nauthentication of the data being sent. Transmission speeds \nand overhead in encrypting and decrypting data are \nanother consideration, but as mentioned earlier, Moore’s \nLaw has helped a great deal. \n Hashing \n Using a computer algorithm to mix up the characters in \nyour encryption is fairly common. If you have a secret \nand want another person to know the answer, but you \nare fearful that it will be discovered, you can mix up the \nletters. \n HMAC \n HMAC (keyed Hash Message Authentication Code) is a \ntype of encryption that uses an algorithm in conjunction \nwith a key. The algorithm is only as strong as the com-\nplexity of the key and the size of the output. For HMAC \neither 128 or 160 bits are used. \n This type of Message Authentication Code (MAC) can \nbe defeated. One way is by using the birthday attack. To \nensure that your data is not deciphered, choose a strong \nkey; use upper- and lowercase letters, numbers, and spe-\ncial characters. Also use 160 bits when possible. \n MD5 \n Message Digest 5 is one of the best file integrity checks \navailable today. It is also used in some encryption \nschemes, though the voracity of its encryption strength is \nbeing challenged. \n The method uses a 128-bit hash value. It is repre-\nsented as a 32-digit hexadecimal number. A file can be \n “ hashed ” down to a single 32-digit hex number. The like-\nlihood of two files with the same hash is 2 128 but with \nthe use of rainbow tables and collision theory, there have \nbeen a few successes in cracking this encryption. 10 \n SHA-1 \n Secure Hash Algorithm was designed by the U.S. \nNational Security Agency (NSA). There is also SHA-224, \nSHA-256, SHA-384, and SHA-512. The number of bits \nin SHA-1 is 160. The others have the number of bits \nfollowing the SHA. \n SHA-1 is purported to have been compromised, but \nthe voracity of the reports has been challenged. In any \ncase, the NSA has created the SHA-224 to SHA-512 spec-\nification to make it even more difficult to crack. 
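The digest and keyed-digest constructions discussed in this section are all available in Python's standard hashlib and hmac modules. The key and message below are made-up values used only to show the calls.

    import hashlib
    import hmac

    message = b"VPN control message"
    key = b"use-a-long-random-secret-here"

    print(hashlib.md5(message).hexdigest())      # 128-bit digest, printed as 32 hex digits
    print(hashlib.sha1(message).hexdigest())     # 160-bit digest
    print(hashlib.sha256(message).hexdigest())   # a member of the SHA-2 family

    # Keyed HMAC: the tag depends on both the key and the message, so a modified
    # message (or the wrong key) fails verification on the receiving side.
    tag = hmac.new(key, message, hashlib.sha256).digest()
    check = hmac.new(key, message, hashlib.sha256).digest()
    print(hmac.compare_digest(tag, check))       # True only when key and message both match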
At the \nRump Session of CRYPTO 2006, Christian Rechberger \nand Christophe De Canni è re claimed to have discovered a \ncollision attack on SHA-1 that would allow an attacker to \nselect at least parts of the message. 11 \n The basic premise is the same as the MD5 hash: The \ndata is encrypted utilizing a message digest. This method \nis the basis for several common applications including \nSSL, PGP, SSH, S/MIME, and IPsec. \n NSA is working on the next-generation SHA-3 and is \nseeking vendors to compete in creating the next-generation \n 10 http://en.wikipedia.org/wiki/MD5#Vulnerability \n 11 http://en.wikipedia.org/wiki/SHA-1#Cryptanalysis_and_validation \n" }, { "page_number": 549, "text": "PART | IV Privacy and Access Management\n516\nhashing algorithms. Submissions were due October 31, \n2008. \n 5. SYMMETRIC ENCRYPTION \n Symmetric encryption requires that both the sender and \nreceiver have the same key and each computes a com-\nmon key that is subsequently used. Two of the most \ncommon symmetric encryption standards are known as \nDES (Data Encryption Standard) and AES (Advanced \nEncryption Standard). Once AES was released, DES was \nwithdrawn as a standard and replaced with 3-DES, often \nreferred to as Triple DES and TDES. \n 3-DES takes DES and repeats it two more times. So \nit is hashed with the 56-bit algorithm and password, and \nthen done twice more. This prevents more brute-force \nattacks, assuming a strong key is used. Some VPN soft-\nware is based on these symmetric keys, as we have dis-\ncussed before. \n Finally, a system of shared secrets allows encryption \nand decryption of data. This can either be done as a pre-\nshared password, which is known by both ends prior to \ncommunication, or some kind of key agreement protocol \nwhere the key is calculated from each end using a com-\nmon identifier or public key. \n 6. ASYMMETRIC CRYPTOGRAPHY \n The biggest example of asymmetric cryptography for \nVPNs is in the RSA protocol. Three professors at MIT, \nRon Rivest, Adi Shamir, and Leonard Adelman (thus \nRSA), came up with the RSA encryption algorithm, which \nis an implementation of public/private key cryptography. \n This is one of the coolest and most secure means of \ntransmitting data. Not only is it used for transmission \nof data, but a person can also digitally sign a document \nwith the use of RSA secure systems. Some states are cre-\nating systems for giving people their own digital signa-\ntures and holding the public keys in a server that can be \naccessed by all. \n Although these systems have been around for a while, \nthey are becoming more and more prevalent. For exam-\nple, some states will allow accountants who sign up with \nthem to transmit income tax forms electronically as long \nas they digitally sign the returns. \n This algorithm uses two large random prime numbers. \nPrime number searching has been a pastime for many \nmathematical scientists. As the prime number gets larger \nand larger, its use for privacy and security systems is \nincreased. Thus, many search for larger prime numbers. \nThrough the use of these numbers and a key, the data is \nsecured from prying eyes. \n When you are in a public system and don’t have the \nluxury of knowing the keys in advance, there are ways to \ncreate a key that will work. This system is very interest-\ning and is known as the exponential key exchange because \nit uses exponential numbers in the initial key exchange to \ncome to an agreed-on cipher. \n 7. 
EDGE DEVICES \n As with any system, having two locked doors is better \nthan one. With the advent of many remote computing \nsystems, a new type of external security has come into \nfavor. For instance, the setting up an edge device, allows \nfor a unique username and password, or better yet, a \nunique username and a random password that only the \nuser and the computer system knows . \n These edge systems often employ authentication \nschemes in conjunction with a key fob that displays a dif-\nferent random number every 30 to 60 seconds. The server \nknows what the random number should be based on the \ntime and only authenticates the person into the edge of \nthe network if the username and password match. \n Once into the edge network, the user is prompted for \na more traditional username and password to gain access \nto data, email, or applications under his username. \n 8. PASSWORDS \n Your system and data are often only as good as the \nstrength of your password. The weakest of passwords \nentails a single word or name. An attacker using common \ndictionary attacks will often break a weak password. For \nexample, using the word password is usually broken very \nquickly. \n Using multiple words or mixing spelling and upper- \nand lowercase will make your weak password a bit \nstronger. PasswOrd would be better. Adding numbers \nincreases your passwords voracity. P2ssw9rd decreases \nyour chance of getting hacked. Add in a few special char-\nacters and your password gets even more secure, as with \nP2#$w9rd. \n But to get even stronger you need to use a password \nover 12 characters made up of upper- and lowercase let-\nters, numbers, and special characters: P2#$w9rd.34HHlz. \nStay away from acronyms. \n Another way to keep your VPNs secure is to only \nallow access from fixed IP addresses. If the IP address \nisn’t on the allowable list, you don’t allow the computer \n" }, { "page_number": 550, "text": "Chapter | 30 Virtual Private Networks\n517\nin, no matter what. The unique address for each network \ncard that is in the hardware is called the Media Access \nControl (MAC) address. This is another fixed ID that can \nbe used to allow or disallow computers onto your VPN. \nThe problem with this method is that both IP and MACs \ncan be spoofed. So, if a person gets his hands on a valid \nMAC ID, he can get around this bit of security. \n Some VPN systems will allow you to log in with a \nusername and password, and then it will connect to a pre-\ndefined IP address. So even if your passwords are stolen, \nunless the person knows the predefined IP address of the \ncallback, they can’t get into your system. This idea is a \nthrowback to the dial-in networks that would allow a per-\nson to dial in, connect with their username and password, \nand then promptly disconnect and call the person’s com-\nputer system back. It was an extra two minutes on the \nfront end, but a great added level of security. \n Finally, biometrics are playing a role in authentica-\ntion systems. For example, instead of a password, your \nfingerprint is used. Some systems use voiceprint, hand \ngeometry, retinal eye scan, or facial geometry. We can \nforesee the day when a DNA reader uses your DNA as \nyour password. \n 9. HACKERS AND CRACKERS \n Some good ways to prevent hackers and crackers from \ngetting into your system is to enable the best security \nlevels that your hardware has to offer. If you can utilize \n256-bit encryption methods, then use them. 
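As one example of the 256-bit encryption mentioned above, AES-256 in GCM mode can be used through the pyca/cryptography package. The package and the sample payload are assumptions of this sketch, not a recommendation tied to any particular VPN product; GCM both encrypts and authenticates, so tampering is detected when decryption is attempted.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)    # 256-bit symmetric key
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)                       # must be unique per message; never reuse with the same key
    ciphertext = aesgcm.encrypt(nonce, b"tunnel payload", b"header-as-associated-data")
    plaintext = aesgcm.decrypt(nonce, ciphertext, b"header-as-associated-data")
    assert plaintext == b"tunnel payload"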
If you can \nafford a random-number – generated edge security sys-\ntem, then use it. \n Have your users change their VPN passwords fre-\nquently, especially if they are utilizing public Internet por-\ntals. Don’t expect your local library to have the security \nthat your own internal company has. If you access your \nVPN from an insecure public Internet hotspot, then make \nsure you change your VPN password. \n Don’t give out your VPN password for other people \nto use. This can cause you great difficulties if something \nsinister happens to the network and the connection is \ntraced back to your username and password. \n Another way to secure your network is to deactivate \naccounts that have not been used for 30 days. Yes, this \ncan be a pain, but if a person is not regularly accessing \nthe system, then maybe they don’t need access in the first \nplace. \n Finally, remove stale VPN accounts from the system. \nOne of the biggest problems with unauthorized VPN \naccess is the employee who has retired but her account \nnever got disabled. Maybe her logon and email were \nremoved, but IT didn’t remove her VPN account. Put in \nchecks and balances on accounts. \n" }, { "page_number": 551, "text": "This page intentionally left blank\n" }, { "page_number": 552, "text": "519\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Identity Theft \n Markus Jacobsson * \n Palo Alto Research Center \n Alex Tsow * \n The MITRE Corporation \n Chapter 31 \n Identity theft is commonly defined as unwanted appropri-\nation of access credentials that allows creation and access \nof accounts and that allows the aggressor to pose as the \nvictim. Phishing is a type of identity theft that is perpe-\ntrated on the Internet and that typically relies on social \nengineering to obtain the access credentials of the vic-\ntim. Similar deceit techniques are becoming increasingly \ncommon in the context of crimeware. Crimeware, in turn, \nis often defined as economically motivated malware. \nWhereas computer science has a long-term tradition of \nstudying and understanding security threats, the human \ncomponent of the problem is traditionally ignored. In this \nchapter, we describe the importance of understanding the \nhuman factor of security and detail the findings from a \nstudy on deceit. \n Social engineering can be thought of as an establish-\nment of trust between an attacker and a victim, where the \nattacker’s goal is to make the victim perform some action \nhe would not have wanted to perform had he understood \nthe consequences. Attackers leverage preexisting trust \nbetween victims and the chosen false identities to spur \ndubious actions (illegally transferring money, remailing \nstolen goods, installing malware on computers, and rec-\nommending fraudulent services to friends). \n To understand deceit in this context, it is worth \nrecalling that people are more likely to install software \non their computer if they believe it is manufacturer-dis-\ntributed patch rather than a third-party enhancement. \nSimilarly, Internet users are more likely to visit a web-\nsite when recommended by friends 1, 2 and may agree to \nsigning up to services that appear to be recommended \nby their friends. Moreover, when site-content hinges on \naccepting third-party browser extensions, friend rec-\nommendations prove highly effective in inducing the \nrequired installation. 
3 When identity is used convinc-\ningly, these behaviors become social vectors for spread-\ning crimeware and for causing users to opt in where they \nwould not otherwise have. \n Institutions and individuals project Internet identity \nthrough their Web sites and through email communica-\ntion. How do attackers engineer contact with false iden-\ntities? Clearly, email can be sent to anyone. Filters limit \nthe quantity of unwanted messages, but spammers have \nsuccessfully responded with increased volume and vari-\nation. Superficially, arranging contact with bogus Web \nsites appears to be a more difficult problem since legiti-\nmate content providers uncommonly link to spoofed \nWeb hosts. \n 1 T. Jagatic, N. Johnson, M. Jakobsson, and F. Menczer, “ Social \nphishing, ” Communications of the ACM , 2007, available at http://doi.\nacm.org/10.1145/1290958.1290968 . \n 2 A. Genkina and L. J. Camp, “ Phishing and countermeasures: under-\nstanding the increasing problem of electronic identity theft, ” chapter \ncase study: Net Trust , John Wiley & Sons, 2007. \n 3 M. Gandhi, S. Stamm, M. Jakobsson, “ verybigad.com: A study in \nsocially transmitted malware, ” www.indiana.edu/~phishing/verybigad/ . \n * The authors ’ affi liation with The MITRE Corporation is provided \nfor identifi cation purposes only, and is not intended to convey or imply \nMITRE ’ s concurrence with, or support for, the positions, opinions or \nviewpoints expressed by the author. \n" }, { "page_number": 553, "text": "PART | IV Privacy and Access Management\n520\n Roughly 50% of Web requests (by volume) are not \nthe result of site-to-site linking, based on results from \nover 100,000 Internet clients hosted by the Indiana \nUniversity campuses, according to the Indiana University \nAdvanced Network Management Lab. \n The other half of Web visits follow from bookmarks, \ndirect address bar manipulation, or linking from external \nsources (email, word-processor documents, and instant-\nmessaging sessions). Social engineers influence these \nvalues through bogus links in email and by using domain \nnames that are deceptive. \n There are may studies of ways in which humans \nrelate to deceit and treason, and there are many stud-\nies that focus on Internet security, but there is not an \nabundance of research on the combination of these two \nimportant fields. How do people relate to deceit on the \nInternet? This is an important question to ask — and to \nanswer — for anybody who wants to improve Internet \nsecurity. \n This chapter focuses on identity manipulation tactics \nin email and Web pages. It describes the effects of fea-\ntures ranging from URL plausibility to trust endorsement \ngraphics on a population of 398 subjects. The experiment \npresents these trust indicators in a variety of stimuli, since \nreactions vary according to context. In addition to testing \nspecific features, the test gauges the potential of a tac-\ntic that spoofs third-party contractors rather than a brand \nitself. The results show that indeed graphic design can \nchange authenticity evaluations and that its impact var-\nies with context. We expected that authenticity-inspiring \ndesign changes would have the opposite effect when paired \nwith an unreasonable request, but our data suggest that nar-\nrative strength, rather than underlying legitimacy, limits the \nimpact of graphic design on trust and that these authentic-\nity-inspiring design features improve trust in both legiti-\nmate and illegitimate media. 
Thus, it is not what is said \nthat matters but how it is said: An eloquently stated unrea-\nsonable request is more convincing than a poorly phrased \nbut quite reasonable request. \n 1. EXPERIMENTAL DESIGN \n This experiment tests the ability to identify phishing, an \nInternet scam that spoofs emails and Web pages to trick \nvictims into revealing sensitive information. Although \ncast in terms of phishing, the results generalize to iden-\ntity spoofing for purposes beyond information theft. This \nexperiment tests the effects of several media features —\n sometimes in multiple contexts — on an individual’s \ne valuation of its phishing likelihood. This experiment \nshows subjects six email screenshots followed by six Web \npage screenshots and asks them to rate their authenticity \non a five-point scale: Certainly phishing, Probably phish-\ning, No opinion, Probably not phishing, and Certainly not \nphishing (see Figure 31.1 for an example). The experi-\nment was administered through SurveyMonkey.com, 4 \nan online Web survey service. Subjects were required to \nrate each screenshot before advancing to the next stimu-\nlus. The survey provided the following instructions to \nsubjects: \n ● Phishing is a form of Internet fraud that spoofs \nemails and Web pages to trick people into disclos-\ning sensitive information. When an email or Web \npage fraudulently represents itself, we classify it as \nphishing. \n ● This survey displays a sequence of email and Web \nsite screenshots. Assume that your name is John \nDoe and that your email address is johndoe1972@\ngmail.com . Please rate each screenshot’s authentic-\nity using the five-point scale: Certainly phishing, \nProbably phishing, No opinion, Probably not phish-\ning, Certainly not phishing. \n This style of testing, termed security first , measures \nfraud-recognition skills rather than habits. Subjects are not \ntrying to accomplish other work but are merely instructed \nto rate a series of legitimate and illegitimate stimuli. For \nthis reason, security-first measurements place a plausible \nupper bound on fraud detection habits in normal computer \nusage. Even though security-first evaluations have shown \nhigh susceptibility to phishing, 5 , 6 role-playing experi-\nments designed to measure fraud detection habits 7 , 8 dem-\nonstrate even more serious vulnerability. \n Subjects were recruited from an undergraduate intro-\nductory noncomputer-science-major class on computer \nusage and literacy. Of a class size exceeding 600 students, \n 4 SurveyMonkey.com, “ Surveymonkey.com-powerful tool for creating \nweb surveys. online survey software made easy! ” www.surveymonkey.\ncom/ , retrieved December 2006. \n 5 A. Genkina and L. J. Camp, “ Phishing and countermeasures: under-\nstanding the increasing problem of electronic identity theft, ” chapter \ncase study: Net Trust , John Wiley & Sons, 2007. \n 6 M. Jakobsson, A. Tsow, A. Shah, E. Blevis, and Y.-K. Lim, \n “ What instills trust? A qualitative study of phishing, ” In submission, \n2006. \n 7 J. S. Downs, M. B. Holbrook, and L. F. Cranor, “ Decision strate-\ngies and susceptibility to phishing, ” In SOUPS ’ 06: Proceedings of the \nSecond Symposium on Usable Privacy and Security , pp. 79 – 90, New \nYork, 2006, ACM Press. \n 8 M. Wu, R. C. Miller, and S. L. Garfi nkel, “ Do security tool-\nbars actually prevent phishing attacks? ” In CHI ’ 06: Proceedings of \nthe SIGCHI Conference on Human Factors in Computing Systems , \npp 601 – 610, New York, 2006, ACM Press. 
\n" }, { "page_number": 554, "text": "Chapter | 31 Identity Theft\n521\n435 began this study. All but 12 subjects were between \n17 and 22 years old; the gender split was 40.0% male to \n57.9% female (2.0% did not respond to this question). \nAlthough the test population is not demographically rep-\nresentative of general computer users, their enrollment in \nthe introductory course suggests that their computer skill \nlevel is generally representative. The class’s only prereq-\nuisite is high-school algebra. Almost all students in this \nclass had used computers before but had no particular \nexpertise. \n The experiment divided the population into two sets \nthrough random selection. The two sets completed differ-\nent versions of the survey. For 10 of the stimuli, the two \nversions differ only by a target collection of test features. \nOur primary analysis compares the impact of the fea-\nture changes, by using the \r 9 to represent the difference \n 9 R. Dhamija, J. D. Tygar, and M. Hearst, “ Why phishing works, ” In \n CHI ’ 06: Proceedings of the SIGCHI conference on Human Factors in \ncomputing systems , pp. 581 – 590, New York, 2006, ACM Press. \n FIGURE \n31.1 Subjects \nevaluate \nauthenticity based on screenshots \nusing a five-point scale. The survey \nrequired a judgment before proceed-\ning to the next stimulus. \n" }, { "page_number": 555, "text": "PART | IV Privacy and Access Management\n522\nbetween response distributions. In two other question \nsets, subjects evaluate the authenticity of messages and \nWeb pages under third-party administration (a potential \nvector for social engineering). We further designed the \ntest to simulate a roughly equal number of authentic and \nphishing stimuli to avoid effective use of a trivial rating \nstrategy: If there were significantly more phishing stimuli \nthan authentic stimuli, subjects could employ an “ always \nphishing ” strategy that would correctly evaluate most of \nthe stimuli without exercising due consideration. \n Since the stimuli are only screenshots, their inauthen-\ntic features were designed to be evident on examination \n(rather than mouse-over or source analysis). For instance, \nincorrect domains are apparent in email hyperlinks; they \nare not disguised by an inconsistent href attribute. The \ndomains we chose to simulate inauthentic URLs were not \nin use at the time of testing, but some are owned by their \nrespective companies, others are owned by unrelated \ncompanies, and the rest appear to be unregistered, accord-\ning to the Whois database. Our use of these domains as \nrepresentations of inauthentic URLs is still valid because \nnone of these URLs exist with the content we present. \nWe outline the stimuli, their relevant features, and what \nwe hope to learn in Figures 31.2a and 31.2b . \n Authentic Payment Notification: Plain \nversus Fancy Layout \n These two email messages use actual payment notifica-\ntion text from Chase Bank (see sidebar, “ A Strong Phishing \nMessage ” ). The text personalizes its greeting and references \na recent payment transaction; there are no hyperlinks. 
One \n(a)\n FIGURE 31.2 (a) Plain layout.\n" }, { "page_number": 556, "text": "Chapter | 31 Identity Theft\n523\nversion uses the original layout (a one-color header contain-\ning the company logo followed by the message text); the \nother version uses an enhanced layout (a header that includes \na continuous tone shiny logo and a photograph of a satisfied \ncustomer, a smooth gradient footer graphic that spans the \npage with a gentle concave arch, hyperlinks to Privacy and \nTerms of Use, and a copyright notice; these graphics were \nadapted from the Web page at www.bankone.com ). \n(b)\n FIGURE 31.2 (Continued) (b) fancy layout. \n A Strong Phishing Message \n Dear John Doe, \n JPMorgan Chase & Co. is proud to serve you as a former Bank One client. \n Chase Online’s patented ePIN technology is used both for eDebit transactions and for physical ATM access. While we \nwill support the legacy 4 digit PIN for the remainder of the year, until December 31, 2006, we invite you to register for the \nePIN program through our secure online server: \n https://www.chase.ePIN-simplicity.com \n If you have any questions about this or other Chase programs, do not hesitate to call our toll-free customer service \nnumber, 1-877-CHASEPC. \n Sincerely, \n Client Services \n JPMorgan Chase & Co. \n" }, { "page_number": 557, "text": "(c)\n(a)\n FIGURE 31.3 (a) Using a plain layout schema (b) using a fancy layout schema; and (c) using a \nplain and fancy layout schema. \n(b)\n" }, { "page_number": 558, "text": "Chapter | 31 Identity Theft\n525\n Strong Phishing Message: Plain Versus \nFancy Layout \n We constructed the phishing message text to sound as plau-\nsible as possible. Opening with a personalized g reeting, the \nmessage explains that former Bank One c ustomers will \nneed to register for Chase’s ePIN program — a replace-\nment for ATM PINs that is also bundled with a new eDebit \nonline service. It implicitly threatens service discontinua-\ntion by supporting “ legacy 4 digit PINs for the rest of the \ncalendar year. ” The bogus registration hyperlink uses the \nmade-up URL https://www.chase.ePIN-simplicity.com . \nThe message closes with a bogus phone number to call for \nassistance. The two versions of this message use the same \nplain and fancy layout schema described earlier, with one \nexception: The fancy layout adds shiny letters proclaiming \n “ Bank One is now Chase ” (see Figure 31.3 ) between the \nheader and message text. \n Authentic Promotion: Effect of Small \nFooters \n Figure 31.4 shows an authentic message from the AT & T \nUniversal card that promotes the company’s paperless \nbilling system. It personalizes the greeting and includes \nthe last four digits of the account number. There are mul-\ntiple company logos, a blue outline around text, an Email \nSecurity Zone box, and a small-print footer filled with \ntrademark, copyright, and contact notices as well as various \n(a)\n FIGURE 31.4 (a) An authentic message from the AT & T Universal card that promotes the company’s paperless billing system.\n" }, { "page_number": 559, "text": "(b)\nFIGURE 31.4 (Continued) (b) an authentic message from AT & T Universal card that personalizes the greeting and includes the last four digits of \nthe account number; (c) header detail left side; (d) header detail right side; and (e) hyperlink detail. \n(c)\n(e)\n(d)\n" }, { "page_number": 560, "text": "Chapter | 31 Identity Theft\n527\ninformational and administrative hyperlinks. The principal \nlogin hyperlink conceals its destination. 
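A message of this kind can display one thing and link to another. The small sketch below pulls the real destination host out of an anchor tag and checks whether it belongs to the brand's expected domain; the HTML snippet reuses the made-up ePIN URL from the sidebar, and the check itself is an illustration rather than a tool used in the study.

    from html.parser import HTMLParser
    from urllib.parse import urlparse

    class LinkExtractor(HTMLParser):
        """Collect the host name of every href in the document."""
        def __init__(self):
            super().__init__()
            self.hosts = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.hosts.append(urlparse(value).hostname)

    snippet = '<a href="https://www.chase.ePIN-simplicity.com/">chase.com</a>'
    parser = LinkExtractor()
    parser.feed(snippet)

    expected = "chase.com"
    for host in parser.hosts:
        aligned = host == expected or host.endswith("." + expected)
        print(host, "belongs to", expected, ":", aligned)   # prints False for the cousin domain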
The test pair con-\nsists of the original message and a modified version that \nexcludes the small print footer. \n Weak Phishing Message \n The sidebar “ Phishing Message ” shows a phishing \nmessage promising $50 for opening an account with \n Phishing Message \n Dear Online Banker, \n Citibank has recently upgraded its online banking service \nto provide best-in-industry safety, security and overall better \nexperience. We’re so excited about it, we’ll pay you $50* \nto try it out! For a limited time if you open a FREE account \nwith Citibank and deposit at least $100, we’ll credit $50 \nto your account. It’s our way of saying “ Thanks for banking \nwith us! ” \n This offer is only valid for a limited time. \n Go to http://www.citibank.switch-today.com to start! We \nlook forward to serving you. \n Best Regards, \n Citibank Web Services \n *Offer valid for a limited time only. Account must be opened with $100 \nminimum deposit made by ACH transfer. Account must remain open for at \nleast six months or an early closing penalty will be assessed. See website \nfor full terms and conditions. \nCitibank. There is a simple company logo in the header; \na footer contains legal disclaimers about the offer. There \nis no personalization, and the lone hyperlink is a made-\nup domain (actually owned by an unrelated organiza-\ntion), www.citibank.switch-today.com . Figure 31.5 \ncontains its screenshot. The two versions differ only \nby the presence of a center-aligned “ VeriSign Secured ” \nendorsement logo that follows the footer’s legal \ndisclaimers. \n FIGURE 31.5 (a) Effect of \nendorsement logo.\n(a)\n" }, { "page_number": 561, "text": "PART | IV Privacy and Access Management\n528\n Authentic Message \n The test determines the impact of a “ VeriSign Secured ” \nlogo added to the footer of an authentic message, as \nshown in Figure 31.6 . The notice begins with a person-\nalized greeting and informs the client about changes \nin PayPal’s logo insertion policy. The message body is \nconsiderably longer than all of the other messages except \nfor the Netflix stimulus. The primary message contains \nno hyperlinks, but their small-print footer furnishes a \nhyperlink to unsubscribe from their newsletter. One \ninteresting feature of the message is a boldfaced state-\nment: “ If you do not wish to have PayPal automatically \ninserted in your listings, you must update your prefer-\nences by 9/25. ” Though genuine, this message parallels \nthe account shutdown threats brandished by many phishing \nmessages. The header contains a monochrome company \nlogo and a two-tone horizontal separation bar. \n Login Page \n We say that the URL is strongly aligned with the content \nof the page when these two “ belong together. ” Imagine that \none would look at some ten Web pages (without associated \nURLs), and then some ten randomly ordered URLs, each \none belonging to one of the ten Web pages. The easier it \nis for a potential reader to correctly pair up Web pages \nand URLs, the stronger the alignment. If any Web page \nand associated URL is not correctly matched up, then the \nalignment is very weak. \n Let’s now turn to an example, as shown in Figure 31.7 . \n This browser’s content window displays an exact copy \nof the AT & T Universal card login page. Like most Web \nlogin pages, it displays a high level of layout sophistica-\ntion: photographs of happy clients, navigation bars, product \npictures, a sidebar, promotional windows, and small-print \nlegal disclaimers. 
It also displays a “ VeriSign Secured ” site \nFIGURE 31.5 (Continued) (b) center-\naligned “ VeriSign Secured ” endorse-\nment logo. \n(b)\n" }, { "page_number": 562, "text": "Chapter | 31 Identity Theft\n529\nendorsement logo. The two versions of this stimulus dif-\nfer by their address bar contents: version (a) uses https://\nwww.accountonline.com/View?docId \u0003 Index & siteId \u0003 AC \n& langId \u0003 EN and consequently displays a browser frame \npadlock; version (b) uses the unencrypted URL http://\nwww.attuniversalcard.com/ (owned by AT & T but not \nin use). \n Login Page: Strong and Weak Content \nAlignment \n This next set takes the alternative approach to aligning \nthe address bar URL with content: change the content. \nBoth pages (see Figure 31.8 ) use the unregistered URL \n www.citicardmembers.com/ . Version (a) displays a \n(a)\n FIGURE 31.6 (a) “ VeriSign Secured ” logo added to the footer of an authentic message.\n" }, { "page_number": 563, "text": "PART | IV Privacy and Access Management\n530\n(b)\n(c)\n(d)\nFIGURE 31.6 (Continued) (b) authentic message — effect of endorsement logo; (c) shared body detail; (d) endorsement footer detail. \n" }, { "page_number": 564, "text": "(a)\n FIGURE 31.7 (a) Strong and weak URL alignment; (b) weak alignment URL detail; (c) strong alignment URL detail. \n(b)\n(c)\n" }, { "page_number": 565, "text": "PART | IV Privacy and Access Management\n532\np recise copy of the authentic Citi Credit Cards login \npage in its content window, whereas the version (b) \ncontent window displays modified logos and links (see \n Figure 31.9 ) for better alignment with the URL. \n Figures 31.8a and 31.8b use a sophisticated lay-\nout with nearly all the same identifiable features of the \nAT & T Universal Card login: photographs of happy \nc lients, navigation bars, product pictures, sidebars, pro-\nmotional windows, and the “ VeriSign Secured ” logo. \n Login Page: Authentic and Bogus \n(But Plausible) URLs \n These two stimuli test the impact of changing a well-\naligned authentic URL (see Figure 31.9 ), http://www.\npaypal.com/ebay , to a reasonably well-aligned bogus \nURL, http://www.ebaygroup.com/paypal (domain owned \nby eBay but not in use). The main content window is the \nsame for both: The eBay decorated version of the PayPal \nlogin page, which contains an eBay logo to the lower \nright of the primary PayPal logo. The page layout con-\ntains all the main features of the previous login pages but \nincludes a more thorough set of third-party endorsement \nlogos: “ VeriSign Secured, ” “ Reviewed by TRUST-e, ” and \n “ Privacy: BBB OnLine. ” SSL is not used in either stimulus. \n Login Page: Hard and Soft Emphasis \non Security \n Figure 31.10 tests whether it is possible to undermine \nconfidence in an authentic login page with excessive \n(a)\n FIGURE 31.8 (a) A precise copy of the authentic Citi Credit Cards login page in its content window. \n" }, { "page_number": 566, "text": "Chapter | 31 Identity Theft\n533\n(b)\nFIGURE 31.8 (Continued) (b) content window displays modified logos and links; (c) location of detail window; (d) original and modified detail \nwindow; (e) header menu of detail window; (f) Cardmember sign on detail window; (g) footer of detail window; and (h) Cardmembers of detail \nwindow. 
\n(c)\n(d)\n(e)\n(h)\n(g)\n(f)\n" }, { "page_number": 567, "text": "PART | IV Privacy and Access Management\n534\n(a)\n FIGURE 31.9 (a) A reasonably well-aligned bogus URL.\n" }, { "page_number": 568, "text": "Chapter | 31 Identity Theft\n535\nconcern about security and online fraud. These two \nstimuli represent an extreme but real-world case. Clients \nof the Indiana University Employees Federal Credit \nUnion (IUCU) were targeted by a phishing attack in \nearly August 2006. In response, the credit union altered \nits Web page to include a large banner, as shown in \n Figure 31.10f . \n They further augmented the news section with a simi-\nlar message: “ Warning! Phishing Scam in progress (learn \nmore). ” Finally, a section named “ Critical Fraud Alerts ” \ncontained the exact same warning as the one from the \nnews section. The twin page eliminates all phishing warn-\nings (including the banner) and changes the “ Critical \nFraud Alerts ” section heading to read “ Fraud Prevention \nCenter. ” Generally, the language was changed to sound \n “ in control ” rather than alarmist. \n Bad URL, with and without SSL and \nEndorsement Logo \n Can an endorsement logo and SSL padlock overcome \na bad domain name? This next set, as shown in Figure \n31.11 , tests these two features on a Wells Fargo phishing \nsite based on the bogus domain www-wellsfargo.com . \nThe login page is similar in layout to the others but does \nnot feature photographs of people. The only continuous \ntone graphic is an image of a speeding horse-drawn car-\nriage that evokes a Wild West money transfer service. \nOne screenshot contains the original page content using \nan unencrypted connection; the other stimulus uses SSL \nand adds a center-aligned “ VeriSign Secured ” logo to the \npage’s footer. \n High-Profile Recall Notice \n At the time of testing (10/02/2006 – 10/12/2006), the press \nhad been alive with recent reports 10 of laptop battery \nrecalls from both Dell and Apple. Sony, the source of the \nbatteries, ultimately issued a direct recall for the same \nbatteries, by adding several more brands on 10/23/2006. \n As shown in Figure 31.12 , this test set does not follow \nthe controlled-pair format of the previous stimuli. One \nstimulus is a screenshot of the official Dell Battery Return \nProgram Web page. The page layout is substantially sim-\npler than all other Web stimuli. A four-color header logo \nadorns the top of the page. It presents the content as a let-\nter to “ Dell Customer, ” explaining the danger and how \nto determine eligibility for exchange. Notably, there is a \nsingle column of content, no photos, and no promotional \ncontent of any kind. They use the third-party domain dell-\nbatteryprogram.com. Use of this domain for the official \npage makes the replacement service ripe for phishing. \n We constructed a phishing email message using the \nheader, footer, and textual content from this Web page. \nThe phishing message omits the middle section on how \nto identify eligible batteries and instead requests that \nthe recipient go to the bogus Web page at http://www.\ndellbatteryreplacements.com . \n Low-Profile Class-Action Lawsuit \n As shown in Figure 31.13 , this last set of stimuli also fol-\nlows the third-party email and Web page form of the pre-\nvious set. Both stimuli are authentic, but they use altered \ndates to appear relevant at the time of testing. 
The email \nmessage is a lengthy notice that describes a class-action \nlawsuit against Netflix, a settlement to the lawsuit, and \noptions for claiming benefits. There is no greeting, sig-\nnature, color, or graphics. The only hyperlinks direct \nthe user to the authentic third-party URL, http://www.\nnetflixsettlement.com . The Web page has a similarly \nbare appearance but with much less text. It behaves as a \nhyperlink gateway for more information under the URL \n http://www.netflix.com/settlement/ (the result of redirec-\ntion from http://www.netflixsettlement.com ). \n 2. RESULTS AND ANALYSIS \n Our experiment directly controls for the effect of several \ndesign features. There are some surprises in the direct \n 10 D. Darlin, “ Dell will recall batteries in PCs, ” New York Times , 15 \nAugust 2006, http://select.nytimes.com/search/restricted/article?res \u0003 \nF10A1FF83C5A0C768DDDA10894DE404482 . \n(b)\n(c)\n(d)\nFIGURE 31.9 (Continued) (b) logo detail; (c) bogus URL detail; and \n(d) authentic URL detail. \n" }, { "page_number": 569, "text": "PART | IV Privacy and Access Management\n536\n(a)\n FIGURE 31.10 (a) Soft emphasis on security; (b) hard emphasis on security; (c) soft security detail; (d) hard security detail; (d) hard security \ndetail 1; (e) soft security detail 2; and (f) Web page warning. \n(c)\n(d)\n" }, { "page_number": 570, "text": "Chapter | 31 Identity Theft\n537\n(b)\n(e)\n(f)\nFIGURE 31.10 (Continued)\n" }, { "page_number": 571, "text": "PART | IV Privacy and Access Management\n538\nresults from these tests, including the stunning impact of a \ndetailed small print footer on an otherwise well-conceived \nlegitimate message; however, the experiment reveals \nan unexpected, but in retrospect obvious, lesson about \nemail messages: The “ story ” of the message is critical. \nMessages with strong and succinct narrative components \nrated highly and their ratings appear to be less susceptible \nto changes in graphic design. On the other hand, authen-\nticity perception changed significantly for messages that \nsay little (such as a service promotion) under document \nfeature variances. Two of the five sets of “ twins ” did not \nchange significantly, according to the metric, when aug-\nmented with the very same features that produced sig-\nnificant changes in other messages. Subjects judged these \nmessages principally on their narrative content. \n The Chase phishing message uses the company’s \nrecent acquisition of Bank One as a pretext for imminent \nservice change to ATM card authentication; the story \nfurther bundles this change with the addition of a new \nservice, eDebit, and implicitly threatens discontinuation \nof service by claiming to “ support the legacy 4 digit PIN \nfor the remainder of the year. ” This consequence is much \nless direct than the standard “ Your account will be sus-\npended in 48 hours if ... ” strategy used by many phish-\ners. It works on a less urgent time scale and is pitched \nas a convenience to all clients rather than an anomaly \nspecialized to a specific client. There was no significant \ndifference in subject evaluation between the plainly for-\nmatted version of this message with the monochrome \ncorporate header and the one with several customized \ncontinuous tone graphics. \n Yet these same graphics, minus the shiny “ Bank One \nis now Chase ” banner, produced a significant change \n( p \u0003 0:018) in evaluations of Chase’s authentic pay-\nment notice. 
The payment notice thanks the receiver for \na recent online payment, then goes on to inform the user \n(a)\n FIGURE 31.11 (a) Without SSL and endorsement logo.\n" }, { "page_number": 572, "text": "Chapter | 31 Identity Theft\n539\n(b)\n(c)\n(d)\n(e)\n(f)\n(g)\nFIGURE 31.11 (Continued) (b) with SSL and endorsement logo; (c) no SSL/endorsement logo; (d) SSL/endorsement detail; (e) Internet no SSL/\nendorsement status bar; (f) Internet SSL/endorsement status bar; and (g) VeriSign logo. \n" }, { "page_number": 573, "text": "PART | IV Privacy and Access Management\n540\n(a)\n FIGURE 31.12 (a) The official Dell Battery Return Program Web page.\n" }, { "page_number": 574, "text": "Chapter | 31 Identity Theft\n541\nabout features of their online account management inter-\nface. In terms of relevance, there is less potential impact \non the user. Ignoring this message won’t expose the cli-\nent to any changes (good or bad). Assuming that the \nname and recent online interaction are correct, the mes-\nsage communicates little that would surprise the aver-\nage client. So, in place of strong narrative components, \nthe subject looks to formatting cues to further inform \nconfidence. This message rated highly in its simple \nform and was pushed higher by the improved graphics. \nWe attribute its initially high rating to its exclusion of \nhyperlinks, informative nature, and well-contextualized \nmessage. \n Subject reactions to the presence of the “ VeriSign \nSecured ” logo differed dramatically between the two test \nmessages. One message, a change of policy notice from \nPayPal, experienced no statistical difference in subject \nevaluations of its endorsed and unendorsed forms. The \n(b)\n(c)\n(d)\nFIGURE 31.12 (Continued) (b) a letter to Dell customer; (c) genuine Web site URL detail; (d) bogus email body detail. \n" }, { "page_number": 575, "text": "PART | IV Privacy and Access Management\n542\n(a)\n FIGURE 31.13 (a) Low-profile class-action suit.\n" }, { "page_number": 576, "text": "Chapter | 31 Identity Theft\n543\npolicy change notice shares several narrative features \nwith the Chase phishing message: Both messages have a \ncustomized salutation, both inform users about an insti-\ntution wide change (in this case, the particulars of their \nlogo insertion policy), and both claim that inaction will \nresult in a change of service. The PayPal message has \nno hyperlink in the message body but does contain a link \nin its small-font gray footer to manage user preferences; \nwe think that this forecasts a potential phishing strategy. \n The other message that tests the impact of the “ VeriSign \nSecured ” logo is a phishing message that exploits the \nCitibank brand. The experiment shows a statistically signif-\nicant change in subject evaluations ( p \u0003 0:047) due to this \nsingle feature change. Of all the messages, this message \nmakes the weakest connections to the receiver. It begins \nwith a generic salutation. Worse, the first sentence promotes \nthe goodness of their online service but fails to involve the \nreceiver in any way. Not until the second sentence does the \nmessage’s relevance become evident to the reader: They are \noffering “ $50* to try it out! ” These two messages had the \nlowest average ranking of all the email stimuli. Ignoring \nthis message has no impact on the user except for failing to \nmiss out on an offer of dubious value. 
Ultimately, the stim-\nulus fails to engage the reader, and so subjects base more of \ntheir evaluation on nonnarrative factors such as the endorse-\nment logo and the bogus URL. \n The Dell battery replacement program message \npresents a compelling story, but not directly. The incident \nreceived a high level of media coverage due to spectacu-\nlar reports of exploding and burning laptop computers. \nThe Dell message, which we manufactured, benefits \nfrom other sources spreading the story. Without this \nthird-party validation, this message could have bordered \non implausibility, but instead our subjects produced rat-\nings that were statistically indistinguishable from the \ntwo most highly ranked email messages in the batch. \nThis message contains a slightly nicer-than-average lay-\nout (multicolored header, footer graphic with links) but \nless personalization and a fraudulent (but semantically \nplausible) URL. The story was so powerful and present \nin the subjects ’ minds that they were willing to discount \nthe suspicious link and generic greeting. \n(b)\nFIGURE 31.13 (Continued) (b) a lengthy notice that describes a class-action lawsuit against Netflix, a settlement to the lawsuit, and options for \nclaiming benefits. \n" }, { "page_number": 577, "text": "PART | IV Privacy and Access Management\n544\n The other third-party attack (NetFlix) did not benefit \nfrom recent or high-profile media coverage (see Table \n31.1 ). We may have further lowered its rating by altering \nthe dates to appear relevant at the time of testing. \n Subjects could have perceived the timeline as \nimplausibly long (even for legal action) or may have \nbeen familiar with the case and known that the dates \nwere incorrect. In addition to these changes, the origi-\nnal message is particularly poorly conceived. Though \nit has strong narrative elements that present lawsuit \ncontext, the elements of the settlement, and response \noptions, the message is entirely too long. Message \nlength and detail create an incentive for users to \nquickly evaluate a ccording to nonnarrative features. \nThe most visually obvious features are the inclu-\nsion of blue hyperlinks — the only non-black-and-white \nsymbols — that \npoint \nto \n www.netflixsettlement.com . \nThough this is the legitimate domain, it should raise suspi-\ncion because it is an apparent “ cousin domain ” to the par-\nent company’s Web site. The lack of strong design features \nseals its poor evaluation. There is no company header — a \nfeature present on every other stimulus — and no opening \nsalutation or signature. The contact address appears to be \nan afterthought that does not even specify a division of the \ncompany, let alone an appropriate administrator. \n The biggest surprise of the test appears in a pair \nof authentic AT & T Universal card messages. In many \nways, this is the polar opposite of the Netflix settlements \n TABLE 31.1 The Other Third Party Attack \n Stimulus Description \n Mean \n Diff. 
\n \r2 \n p \n Chase card payment statement (legit)–plain layout \n Chase card payment statement (legit)–fancy layout \n 3.40\n3.76 \n 0.36 \n \n 11.89 \n \n 0.018 \n \n Chase phish-fancy layout \n Chase phish-plain layout \n 3.19\n3.18 \n 0.02 \n 6.31 \n 0.177 \n AT & T Universal Card statement without legal notices \n AT & T Universal Card statement with legal notices \n 3.05\n3.66 \n 0.62 \n 30.18 \n \n 0 \n PayPal policy change \u0002 VeriSign \n PayPal policy change – no VeriSign \n 3.19\n3.30 \n 0.11 \n 5.75 \n 0.219 \n Citibank phish – no VeriSign \n Citibank phish \u0002 VeriSign \n 2.40\n2.69 \n 0.29 \n 9.62 \n 0.047 \n AT & T Universal card login\n https://www.accountonline.com/View?docId \u0003 Index & siteId \u0003 AC & langId \u0003 EN \n http://www.attuniversalcard.com \n 2.76\n3.25 \n 0.49 \n 15.46 \n 0.004 \n Citbank (phish); URL \u0003 http://www.citicardmembers.com/ \n Copy of original site; \n Logos modified to better match domain \n 3.11\n3.43 \n 0.32 \n 7.06 \n 0.133 \n PayPal Web site displaying eBay logo: \n URL \u0003 http://www.ebaygroup.com/paypal/ \nURL \u0003 http://www.paypal.com/ebay/ \n 3.35\n3.70 \n 0.35 \n 8.83 \n 0.065 \n Indiana University Credit Union homepage:\nDeemphasizes security language; no mention of “ attacks ” \n Phishing attack banner \u0002 strong fraud warnings \n 3.69\n3.37 \n 0.32 \n 11.45 \n 0.022 \n Wells Fargo phishing page: \n Reproduces original content;\nURL \u0003 http://www-wellsfargo.com/ \n Adds VeriSign endorsement; uses SSL;\nURL \u0003 https://www-wellsfargo.com/ \n 3.17\n3.48 \n 0.31 \n 10.83 \n 0.029 \n Netflix class-action settlement email (authentic) \n Netflix class-action settlement homepage (authentic) \n 2.72\n2.55 \n \n \n \n Dell battery replacement email (phishing)\nDell battery replacement Web page (authentic) \n 3.61\n3.54 \n \n \n \n Note: The first section of the table reports on the differences between email messages; the next section reports on the Web pages; and the last section \ngives the average rating for the third-party attacks. \n" }, { "page_number": 578, "text": "Chapter | 31 Identity Theft\n545\nmessage: it has strong design elements and a short, weak \nnarrative. The message promotes AT&T’s Statements \nOnline Only program without bundling it with a recent \naction (as Chase does with its payment notification). \nIgnoring this message will not produce any change in \nthe receiver’s service, nor does enrolling in the program \nprovide any obvious benefit to the client; in fact, enroll-\nment could result in unintended late payments due to \nimperfect spam interdiction of electronic billing notices. \nWhat the message lacks in narrative appeal it makes up \nfor in design strength. It customizes the message to the \nreceiver both in the opening salutation and in an “ Email \nSecurity Zone ” header box that displays client name and \nclient account number suffix. The header also has a spam \nawareness message and company logo. A blue outline that \ncomplements the company logo encloses the rest of the \ncontent. The corporate logo appears a second time within \nthe blue content box and the letter opens and closes \nwith personalized salutations: “ Dear John Doe, ” and \n “ Sincerely, Julie A. Garry. ” The two versions of this mes-\nsage differ in the presence of a detailed small-print footer \nbelow the signature, which contains hyperlinks to privacy \nand security policies, as well as hyperlinks bearing the \nuniversalcard.com domain. 
The footer uses a small gray \nfont and presents text for adjusting “ Email Preferences ” \nand a “ Help/Contact Us ” section containing the postal \naddress and various trademark and copyright notices. \n The one ambiguous feature of this email is a centrally \nlocated hyperlink, labeled “ log in to Account Online. ” It \ndoes not indicate the URL in the text. Phishers frequently \nemploy this sort of hyperlinking strategy to conceal the \nbogus server’s URL. The footer may add confidence \nbecause its hyperlinks appear to reference URLs with \nlegitimate and semantically aligned domain names; \nnone of the hyperlinks outside the header indicate a tar-\nget domain. Alternatively, the contact, copyright, and \ntrademark notices themselves may improve confidence \nin the message. It is particularly interesting that even \nthough the footer-less message displayed the last four \ndigits of the credit-card number, customized the greeting, \nand employed generally strong design elements, except \nfor the Netflix settlement, it was still ranked lower than \nany other legitimate message. This supports the experi-\nmental results in 11 that indicate indifference to custom-\nized greetings in certain contexts. Yet adding the footer \nboosts the evaluations to the point where it is statistically \nindistinguishable from the other two most highly ranked \nmessages — the Chase payment notification with fancy \ngraphics and the Dell battery recall notice. \n Web sites, particularly the login and information collec-\ntion pages associated with phishing scams, do not present \na story the way email messages do. For this reason, their \ncredibility depends much more on document features and \ngraphic design. Subjects assigned significantly different \nratings to three of the five sets of twin stimuli. The results \nshow that address bar alignment with page content, over-\nwrought concerns about fraud, and third-party endorse-\nments substantively change authenticity assessments. \n The biggest rating difference among the Web page \nstimuli was measured between the two versions of \nthe “ AT & T Universal Card Sign-on ” page. The offi-\ncial version, which uses the address URL https://www.\naccountonline.com/View?docId \u0003 Index & siteId \u0003 A\nC & langId \u0003 EN , has the lowest average rating of the \nten Web stimuli in the paired testing. With an average \nrating of 2.76, it rated lower than a simulated phish-\ning Web site based on the suspiciously formed domain \n www-wellsfargo.com (avg. rating: 3.17). Subjects who \nsaw the unused domain http://www.attuniversalcard.\ncom in the address bar of the AT & T card login page \nrated it significantly higher ( p \u0003 0:004) than the authen-\ntic page. The page content strongly aligns with the URL \ntext http://www.attuniversalcard.com : the phrase “ AT & T \nUniversal ” appears no less than seven times on the \nlogin page, whereas the phrase “ AT & T Universal Card ” \nappears four times in the content window. Interestingly, \nthe official page uses HTTPS and displays an SSL pad-\nlock in the lower right-hand browser frame, whereas the \nattuniversalcard.com domain does not use SSL and con-\nsequently does not display the padlock on the browser \nframe. Subjects found the semantic alignment of the \nURL to be a much stronger indicator of authenticity than \nSSL utilization. 
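 A certificate inspection would in fact have identified the site's true operator, since the certificate names the host and organization it was issued to. As a rough sketch of what such a check involves (the hostname is a placeholder, not one of the study stimuli, and the code assumes only Python's standard ssl and socket modules), the following connects to a live server and prints the certificate's subject and issuer:

    import socket
    import ssl

    def describe_certificate(hostname, port=443):
        # Connect over TLS and report who the server's certificate was issued to.
        # create_default_context() also verifies the chain and the hostname.
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        subject = dict(item for rdn in cert["subject"] for item in rdn)
        issuer = dict(item for rdn in cert["issuer"] for item in rdn)
        print("Issued to:", subject.get("commonName"), subject.get("organizationName"))
        print("Issued by:", issuer.get("organizationName"))

    describe_certificate("www.example.com")  # placeholder host
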
In fairness, subjects could not examine \nthe certificates, nor would they have been subject to the \n “ Unable to verify the identity of host ” pop-up window in \nthe case of a self-signed certificate. Nevertheless, other \nuser studies have found that in practice, subjects rarely \nconsider these factors. 12, 13 \n 11 M. Jakobsson and J. Ratkiewicz, “ Designing ethical phishing \nexperiments: a study of (rot13) ronl query features, ” In WWW ’ 06: \nProceedings of the 15th International Conference on World Wide Web , \npp. 513 – 522, New York, 2006, ACM Press. \n 12 R. Dhamija, J. D. Tygar, and M. Hearst, “ Why phishing works, ” \nIn CHI ’ 06: Proceedings of the SIGCHI Conference on Human \nFactors in Computing Systems, pp. 581 – 590, New York, 2006, ACM \nPress. \n 13 M. Wu, R. C. Miller, and S. L. Garfi nkel, “ Do security tool-\nbars actually prevent phishing attacks? ” In CHI ’ 06: Proceedings of \nthe SIGCHI Conference on Human Factors in Computing Systems , \npp 601 – 610, New York, 2006, ACM Press. \n" }, { "page_number": 579, "text": "PART | IV Privacy and Access Management\n546\n Much to our surprise, the phishing simulation based on \nthe URL www-wellsfargo.com rated significantly higher \n( p \u0003 0:00001) than the official AT & T Universal card login \npage. Subjects valued semantic alignment between con-\ntent and host domain more than domain well-formedness. \nSyntactically there is nothing wrong with the domain, \nbut replacing the dot with a dash is clearly an attempt at \ndeception. Adding SSL and a “ VeriSign Secured ” logo \nto the Wells-Fargo phishing page produced a significant \nimprovement in authenticity ratings ( p \u0003 0:029). It’s worth \nnoting that the authentic login page (not in the test) does \nnot display a VeriSign logo but does use SSL. In spite of \nsubjects either failing to notice the dash-for-dot exchange \nor not thinking that it was suspicious, they did notice the \npresence of either SSL or the VeriSign logo. Note that the \nAT & T login page also used SSL and displayed a VeriSign \nendorsement, but neither of these features could overcome \nthe mistrust of the accountonline.com domain. \n The last statistically significant difference between \ntwins in the Indiana University Credit Union homepage \n( p \u0003 0:022) shows that too much concern about security \ncan reduce customer confidence. Subjects responded \npositively to use of less fearful language and rated the \nsofter, more constructive content significantly higher \nthan the page that displayed stark warnings. Note that \ncorrect domain names and SSL were used on both stim-\nuli. This is a case where a good-faith effort to educate \nclients about phishing undermines confidence in the Web \nsite’s authenticity. Login pages are no place for fear-\nprovoking messages. \n One way phishers align page content with URLs is by \nchoosing an apt domain name; the other way is to change \nthe page content. Though subjects gave a higher average \nrating to our modified Citibank cardmembers login page, \nthe test showed that the two distributions were not sig-\nnificantly different ( p \u0003 0:133). \n The last test pair was nearly significant ( p \u0003 0:65) \nbut does not confirm that the two ratings come from dif-\nferent distributions. This pair compared the effects of a \nplausible parent company domain and subsidiary sub-\ndirectory, http://www.ebaygroup.com/paypal/ , with the \nauthentic URL that reverses their positions, http://www.\npaypal.com/ebay/ . 
Although the ebaygroup.com domain \nis unbound, eBay has registered it. Nevertheless, this test \nshows a certain flexibility in user acceptance of domain \nalignment. No PayPal client has seen the PayPal page \ndisplayed under the http://www.ebaygroup.com/paypal/ \naddress, yet their authenticity ratings are not signifi-\ncantly different. This result furthers our conviction that \nsemantic alignment between content and URL is a prin-\ncipal factor in authenticity evaluations. \n The last two Web stimuli sets are not twins; they are the \nDell battery replacement page and the Netflix settlement \npage. Both pages are authentic, although the content of \nthe Netflix page was altered to appear relevant at the \ntime of testing. They received polar opposite ratings. \nThe Dell battery page was statistically indistinguishable \nfrom the highest-rated page (the authentic PayPal site), \nand the Netflix page rated dead last — significantly lower \nthan the second lowest rating ( p \u0003 1:20 10 \t 7 ). As men-\ntioned before, the Dell battery program stimuli benefit \nfrom a high visibility news story. It’s noteworthy that the \nURL in the Web page (authentic) is different from the \nURL in the email (phishing), which subjects saw first. \nSubjects did not penalize the Web page for this incon-\nsistency. Subjects may have had difficulty constructing \na phishing scenario based on the informational nature of \nthe page; there is no request for personal information. \n Similarly, the Netflix settlement page does not make \nany overtures for personal information. Even more sur-\nprising is that the URL http://www.netflix.com/settle-\nment/ aligns well with the content. Subjects may have \ndismissed the page based on mistrust of the email stim-\nulus, which they viewed prior to (several screenshots \nbefore) the Web stimulus. The Netflix page is notable \nfor its brevity and unsophisticated layout. It is the only \npage without graphics or logos of any kind. There are \nno apparent links back to the primary Netflix page. With \nthe exception of the blue underlined hyperlinks and gray \nmargins, the page is black and white. We take from this \nrating that utilizing minimalist design is a poor strategy \nfor unsolicited communications, even for important and \nserious matters such as law. \n 3. IMPLICATIONS FOR CRIMEWARE \n The experiment focused on design features and their effect \non people’s ability to distinguish authentic email and Web \nsites from malicious forgeries. Although presented in the \ncontext of phishing, we do not measure how often subjects \ndisclose passwords or other sensitive data; rather, we iden-\ntify design principles that convey authenticity. Just as phish-\ning bait promises resolution upon revealing information, \nsocial crimeware bait may promise resolution contingent \nupon installing browser extensions or accepting self-signed \nJava applets. Presenting a convincing false identity to the \nvictim is essential in both contexts. Toward this end, our \nresults forecast the following social engineering tactics: \n ● Construct messages with weak narratives (border-\ning on innocuous) but use strong design elements \n(graphics, small-print footer, endorsement logos) \n" }, { "page_number": 580, "text": "Chapter | 31 Identity Theft\n547\nand identifying information to improve authenticity \nimpressions. \n ● Use softer bait. Messages that do not encapsulate an \nimminent request for information, such as the Dell \nbattery bait, rated highly in the test. 
\n ● Use plausibly unfamiliar administration pages; for \nexample, the Dell Battery Return Program Web site \nprovides a service that is not typically seen, such as a \nlogin page (so visitor expectations are less concrete). \n ● Leverage high-profile news to produce messages \nwith credible and strong narratives. Personalization \nwill be less important in these cases. \n ● Align domain names with page content. Although sub-\njects were turned off by semantic mismatches between \ndomain names and content, they were insensitive to \nmalformed links ( http://www-wellsfargo.com ). \n With respect to the final point, the “ rock-phish ” \ngang has proven that effective domain alignment can be \nachieved through deceptive subdomains, 14 the control of \nwhich is delegated to the domain owner rather than the \nregistrar. The following URL, from a social engineering \nattack in the wild, illustrates this tactic: \n www.paypal.com.cgi.bin.account.webscr.cmd.login.\nrun.php.draciimasi.info/webscr.php?cmd \u0003 Login \n The registered domain is draciimasi.info, but the \nowners have prepended it with a deep subdomain. Since \nsubjects accepted the substitution of a dash for a dot in \n www-wellsfargo.com , they could easily accept a dot for \na slash, as above. Moreover, the preponderance of sub-\ndirectory names such as cgi, bin, webscr, and the like \nfurther clouds the issue for the technically uninformed. \nThis tactic may be particularly effective for download \npages because they tend to be buried several directories \ndeep; login pages, on the other hand, are frequently in \nthe root of a domain. \n Example: Vulnerability of Web-Based \nUpdate Mechanisms \n Legitimate Web sites often make their services contingent \nupon changing settings, installing extensions, or accept-\ning certificates. One important example is Microsoft’s \nWindows Update Web site. It scans the client for instal-\nlation detail through ActiveX extensions. When access-\ning the Web site through a professionally managed client \nat Indiana University, an update is not possible because \nthe administrators have disabled the service. However, \nthe Web site (see Figure 31.14 ) suggests workarounds \ninvolving settings changes. \n None of these suggestions will enable remote update \nfor this professionally managed computer, but subtle \nchanges to the instructions could cause unsophisticated \nusers to disengage important access controls. For exam-\nple, the user could have been instructed to enter Add-\nons Enabled mode. Subsequent installation of malicious \nadd-ons will lead to a compromise. As long as the host’s \nidentity has been convincingly spoofed, users will be \nvulnerable to these kinds of attacks. \n Example: The Unsubscribe Spam Attack \n This attack leverages the first two tactics we discussed: \nweak narrative combined with strong design elements and \nsofter bait. Some of the most highly rated email messages \navoided hyperlinks in the main message text. The Chase \naccount payment was completely devoid of hyperlinks and \ninstead directed receivers to type www.chase.com into their \naddress bar. Similarly, the PayPal message had no hyper-\nlinks in its body, but it included a hyperlink in the footer \nto change preferences. The highly rated AT & T Universal \ncard promotion also contains links in its small-print footer. \n 14 T. Moore and R. 
Clayton, “ An empirical analysis of the current \nstate of phishing attack and defense, ” The 2007 Workshop on the \nEconomics of Information Security (WEIS 2007), 7 – 8 June 2007. \n FIGURE 31.14 Suggested workarounds involving settings changes. \n" }, { "page_number": 581, "text": "PART | IV Privacy and Access Management\n548\n The attacker will send out promotional email that \nappears to come from the spoofed institution. The \npromotion would employ a weak narrative to shift user \nattention to a plethora of design features (graphical header, \nfooter, small print, personalization, genuine but unlinked \nURLs in the body, and so on). The body will generate \nthe perception of authenticity by referring the receiv-\ners to the phone number on back of her credit card or by \nrequiring users to manually type in the promotional URL. \nAmong the design features is a small-print footer with an \nunsubscribe hyperlink. This link will take users to a Web \npage that spreads crimeware simply by loading malicious \nJavaScript code, like a “ drive-by pharming attack. ” \n The bait message gets users to click on the link indi-\nrectly: annoyance with the volume of unsolicited mes-\nsages. No suspicion is aroused through directions to \nchange settings; the malware spreads on load. \n The Strong Narrative Attack \n The strong narrative attack engages the receiver with a \nplausible story, often bundling actions to well-known \nnews stories. The Chase phishing message that promotes \nePIN to incoming Bank One customers is such an exam-\nple; it leverages in the news of the Bank One acquisition. \nThe Dell battery program stimuli gain most of their cred-\nibility from the story’s media coverage. The message \nmaintains this credibility by deferring the request for per-\nsonal information; standard attacks request an “ account \nlogin ” or “ settings update ” in the message body. This \nbattery exchange program could have been turned into a \n “ patch-now ” attack by claiming that a firmware or oper-\nating system fix would prevent overheating. \n Though scams that exploit strong narratives and cur-\nrent events are not new (many fraud cases capitalized \non the September 11, 2001, and Hurricane Katrina trag-\nedies 15, 16 ), our research suggests that they are less influ-\nenced by design features. This finding is supported by \nthe persistence of the Nigerian code 419 advance fee \nscams. 17 One widespread form of this attack entices vic-\ntims with a story of the death of a foreign dignitary and \nthe need to move large amounts of money (allegedly to \nprotect it from corrupt enemies); they offer the victim \na cut for moving the money. After drawing the victim \ninto this illusion, the scammers request advance fees \nto enable the transfer of money. These messages break \nmany design rules that promote trust: They use poor \nspelling and grammar, email messages are often plain-\ntext, return addresses are essentially anonymous using \nfree email accounts. Yet these scams still account for \nlarge amounts of Internet fraud, exceeding $3 billion in \nlosses according to some estimates. 18 \n 4. CONCLUSION \n This study tested the impact of several document features \non user authenticity perceptions for both email messages \nand Web pages. The influence of these features was con-\ntext dependent in email messages. We were surprised \nthat this context was shaped more by a message’s nar-\nrative strength, rather than its underlying authenticity. 
\nThird-party endorsements and glossy graphics proved \nto be effective authenticity stimulators when message \ncontent was short and unsurprising. The same document \nfeatures failed to influence authenticity judgments in \na significant way when applied to more involving mes-\nsages. Most surprising was the huge increase in trust \ncaused by a small-print footer in a message that already \nexhibited strong personalization with its greeting and \npresentation of a four-digit account number suffix. \n The data suggest a link between narrative strength \nand susceptibility to trust-improving document features, \nbut the experiment was not designed to test this hypoth-\nesis. Future work should characterize more precisely \nwhat kind of messages can benefit from these features \nand what kind of messages are resistant to their sway. \n Since spoofed Web page content need not differ from \nthe authentic pages, we focused three Web page tests on \nthe effects of semantic alignment between address bar \nURLs and page content. The first showed a clear statisti-\ncal preference for a simulated Web page whose domain \nname matched its content rather than the genuine page \nwhose domain was only weakly aligned with the same \ncontent. The second test, which created better alignment \nwith a bogus domain name by altering company logos, \nfailed to register a statistically significant change in \nauthenticity ratings. The third test compared an authen-\ntic page (and URL) with an authentic version of the \nsame page content paired with a well-aligned but bogus \nURL; the results which favored the genuine URL were \n 15 Fraud Section, Criminal Division, U.S. Department of Justice. \nSpecial report on possible fraud schemes, www.usdoj.gov/criminal/\nfraud/WTCPent-SpecRpt.htm , 27 September 2001, retrieved December \n2006. \n 16 B. Krebs, “ Katrina phishing scams begin, ” WashingtonPost.com: \nSecurity Fix, 31 August 2005. \n 17 M. Zuckofi , “ The perfect mark: How Massachusetts psychothera-\npist fell for a Nigerian e-mail scam, ” The New Yorker , 15 May 2006, \n www.newyorker.com/fact/content/articles/060515fa_fact . \n 18 Ultrascan Advanced Global Investigations, “ Advance fee fraud in \n37 nations, ” www.ultrascan.nl/html/aff_37_countries.html , 25 March \n2006, retrieved December 2006. \n" }, { "page_number": 582, "text": "Chapter | 31 Identity Theft\n549\njust shy of statistical significance. In conclusion, we find \nthat URL can change authenticity ratings. \n This experiment also verified that it is possible to over-\nuse well-intended notices about security and fraud. We \nobserved a statistically significant negative effect of genu-\nine, but heavy-handed, fraud warnings. Another test showed \na statistically significant improvement in authenticity per-\nception when using SSL and a third-party endorsement \nlogo on a fraudulent Web page showing a suspiciously \nformed, but semantically well-aligned, domain name. \n The experiment simulated two sequences (one email \nand one Web page) that appeared to be third parties \ncharged with handling embarrassing incidents for their \ncorporate clients. Though separated by many variables, \none turned out to be among the most trusted stimuli \nin the test, whereas the other rank among the lowest. \nThe poorly ranked one, though authentic, broke all the \nrules: poor publicity, long and rambling message, use of \nthird-party domain names, and no graphics. 
The highly \nranked one (whose bogus email message was concocted \nby the authors) benefited from a widely publicized \nrecall m essage. The story overrode the message’s poor \npersonalization, illegitimate URL, and relatively simple \nlayout. \n These factors offer a glimpse into what kinds of \nsocial engineering tactics may be deployed in the future. \nWe describe an unsubscribed attack which contains an \ninnocuous message, many authenticity stimulating doc-\nument features, and an unsubscribe link that leads to \na noninteractively infectious Web site. Our tests with \nthird-party administration suggest that organizations in \nthe process of correcting an embarrassing incident are \nhighly vulnerable to social engineering attacks. Finally, \nour findings suggest some common pitfalls for legiti-\nmate Internet communications to avoid: overuse of fraud \nwarnings, utilization of poorly aligned domain names, \nfailure to use HTTPS for rendering login pages, and long \nor rambling email messages. \n" }, { "page_number": 583, "text": "This page intentionally left blank\n" }, { "page_number": 584, "text": "551\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n VoIP Security \n Dan Wing \n Cisco Systems \n Harsh Kupwade Patil \n Southern Methodist University \n Chapter 32 \n The Internet has become an important medium for com-\nmunication, and all kinds of media communications are \ncarried on today’s Internet. Telephony over the Internet \nhas received a lot of attention in the last few years because \nit offers users advantages such as long distance toll bypass, \ninteractive ecommerce, global online customer support, \nand much more. \n 1. INTRODUCTION \n H.323 and Session Initiation Protocol (SIP) are the two \nstandardized protocols for the realization of VoIP. 1 , 2 \nThe multimedia conference protocol H.323 of the \nInternational Telecommunication Union (ITU) consists \nof multiple separate protocols such as the H.245 for \ncontrol signaling and H.225 for call signaling. H.323 is \ndifficult to implement because of its complexity and the \nbulkiness that it introduces into the client application. 3 In \ncontrast, SIP is simpler than H.323 and also leaner on the \nclient-side application. SIP uses the human-readable pro-\ntocol (ASCII) instead of H.323’s binary signal coding. \n VoIP Basics \n SIP is the Internet Engineering Task Force (IETF) stand-\nard for multimedia communications in an IP network. It \nis an application layer control protocol used for creating, \nmodifying, and terminating sessions between one or more \nSIP user agents . It was primarily designed to support user \nlocation discovery, user availability, user capabilities, and \nsession setup and management. \n In SIP, the end devices are called user agents (UAs), \nand they send SIP requests and SIP responses to establish \nmedia sessions, send and receive media, and send other SIP \nmessages (e.g., to send short text messages to each other or \nsubscribe to an event notification service). A UA can be a \nSIP phone or SIP client software running on a PC or PDA. \n Typically, a collection of SIP user agents belongs to \nan administrative domain, which forms a SIP network. \nEach administrative domain has a SIP proxy, which is \nthe point of contact for UAs within the domain and for \nUAs or SIP proxies outside the domain. All SIP sign-\naling messages within a domain are routed through the \ndomain’s own SIP proxy. 
SIP routing is performed using \n Uniform Resource Identifiers (URIs) for addressing user \nagents. Two types of SIP URIs are supported: the SIP URI \nand the TEL URI. A SIP URI begins with the keyword sip \nor sips , where sips indicates that the SIP signaling must be \nsent over a secure channel, such as TLS. 4 The SIP URI is \nsimilar to an email address and contains a user’s identifier \nand the domain at which the user can be found. For exam-\nple, it could contain a username such as sip:alice@exam-\nple.com , a global E.164 telephone number 5 such as sip:1 \n1-972-310-9882@example.com;user \u0003 phone, or an exten-\nsion such as sip:1234@example.com . The TEL URI only \ncontains an E.164 telephone number and does not contain \na domain name, for example, tel: \u0002 1.408.555.1234 . \n A SIP proxy server is an entity that receives SIP requests, \nperforms various database lookups, and then forwards ( “ prox-\nies ” ) the request to the next-hop proxy server. In this way, \n1 ITU-T Recommendation H.323, Packet-Based Multimedia Commu-\nnications System, www.itu.int/rec/T-REC-H.323-200606-I/en. 1998.\n 2 J. Rosenberg, H. Schulzrinne, G. Camarillo, J. Peterson, R. Sparks, M. \nHandley, and E. Schooler, “ SIP: Session Initiation Protocol, ” IETF \nRFC 3261, June 2002. \n 3 H. Schulzrinne and J. Rosenberg, “ A Comparison of SIP and H.323 \nfor Internet telephony, ” in Proceedings of NOSSDAV , Cambridge, U.K., \nJuly 1998. \n 4 S. Fries and D. Ignjatic, “ On the applicability of various MIKEY \nmodes and extensions, ” IETF draft, March 31, 2008. \n 5 F. Audet, “ The use of the SIPS URI scheme in the Session Initiation \nProtocol (SIP), ” IETF draft, February 23, 2008. \n" }, { "page_number": 585, "text": "PART | IV Privacy and Access Management\n552\nSIP messages are routed to their ultimate destination. Each \nproxy may perform some specialized function, such as \nexternal database lookups, authorization checks, and so on. \nBecause the media does not flow through the SIP proxies —\n but rather only SIP signaling — SIP proxies are no longer \nneeded after the call is established. In many SIP proxy \ndesigns, the proxies are stateless, which allows alternative \nintermediate proxies to resume processing for a failed (or \noverloaded) proxy. One type of SIP proxy called a redirect \nserver receives a SIP request, performs a database query \noperation, and returns the lookup result to the requester \n(which is often another proxy). Another type of SIP proxy \nis a SIP registrar server , which receives and processes reg-\nistration requests. Registration binds a SIP (or TEL) URI \nto the user’s device, which is how SIP messages are routed \nto a user agent. Multiple UAs may register the same URI, \nwhich causes incoming SIP requests to be routed to all of \nthose UAs, a process termed forking , which causes some \ninteresting security concerns. \n The typical SIP transactions can be broadly viewed \nby looking at the typical call flow mechanism in a SIP \nsession setup, as shown in Figure 32.1 . The term SIP \ntrapezoid is often used to describe this message flow \nwhere the SIP signaling is sent to SIP proxies and the \nmedia is sent directly between the two UAs. \n If Alice wants to initiate a session with Bob, she \nsends an initial SIP message ( INVITE ) to the local proxy \nfor her domain (Atlanta.com). Her INVITE has Bob’s \nURI ( bob@biloxi.com ) as the Request-URI, which is \nused to route the message. 
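 On the wire, this initial request is ordinary structured text. A minimal sketch of how such an INVITE might be composed and sent over UDP is shown below; the host names, branch, tag, and Call-ID values are placeholders modeled on the examples in RFC 3261, and the SDP body that a real call setup carries is omitted:

    import socket

    # A deliberately minimal INVITE from Alice to Bob; a real UA adds more
    # headers and an SDP body describing the media it is willing to receive.
    invite = (
        "INVITE sip:bob@biloxi.com SIP/2.0\r\n"
        "Via: SIP/2.0/UDP pc33.atlanta.com;branch=z9hG4bK776asdhds\r\n"
        "Max-Forwards: 70\r\n"
        "To: Bob <sip:bob@biloxi.com>\r\n"
        "From: Alice <sip:alice@atlanta.com>;tag=1928301774\r\n"
        "Call-ID: a84b4c76e66710@pc33.atlanta.com\r\n"
        "CSeq: 1 INVITE\r\n"
        "Contact: <sip:alice@pc33.atlanta.com>\r\n"
        "Content-Length: 0\r\n"
        "\r\n"
    )

    # Hand the request to the local outbound proxy on the default SIP port.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(invite.encode("ascii"), ("proxy.atlanta.com", 5060))
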
Upon receiving the initial \nmessage from Alice, her domain’s proxy sends a provi-\nsional 100 Trying message to Alice, which indicates that \nthe message was received without error from Alice. The \nAtlanta.com proxy looks at the SIP Request-URI in the \nmessage and decides to route the message to the Biloxi.\ncom proxy. The Biloxi.com proxy receives the message \nand routes it to Bob. The Biloxi.com proxy delivers the \n INVITE message to Bob’s SIP phone, to alert Bob of an \nincoming call. Bob’s SIP phone initiates a provisional \n 180 Ringing message back to Alice, which is routed \nall the way back to Alice; this causes Alice’s phone to \ngenerate a ringback tone, audible to Alice. When Bob \nanswers his phone a 200 OK message is sent to his \nproxy, and Bob can start immediately sending media \n( “ Hello? ” ) to Alice. Meanwhile, Bob’s 200 OK is routed \nfrom his proxy to Alice’s proxy and finally to Alice’s \nUA. Alice’s UA responds with an ACK message to Bob \nand then Alice can begin sending media (audio and/or \nvideo) to Bob. Real-time media is almost exclusively \nsent using the Real-time Transport Protocol (RTP). 6 \n2. INVITE\n5. 100 Trying\n7. 180 Ringing\nSIP proxy @\nAtlanta.com\nSIP proxy @\nBiloxi.com\n11. 200 OK\n15. ACK\n18. BYE\n21. 200 OK\n1. INVITE\n3. 100 Trying\n8. 180 Ringing\n12. 200 OK\n13. ACK\n19. BYE\n20. 200 OK\n10. Media from Bob to Alice begins\n14. Media from Alice to Bob begins\nAlice\nBob\n4. INVITE\n6. 180 Ringing\n9. 200 OK\n16. ACK\n17. BYE\n22. 200 OK\n FIGURE 32.1 An example of a SIP session setup. \n 6 H. Schulzrinne, R. Frederick, and V. Jacobson, “ RTP: A transport \nprotocol for real-time applications, ” IETF RFC 1889, Jan. 1996. \n" }, { "page_number": 586, "text": "Chapter | 32 VoIP Security\n553\nAt this point the proxies are no longer involved in the \ncall and hence the media will typically flow directly \nbetween Alice and Bob. That is, the media takes a differ-\nent path through the network than the signaling. Finally, \nwhen either Alice or Bob want to end the session, they \nsend a BYE message to their proxy, which is routed to \nthe other party and is acknowledged. \n One of the challenging tasks faced by the industry today \nis secure deployment of VoIP. During the initial design of \nSIP the focus was more on providing new dynamic and \npowerful services along with simplicity rather than security. \nFor this reason, a lot of effort is under way in the industry \nand among researchers to enhance SIP’s security. The sub-\nsequent sections of this chapter deal with these issues. \n 2. OVERVIEW OF THREATS \n Attacks can be broadly classified as attacks against spe-\ncific users (SIP user agents), large scale (VoIP is part of \nthe network) and against network infrastructure (SIP prox-\nies or other network components and resources necessary \nfor VoIP, such as routers, DNS servers, and bandwidth). 7 \nThis chapter does not cover attacks against infrastruc-\nture; the interested reader is referred to the literature. 8 The \nsubsequent parts of this chapter deal with the attacks tar-\ngeted toward the specific host and issues related to social \nengineering. \n Taxonomy of Threats \n The taxonomy of attacks is shown in Figure 32.2 . \n Reconnaissance of VoIP Networks \n Reconnaissance refers to intelligent gathering or probing \nto assess the vulnerabilities of a network, to successfully \nlaunch a later attack; it includes footprinting the target \n(also known as profiling or information gathering ). 
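 One concrete footprinting step is to read the DNS SRV records that a domain publishes so that callers can locate its SIP servers (per RFC 3263); these records are public by design. A minimal sketch, assuming the third-party dnspython package and a placeholder domain, lists the advertised hosts and ports:

    import dns.resolver  # third-party "dnspython" package

    def sip_servers(domain):
        # Return (record name, host, port) for each advertised SIP service.
        found = []
        for service in ("_sip._udp.", "_sip._tcp.", "_sips._tcp."):
            try:
                answers = dns.resolver.resolve(service + domain, "SRV")
            except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                continue
            for record in answers:
                found.append((service + domain, str(record.target), record.port))
        return found

    for name, host, port in sip_servers("example.com"):  # placeholder domain
        print(name, "->", host, port)
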
\n The two forms of reconnaissance techniques are pas-\nsive and active. Passive reconnaissance attacks include the \ncollection of network information through indirect or direct \nmethods but without probing the target; active reconnais-\nsance attacks involve generating traffic with the intention \nof eliciting responses from the target. Passive reconnais-\nsance techniques would involve searching for publicly \n 7 T. Chen and C. Davis, “ An overview of electronic attacks, ” in \n Information Security and Ethics: Concepts, Methodologies, Tools and \nApplications , H. Nemati (ed.), Idea Group Publishing, to appear 2008. \n 8 A. Chakrabarti and G. Manimaran, “ Internet infrastructure security: \na taxonomy, ” IEEE Network, Vol. 16, pp. 13 – 21, Dec. 2002. \nAttacks\nLarge scale\nAgainst network infrastructure\nAgainst specific\nusers (SIP User\nAgents)\nThreats to\nprivacy\nTraffic\nanalysis\nReconnaissance\n(Passive or\nActive)\n1. TFTP\nconfiguration\nfile sniffing\n2. Call pattern\ntracking\n3. Conversation\neavesdropping\nBuffer\noverflow\nand Cross\nsite scripting\nEaves- \ndropping\nSesssion\nhijacking\nPasswords\nCall Modification\nattacks (MiTM,\nReplay attack,\nRedirection attack,\nSession disruption)\nExploits\nDoS or\nDDoS\nMalformed\nDoS and\nLoad-based\nDoS\nSpam over\nInternet\nTelephony\n(SPIT)\nThreats to\ncontrol\nThreats to\navailability\nSocial\nengineering\nMalware\nSpam\nDNS\nRouting\nPackets\nDoS\n FIGURE 32.2 Taxonomy of threats. \n" }, { "page_number": 587, "text": "PART | IV Privacy and Access Management\n554\navailable SIP URIs in databases provided by VoIP service \nproviders or on Web pages, looking for publicly acces-\nsible SIP proxies or SIP UAs. Examples include dig and \n nslookup . Although passive reconnaissance techniques can \nbe effective, they are time intensive. \n If an attacker can watch SIP signaling, the attacker can \nperform number harvesting. Here, an attacker passively \nmonitors all incoming and outgoing calls to build a data-\nbase of legitimate phone numbers or extensions within an \norganization. This type of database can be used in more \nadvanced VoIP attacks such as signaling manipulation or \nSpam over Internet Telephony (SPIT) attacks. \n Active reconnaissance uses technical tools to discover \ninformation on the hosts that are active on the target net-\nwork. The drawback to active reconnaissance, however, \nis that it can be detected. The two most common active \nreconnaissance attacks are call walking attacks and port-\nscanning attacks. \n Call walking is a type of reconnaissance probe in \nwhich a malicious user initiates sequential calls to a block \nof telephone numbers to identify what assets are avail-\nable for further exploitation. This is a modern version of \n wardialing , common in the 1980s to find modems on the \nPublic Switched Telephone Network (PSTN). Performed \nduring nonbusiness hours, call walking can provide infor-\nmation useful for social engineering, such as voicemail \nannouncements that disclose the called party’s name. \n SIP UAs and proxies listen on UDP/5060 and/or \nTCP/5060, so it can be effective to scan IP addresses \nlooking for such listeners. Once the attacker has accumu-\nlated a list of active IP addresses, he can start to investi-\ngate each address further. The Nmap tool is a robust port \nscanner that is capable of performing a multitude of types \nof scans. 
9 \n Denial of Service \n A denial-of-service (DoS) attack deprives a user or an \norganization of services or resources that are normally \navailable. In SIP, DoS attacks can be classified as mal-\nformed request DoS and load-based DoS. \n Malformed Request DoS \n In this type of DoS attack, the attacker would craft a SIP \nrequest (or response) that exploits the vulnerability in a \nSIP proxy or SIP UA of the target, resulting in a partial \nor complete loss of function. For example, it has also \nbeen found that some user agents allow remote attackers \nto cause a denial of service ( “ 486 Busy ” responses or \ndevice reboot) via a sequence of SIP INVITE transactions \nin which the Request-URI lacks a username. 10 Attackers \nhave also shown that the IP implementations of some hard \nphones are vulnerable to IP fragmentation attacks [CAN-\n2002-0880] and DHCP-based DoS attacks [CAN-2002-\n0835], demonstrating that normal infrastructure protection \n(such as firewalls) is valuable for VoIP equipment. DoS \nattacks can also be initiated against other network services \nsuch as DHCP and DNS, which serve VoIP devices. \n Load-Based DoS \n In this case an attacker directs large volumes of traffic at a \ntarget (or set of targets) and attempts to exhaust resources \nsuch as the CPU processing time, network bandwidth, \nor memory. SIP proxies and session border controllers \n(SBCs) are primary targets for attackers because of their \ncritical role of providing voice service and the complex-\nity of the software running on them. \n A common type of load-based attack is a flooding \nattack. In case of VoIP, we categorize flooding attacks into \nthese types: \n ● Control packet floods \n ● Call data floods \n ● Distributed denial-of-service attack \n Control Packet Floods \n In this case the attacker will flood SIP proxies with SIP \npackets, such as INVITE messages, bogus responses, or the \nlike. The attacker might purposefully craft authenticated \nmessages that fail authentication, to cause the victim to val-\nidate the message. The attacker might spoof the IP address \nof a legitimate sender so that rate limiting the attack also \ncauses rate limiting of the legitimate user as well. \n Call Data Floods \n The attacker will flood the target with RTP packets, with \nor without first establishing a legitimate RTP session, in \nan attempt to exhaust the target’s bandwidth or processing \npower, leading to degradation of VoIP quality for other \nusers on the same network or just for the victim. \n Other common forms of load-based attacks that could \naffect the VoIP system are buffer overflow attacks, TCP \nSYN flood, UDP flood, fragmentation attacks, smurf \nattacks, and general overload attacks. Though VoIP \n 9 http://nmap.org/ . \n 10 The common vulnerability and exposure list for SIP, http://cve.\nmitre.org/cgibin/cvekey.cgi?keyword \u0003 SIP . \n" }, { "page_number": 588, "text": "Chapter | 32 VoIP Security\n555\nequipment needs to protect itself from these attacks, \nthese attacks are not specific to VoIP. \n A SIP proxy can be overloaded with excessive legiti-\nmate traffic — the classic “ Mother’s Day ” problem when \nthe telephone system is most busy. Large-scale disasters \n(e.g., earthquakes) can also cause similar spikes, which \nare not attacks. Thus, even when not under attack, the \nsystem could be under high load. 
If the server or the end \nuser is not fast enough to handle incoming loads, it will \nexperience an outage or misbehave in such a way as to \nbecome ineffective at processing SIP messages. This \ntype of attack is very difficult to detect because it would \nbe difficult to sort the legitimate user from the illegiti-\nmate users who are performing the same type of attack. \n Distributed Denial-of-Service Attack \n Once an attacker has gained control of a large number \nof VoIP-capable hosts and formed a “ zombies ” network \nunder the attacker’s control, the attacker can launch \ninteresting VoIP attacks, as illustrated in Figure 32.3 . \n Each zombie can send up to thousands of messages to \na single location, thereby resulting in a barrage of packets, \nwhich incapacitates the victim’s computer due to resource \nexhaustion. \n Loss of Privacy \n The four major eavesdropping attacks are: \n ● Trivial File Transfer Protocol (TFTP) configuration \nfile sniffing \n ● Traffic analysis \n ● Conversation eavesdropping \n TFTP Configuration File Sniffing \n Most IP phones rely on a TFTP server to download their \nconfiguration file after powering on. The configuration \nfile can sometimes contain passwords that can be used \nto directly connect back to the phone and administer it \nor used to access other services (such as the company \ndirectory). An attacker who is sniffing the file when \nthe phone downloads this configuration file can glean \nthrough these passwords and potentially reconfigure \nand control the IP phone. To thwart this attack vector, \nvendors variously encrypt the configuration file or use \nHTTPS and authentication. \n Traffic Analysis \n Traffic analysis involves determining who is talking to \nwhom, which can be done even when the actual con-\nversation is encrypted and can even be done (to a lesser \ndegree) between organizations. Such information can be \nbeneficial to law enforcement and for criminals commit-\nting corporate espionage and stock fraud. \n Conversation Eavesdropping \n An important threat for VoIP users is eavesdropping \non a conversation. In addition to the obvious problem \nof confidential information being exchanged between \nAttack\nPackets\nAttack Packet\nAttacker\nAttack Packet\nAttack Packet\nAttack Packet\nAttack Packet\n FIGURE 32.3 Distributed denial-of-service attack (DDoS). \n" }, { "page_number": 589, "text": "PART | IV Privacy and Access Management\n556\npeople, eavesdropping is also useful for credit-card fraud \nand identity theft. This is because some phone calls — \nespecially to certain institutions — require users to enter \ncredit-card numbers, PIN codes, or national identity \nnumbers (e.g., Social Security numbers), which are sent \nas Dual-Tone Multi-frequency (DTMF) digits in RTP. \nAn attacker can use tools like Wireshark, Cain & Abel, \nvomit (voice over misconfigured Internet telephones), \nVoIPong, and Oreka to capture RTP packets and extract \nthe conversation or the DTMF digits. 11 \n Man-in-the-Middle Attacks \n The man-in-the-middle attack is a classic form of an \nattack where the attacker has managed to insert himself \nbetween the two hosts. It refers to an attacker who is able \nto read, and modify at will, messages between two par-\nties without either party knowing that the link between \nthem has been compromised. As such, the attacker \nhas the ability to inspect or modify packets exchanged \nbetween two hosts or insert new packets or prevent pack-\nets from being sent to hosts. 
Any device that handles SIP \nmessages as a normal course of its function could be a \nman in the middle: a compromised SIP proxy server or \nsession border controller. If SIP messages are not authen-\nticated, an attacker can also compromise a DNS server or \nuse DNS poisoning techniques to cause SIP messages to \nbe routed to a device under the attacker’s control. \n Replay Attacks \n Replay attacks are often used to impersonate an authorized \nuser. A replay attack is one in which an attacker captures \na valid packet sent between the SIP UAs or proxies and \nresends it at a later time (perhaps a second later, perhaps \ndays later). As an example with classic unauthenticated \ntelnet, an attacker that captures a telnet username and \npassword can replay that same username and password. \nIn SIP, an attacker would capture and replay valid SIP \nrequests. (Capturing and replaying SIP responses is usu-\nally not valuable, as SIP responses are discarded if their \nCall-Id does not match a currently outstanding request, \nwhich is one way SIP protects itself from replay attacks.) \n If Real-time Transport Protocol (RTP) is used with-\nout authenticating Real-time Transport Control Protocol \n(RTCP) packets and without sampling synchronization \nsource (SSRC), an attacker can inject RTCP packets \ninto a multicast group, each with a different SSRC, and \nforce the group size to grow exponentially. A variant on a \nreplay attack is the cut-and-paste attack. In this scenario, \nan attacker copies part of a captured packet with a gen-\nerated packet. For example, a security credential can be \ncopied from one request to another, resulting in a success-\nful authorization without the attacker even discovering the \nuser’s password. \n Impersonation \n Impersonation is described as a user or host pretending to \nbe another user or host, especially one that the intended \nvictim trusts. In case of a phishing attack, the attacker con-\ntinues the deception to make the victim disclose his bank-\ning information, employee credentials, and other sensitive \ninformation. In SIP, the From header is displayed to the \ncalled party, so authentication and authorization of the val-\nues used in the From header are important to prevent imper-\nsonation. Unfortunately, call forwarding in SIP (called \n retargeting ) makes simple validation of the From header \nimpossible. For example, imagine Bob has forwarded his \nphone to Carol and they are in different administrative \ndomains (Bob is at work, Carol is his wife at home). Then \nAlice calls Bob. When Alice’s INVITE is routed to Bob’s \nproxy, her INVITE will be retargeted to Carol’s UA by \nrewriting the Request-URI to point to Carol’s URI. Alice’s \noriginal INVITE is then routed to Carol’s UA. When it \narrives at Carol’s UA, the INVITE needs to indicate that \nthe call is from Alice. The difficulty is that if Carol’s SIP \nproxy were to have performed simplistic validation of the \nFrom in the INVITE when it arrived from Bob’s SIP proxy, \nCarol’s SIP proxy would have rejected it — because it con-\ntained Alice’s From. However, such retargeting is a legiti-\nmate function of SIP networks. \n Redirection Attack \n If compromised by an attacker or via a SIP man-in-the-mid-\ndle attack, the intermediate SIP proxies responsible for SIP \nmessage routing can falsify any response. In this section we \ndescribe how the attacker could use this ability to launch a \nredirection attack. 
If an attacker can fabricate a reply to a \nSIP INVITE, the media session can be established with the \nattacker rather than the intended party. In SIP, a proxy or \nUA can respond to an INVITE request with a 301 Moved \nPermanently or 302 Moved Temporarily Response. The \n302 Response will also include an Expires header line that \ncommunicates how long the redirection should last. The \nattacker can respond with a redirection response, effectively \ndenying service to the called party and possibly tricking the \ncaller into communicating with, or through, a rogue UA. \n Session Disruption \n Session disruption describes any attack that degrades \nor disrupts an existing signaling or media session. For \n 11 D. Endler and M. Collier, Hacking VoIP Exposed: Voice over IP \nSecurity Secrets and Solutions, McGraw-Hill, 2007. \n" }, { "page_number": 590, "text": "Chapter | 32 VoIP Security\n557\nexample, in the case of a SIP scenario, if an attacker is \nable to send failure messages such as BYE and inject \nthem into the signaling path, he can cause the sessions to \nfail when there is no legitimate reason why they should \nnot continue. For this to be successful, the attacker has to \ninclude the Call-Id of an active call in the BYE message. \nAlternatively, if an attacker introduces bogus packets \ninto the media stream, he can disrupt packet sequence, \nimpede media processing, and disrupt a session. Delay \nattacks are those in which an attacker can capture and \nresend RTP SSRC packets out of sequence to a VoIP \nendpoint and force the endpoint to waste its processing \ncycles in resequencing packets and degrade call qual-\nity. An attacker could also disrupt a Voice over Wireless \nLocal Area Network (WLAN) service by disrupting \nIEEE 802.11 WLAN service using radio spectrum jam-\nming or a Wi-Fi Protected Access (WPA) Message \nIntegrity Check (MIC) attack. A wireless access point \nwill disassociate stations when it receives two invalid \nframes within 60 seconds, causing loss of network con-\nnectivity for 60 seconds. A one-minute loss of service is \nhardly tolerable in a voice application. \n Exploits \n Cross-Site Scripting (XSS) attacks are possible with \nVoIP systems because call logs contain header fields, and \nadministrators (and other privileged users) view those call \nlogs. In this attack, specially crafted From: (or other) fields \nare sent by an attacker in a normal SIP message (such \nas an INVITE). Then later, when someone such as the \nadministrator looks at the call logs using a Web browser, \nthe specially crafted From; causes a XSS attack against the \nadministrator’s Web browser, which can then do malicious \nthings with the administrator’s privileges. This can be a \ndamaging attack if the administrator has already logged \ninto other systems (HR databases, the SIP call controller, \nthe firewall) and her Web browser has a valid cookie (or \nactive session in another window) for those other systems. \n Social Engineering \n SPIT (Spam over Internet Telephony) is classified as a \nsocial threat because the callee can treat the call as unso-\nlicited, and the term unsolicited is strictly bound to be \na user-specific preference, which makes it hard for the \nsystem to identify this kind of transaction. SPIT can be \ntelemarketing calls used for guiding callees to a service \ndeployed to sell products. IM spam and presence spam \ncould also be launched via SIP messages. 
IM spam is \nvery similar to email spam; presence spam is defined as \na set of unsolicited presence requests for the presence \npackage . A subtle variation of SPIT called Vishing (VoIP \nphishing) is an attack that aims to collect personal data \nby redirecting users toward an interactive voice responder \nthat could collect personal information such as the PIN \nfor a credit card. From a signaling point of view, unsolic-\nited communication is technically a correct transaction. \n Unfortunately, many of the mechanisms that are effec-\ntive for email spam are ineffective with VoIP, for many \nreasons. First, the email with its entire contents arrives at a \nserver before it is seen by the user. Such a mail server can \ntherefore apply many filtering strategies, such as Bayesian \nfilters, URL filters, and so on. In contrast, in VoIP, human \nvoices are transmitted rather than text. To recognize voices \nand to determine whether the message is spam or not is \nstill a very difficult task for the end system. A recipient of \na call only learns about the subject of the message when \nhe is actually listening to it. Moreover, even if the content \nis stored on a voice mailbox, it is still difficult for today’s \nspeech recognition technologies to understand the context \nof the message enough to decide whether it is spam or not. \n One mechanism to fight automated systems that \ndeliver spam is to challenge such suspected incoming \ncalls with a Turing test. These methods include: \n ● Voice menu. Before a call is put through, a computer \nasks the caller to press certain key combinations, for \nexample “ Press #55. ” \n ● Challenge models. Before a call is put through, a \ncomputer asks the caller to solve a simple equation and \nto type in the answer — for example, “ Divide 10 by 2. ” \n ● Alternative number. Under the main number a \ncomputer announces an alternative number. This \nnumber may even be changed permanently by a call \nmanagement server. All these methods can even be \nenforced by enriching the audio signal with noise or \nmusic. This prevents SPIT bots from using speech \nrecognition. \n Such Turing tests are attractive, since it is often hard \nfor computers to decode audio questions. However, these \npuzzles cannot be made too difficult, because human \nbeings must always be able to solve them. \n One of the solutions to the SPIT problem is the \nwhitelist. In a whitelist, a user explicitly states which per-\nsons are allowed to contact him. A similar technique is \nalso used in Skype; where Alice wants to call Bob, she \nfirst has to add Bob to her contact list and send a contact \nrequest to Bob. Only when Bob has accepted this request \ncan Alice make calls to Bob. \n In general, whitelists have an introduction problem, \nsince it is not possible to receive calls by someone who is \nnot already on the whitelist. Blacklists are the opposite of \nwhitelists but have limited effectiveness at blocking spam \n" }, { "page_number": 591, "text": "PART | IV Privacy and Access Management\n558\nbecause new identities (which are not on the blacklist) \ncan be easily created by anyone, including spammers. \n Authentication mechanisms can be used to provide \nstrong authentication, which is necessary for strong \nwhitelists and reputation systems, which form the basis \nof SPIT prevention. Strong authentication is generally \nPublic Key Infrastructure (PKI) dependent. 
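 Once caller identities can be verified in this way, enforcing a whitelist reduces to a simple membership check when an INVITE arrives. A minimal sketch follows; the URIs are hypothetical, and the caller URI is assumed to have been authenticated upstream (an unauthenticated From header is trivially spoofed and must not be used for this decision):

    # Per-subscriber whitelist of SIP URIs permitted to ring this user.
    whitelist = {
        "sip:bob@biloxi.com",
        "sip:carol@chicago.com",
    }

    def admit_call(authenticated_caller_uri):
        # True: let the call through; False: reject, divert, or challenge it.
        return authenticated_caller_uri in whitelist

    print(admit_call("sip:bob@biloxi.com"))       # True, previously approved contact
    print(admit_call("sip:unknown@example.net"))  # False, screen as potential SPIT
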
Proactive \npublishing of incorrect information, namely SIP \naddresses, is a possible way to fill up spammers ’ data-\nbases with existing contacts. Consent-based communica-\ntion is the other solution. Address obfuscation could be \nan alternative wherein spam bots are unable to identify \nthe SIP URIs. \n 3. SECURITY IN VoIP \n Much existing VoIP equipment is dedicated to VoIP, which \nallows placing such equipment on a separate network. \nThis is typically accomplished with a separate VLAN. \nDepending on the vendor of the equipment, this can be \nautomated using CDP (Cisco Discovery Protocol), LLDP \n(Link Layer Discovery Protocol), or 802.1x, all of which \nwill place equipment into a separate “ voice VLAN ” to \nassist with this separation. This provides a reasonable \nlevel of protection, especially within an enterprise where \nemployees lack much incentive to interfere with the tele-\nphone system. \n Preventative Measures \n However, the use of VLANs is not an ideal solution \nbecause it does not work well with softphones that are not \ndedicated to VoIP, because placing those softphones onto \nthe “ voice VLAN ” destroys the security and management \nadvantage of the separate network. A separate VLAN \ncan also create a false sense of security that only benign \nvoice devices are connected to the VLAN. However, even \nthough 802.1x provides the best security, it is still pos-\nsible for an attacker to gain access to the voice VLAN \n(with a suitable hub between the phone and the switch). \nMechanisms that provide less security, such as CDP or \nLLDP, can be circumvented by software on an infected \ncomputer. Some vendors ’ Ethernet switches can be con-\nfigured to require clients to request inline Ethernet power \nbefore allowing clients to join certain VLANs (such as \nthe voice VLAN), which provides protection from such \ninfected computers. But, as mentioned previously, such \nprotection of the voice VLAN prevents deployment of \nsoftphones, which is a significant reason that most compa-\nnies are interested in deploying VoIP. \n Eavesdropping \n To counter the threat of eavesdropping, the media can be \nencrypted. The method to encrypt RTP traffic is Secure \nRTP (RFC3711), which does not encrypt the IP, UDP, \nor RTP headers but does encrypt the RTP payload (the \n “ voice ” itself). SRTP’s advantage of leaving the RTP \nheaders unencrypted is that header compression pro-\ntocols (e.g., cRTP, 12 ROHC, 13 ) and protocol analyzers \n(e.g., looking for RTP packet loss and (S)RTCP reports) \ncan still function with SRTP-encrypted media. \n The drawback of SRTP is that approximately 13 \nincompatible mechanisms exist to establish the SRTP \nkeys. These mechanisms are at various stages of deploy-\nment, industry acceptance, and standardization. Thus, at \nthis point in time it is unlikely that two SRTP-capable \nsystems from different vendors will have a compatible \nSRTP keying mechanism. A brief overview of some of \nthe more popular keying mechanisms is provided here. \n One of the popular SRTP keying mechanisms, Security \nDescriptions, requires a secure SIP signaling channel \n(SIP over TLS) and discloses the SRTP key to each SIP \nproxy along the call setup path. This means that a passive \nattacker, able to observe the unencrypted SIP signaling and \nthe encrypted SRTP, would be able to eavesdrop on a call. 
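 The exposure is visible in the SDP itself: with Security Descriptions, the SRTP master key and salt travel base64 encoded in an a=crypto attribute of the SIP body. The sketch below uses a fabricated but correctly formed attribute line to show how little effort key recovery takes for anyone who can read the signaling:

    import base64

    # A Security Descriptions attribute as it might appear in an INVITE body;
    # the inline value is made up for illustration (base64 of the 30-byte
    # master key plus salt that AES_CM_128_HMAC_SHA1_80 requires).
    crypto_attr = ("a=crypto:1 AES_CM_128_HMAC_SHA1_80 "
                   "inline:PS1uQCVeeCFCanVmcjkpPywjNWhcYD0mXXtxaVBR|2^20|1:32")

    key_params = crypto_attr.split()[2]                     # "inline:...|2^20|1:32"
    b64_material = key_params.split(":", 1)[1].split("|")[0]
    key_and_salt = base64.b64decode(b64_material)

    master_key, master_salt = key_and_salt[:16], key_and_salt[16:]
    print("SRTP master key :", master_key.hex())
    print("SRTP master salt:", master_salt.hex())
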
\nS/MIME is SIP’s end-to-end security mechanism, which \nSecurity Descriptions could use to its benefit, but S/MIME \nhas not been well deployed and, due to specific features of \nSIP (primarily forking and retargeting), it is unlikely that \nS/MIME will see deployment in the foreseeable future. \n Multimedia Internet Keying (MIKEY) has approxi-\nmately eight incompatible modes defined; these allow estab-\nlishing SRTP keys. 14 Almost all these MIKEY modes are \nmore secure than Security Descriptions because they do not \ncarry the SRTP key directly in the SIP message but rather \nencrypt it with the remote party’s private key or perform a \nDiffie-Hellman exchange. Thus, for most of the MIKEY \nmodes, the attacker would need to actively participate in the \nMIKEY exchange and obtain the encrypted SRTP to listen \nto the media. \n Zimmermann Real-time Transport Protocol (ZRTP) 15 \nis another SRTP key exchange mechanism, which uses a \n 12 T. Koren, S. Casner, J. Geevarghese, B. Thompson, and P. Ruddy, \n “ Enhanced compressed RTP (CRTP) for links with high delay, ” IETF \nRFC 3545, July 2003. \n 13 G. Pelletier and K. Sandlund, “ Robust header compression version 2 \n(ROHCv2): Profi les for RTP, UDP, IP, ESP and UDP-Lite, ” IETF RFC \n5225, April 2008. \n 14 S. Fries and D. Ignjatic, “ On the applicability of various MIKEY \nmodes and extensions, ” IETF draft, March 31, 2008. \n 15 P. Zimmermann, A. Johnston, and J. Callas, “ ZRTP: Media path \nkey agreement for secure RTP, ” IETF draft, July 9, 2007. \n" }, { "page_number": 592, "text": "Chapter | 32 VoIP Security\n559\nDiffie-Hellman exchange to establish the SRTP keys and \ndetects an active attacker by having the users (or their \ncomputers) validate a short authentication string with \neach other. It affords useful security properties, includ-\ning perfect forward secrecy and key continuity (which \nallows the users to verify authentication strings once, \nand never again), and the ability to work through session \nborder controllers. \n In 2006, the IETF decided to reduce the number of \nIETF standard key exchange mechanisms and chose \nDTLS-SRTP. DTLS-SRTP uses Datagram TLS (a mech-\nanism to run TLS over a non-reliable protocol such as \nUDP) over the media path. To detect an active attacker, \nthe TLS certificates exchanged over the media path must \nmatch the signed certificate fingerprints sent over the SIP \nsignaling path. The certificate fingerprints are signed using \nSIP’s identity mechanism. 16 \n A drawback with SRTP is that it is imperative (for \nsome keying mechanisms) or very helpful (with other key-\ning mechanisms) for the SIP user agent to encrypt its SIP \nsignaling traffic with its SIP proxy. The only standard for \nsuch encryption, today, is SIP-over-TLS which runs over \nTCP. To date, many vendors have avoided TCP on their \nSIP proxies because they have found SIP-over-TCP scales \nworse than SIP-over-UDP. It is anticipated that if this can-\nnot be overcome we may see SIP-over-DTLS standard-\nized. Another viable option, especially in some markets, \nis to use IPsec ESP to protect SIP. \n Another drawback of SRTP is that diagnostic and \ntroubleshooting equipment cannot listen to the media \nstream. This may seem obvious, but it can cause diffi-\nculties when technicians need to listen to and diagnose \necho, gain, or other anomalies that cannot be diagnosed \nby examining SRTP headers (which are unencrypted) \nbut can only be diagnosed by listening to the decrypted \naudio itself. 
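 At its core, the DTLS-SRTP defense against an active attacker is a fingerprint comparison: the certificate presented during the media-path handshake must hash to the fingerprint that was carried (and signed) over the SIP path. The sketch below assumes the DER-encoded certificate and the signaled fingerprint string are already in hand; it illustrates only the comparison and is not an implementation of the DTLS-SRTP or SIP Identity specifications.

```python
import hashlib


def sdp_style_fingerprint(der_certificate: bytes) -> str:
    """SHA-256 fingerprint in the colon-separated form used in SDP a=fingerprint lines."""
    digest = hashlib.sha256(der_certificate).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))


def media_path_matches_signaling(handshake_cert: bytes, signaled_fingerprint: str) -> bool:
    """An attacker in the media path must present its own certificate, which will
    not hash to the fingerprint that was protected over the signaling path."""
    return sdp_style_fingerprint(handshake_cert) == signaled_fingerprint.strip().upper()


if __name__ == "__main__":
    genuine_cert = b"\x30\x82 genuine endpoint certificate (placeholder bytes)"
    attacker_cert = b"\x30\x82 attacker certificate (placeholder bytes)"
    signaled = sdp_style_fingerprint(genuine_cert)  # conveyed and signed via SIP
    print(media_path_matches_signaling(genuine_cert, signaled))   # True
    print(media_path_matches_signaling(attacker_cert, signaled))  # False
```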
\n Identity \n As described in the “ Threats ” section, it is important \nto have strong identity assurance. Today there are two \nmechanisms to provide for identity: P-Asserted-Identity, 17 \nwhich is used within a trust domain (e.g., within a com-\npany or between a service provider and its paying cus-\ntomers) and is simply a header inserted into a SIP request, \nand SIP Identity, 18 which is used between trust domains \n(e.g., between two companies) and creates a signature \nover some of the SIP headers and over the SIP body. \n SIP Identity is useful when two organizations con-\nnect via SIP proxies, as was originally envisioned as the \nSIP architecture for intermediaries between two organiza-\ntions — often a SIP service provider. Many of these service \nproviders operate session border controllers (SBCs) rather \nthan SIP proxies, for a variety of reasons. One of the draw-\nbacks of SIP Identity is that an SBC, by its nature, will \nrewrite the SIP body (specifically the m \u0003 /c \u0003 lines), which \ndestroys the original signature. Thus, an SBC would need \nto rewrite the From header and sign the new message with \nthe SBC’s own private key. This effectively creates hop-\nby-hop trust; each SBC that needs to rewrite the message \nin this way is also able to manipulate the SIP headers and \nSIP body in other ways that could be malicious or could \nallow the SBC to eavesdrop on a call. Alternative crypto-\ngraphic identity mechanisms are being pursued, but it is \nnot yet known whether this weakness can be resolved. \n Traffic Analysis \n The most useful protection from traffic analysis is to \nencrypt your SIP traffic. This would require the attacker \nto gain access to your SIP proxy (or its call logs) to deter-\nmine who you called. \n Additionally, your (S)RTP traffic itself could also pro-\nvide useful traffic analysis information. For example, some-\none may learn valuable information just by noticing where \n(S)RTP traffic is being sent (e.g., the company’s in-house \nlawyers are calling an acquisition target several times a \nday). Forcing traffic to be concentrated to a device can help \nprevent this sort of traffic analysis. In some network topol-\nogies this can be achieved using a NAT, and in all cases it \ncan be achieved with an SBC. \n Reactive \n An intrusion prevention system (IPS) is a useful way to \nreact to VoIP attacks against signaling or media. An IPS \nwith generic rules and with VoIP-specific rules can detect \nan attack and block or rate-limit traffic from the offender. \n IPS \n Because SIP is derived from, and related to, many well-\ndeployed and well-understood protocols (HTTP), IDS/\nIPS vendors are able to create products to protect against \n 16 J. Peterson and C. Jennings, “ Enhancements for authenticated \nidentity management in the Session Initiation Protocol (SIP), ” IETF \nRFC 4474, August 2006. \n 17 C. Jennings, J. Peterson and M. Watson, “ Private extensions to the \nSession Initiation Protocol (SIP) for asserted identity within trusted \nnetworks, ” IETF RFC 3325, November 2002. \n 18 J. Peterson and C. Jennings, “ Enhancements for authenticated \nidentity management in the Session Initiation Protocol (SIP), ” IETF \nRFC 4474, August 2006. \n" }, { "page_number": 593, "text": "PART | IV Privacy and Access Management\n560\nSIP quite readily. Often an IDS/IPS function can be built \ninto a SIP proxy, SBC, or firewall, reducing the need for \na separate IDS/IPS appliance. 
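 As a concrete, deliberately simplified illustration of the kind of VoIP-specific rule such an IDS/IPS might apply, the sketch below tracks per-source INVITE rates and the number of distinct identities a single source claims. The thresholds, class, and field names are invented for the example and do not reflect any particular product.

```python
import time
from collections import defaultdict, deque

INVITE_LIMIT = 5      # new INVITEs tolerated per source in the sliding window
WINDOW_SECONDS = 10.0
IDENTITY_LIMIT = 20   # distinct From identities tolerated per source


class SipIds:
    """Minimal anomaly detector: flags INVITE floods and sources that claim an
    implausible number of different identities (a spoofing indicator)."""

    def __init__(self):
        self.invites = defaultdict(deque)   # source IP -> recent INVITE timestamps
        self.identities = defaultdict(set)  # source IP -> claimed From identities

    def observe(self, src_ip: str, method: str, from_identity: str, now=None) -> list:
        now = time.monotonic() if now is None else now
        alerts = []
        self.identities[src_ip].add(from_identity)
        if len(self.identities[src_ip]) > IDENTITY_LIMIT:
            alerts.append(f"{src_ip}: too many distinct identities claimed")
        if method == "INVITE":
            window = self.invites[src_ip]
            window.append(now)
            while window and now - window[0] > WINDOW_SECONDS:
                window.popleft()
            if len(window) > INVITE_LIMIT:
                alerts.append(f"{src_ip}: {len(window)} INVITEs in {WINDOW_SECONDS}s")
        return alerts


if __name__ == "__main__":
    ids = SipIds()
    for second in range(8):  # simulated flood from a single source
        for alert in ids.observe("203.0.113.7", "INVITE", "sip:bot@example.net", now=float(second)):
            print("ALERT:", alert)
```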
An IDS/IPS is marginally \neffective for detecting media attacks, primarily to notice \nan excessive amount of bandwidth is being consumed \nand to throttle it or alarm the event. \n A drawback of IPS is that it can cause false positives \nand deny service to a legitimate endpoint, thus causing a \nDoS in an attempt to prevent a DoS. An attacker, knowl-\nedgeable of the rules or behavior of an IPS, may also be \nable to spoof the identity of a victim (the victim’s source \nIP address or SIP identity) and trigger the IPS/IDS into \nreacting to the attack. Thus, it is important to deny \nattackers that avenue by using standard best practices for \nIP address spoofing 19 and employing strong SIP identity. \nUsing a separate network (VLAN) for VoIP traffic can \nhelp reduce the chance of false positives, as the IDS/IPS \nrules can be more finely tuned for that one application \nrunning on the voice VLAN. \n Rate Limiting \n When suffering from too many SIP requests due to an \nattack, the first thing to consider doing is simple rate \nlimiting. This is often naïvely performed by simply rate \nlimiting the traffic to the SIP proxy and allowing excess \ntraffic to be dropped. Though this does effectively \nreduce the transactions per second the SIP proxy needs \nto perform, it interferes with processing of existing calls \nto a significant degree. For example, a normal call is \nestablished with an INVITE, which is reliably acknowl-\nedged when the call is established. If the simplistic rate \nlimiting were to drop the acknowledgment message, the \nINVITE would be retransmitted, incurring additional \nprocessing while the system is under high load. A sepa-\nrate problem with rate limiting is that both attackers and \nlegitimate users are subject to the rate limiting; it is more \nuseful to discriminate the rate limiting to the users caus-\ning the high rate. This can be done by distributing the \nsimple rate limiting toward the users rather than doing \nthe simple rate limiting near the server. \n On the server, a more intelligent rate limiting is useful. \nThese are usually proprietary rate-limiting schemes, but \nthey attempt to process existing calls before processing new \ncalls. For example, such a scheme would allow process-\ning the acknowledgment message for a previously proc-\nessed INVITE, as described above; process the BYE \nassociated with an active call, to free up resources; or \nprocess high-priority users ’ calls (the vice president’s \noffice is allowed to make calls, but the janitorial staff is \nblocked from making calls). \n By pushing rate limiting toward users, effective use \ncan be made of simple packet-based rate limiting. For \nexample, even a very active call center phone does not \nneed to send 100 Mb of SIP signaling traffic to its SIP \nproxy; even 1 Mb would be an excessive amount of traffic. \nBy deploying simplistic, reasonable rate limiting very near \nthe users, ideally at the Ethernet switch itself, bugs in the \ncall processing application or malicious attacks by unau-\nthorized software can be mitigated. \n A similar situation occurs with the RTP media itself. \nEven high-definition video does not need to send or \nreceive 100 Mb of traffic to another endpoint and can be \nrate-limited based on the applications running on the \ndedicated device. This sort of policing can be effective at \nthe Ethernet switch itself, or in an IDS/IPS (watching for \nexcessive bandwidth), a firewall, or SBC. 
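 The contrast between naïve dropping and the "intelligent" rate limiting described above can be made concrete with a small admission function that always processes in-dialog messages (ACK, BYE, CANCEL) and applies a per-user token bucket only to new INVITEs. The class, methods, and parameters below are invented for illustration; production SIP proxies implement proprietary variants of this idea.

```python
import time

IN_DIALOG_METHODS = {"ACK", "BYE", "CANCEL"}  # dropping these only triggers retransmissions
NEW_CALL_RATE = 2.0                           # tokens (new INVITEs) per user per second
BUCKET_DEPTH = 4.0                            # short burst allowance


class PrioritizedLimiter:
    def __init__(self):
        self.tokens = {}
        self.last_seen = {}

    def admit(self, user: str, method: str, now=None) -> bool:
        """Return True if the proxy should process this request now."""
        now = time.monotonic() if now is None else now
        if method in IN_DIALOG_METHODS:
            return True  # finish existing calls first; this frees resources
        # Refill this user's token bucket, then spend one token per new INVITE.
        elapsed = now - self.last_seen.get(user, now)
        self.last_seen[user] = now
        self.tokens[user] = min(BUCKET_DEPTH,
                                self.tokens.get(user, BUCKET_DEPTH) + elapsed * NEW_CALL_RATE)
        if method == "INVITE":
            if self.tokens[user] >= 1.0:
                self.tokens[user] -= 1.0
                return True
            return False  # excess new calls from this user are rejected or challenged
        return True       # other methods (REGISTER, OPTIONS, ...) pass in this sketch


if __name__ == "__main__":
    limiter = PrioritizedLimiter()
    burst = [limiter.admit("sip:callcenter@example.com", "INVITE", now=0.0) for _ in range(8)]
    print("INVITE burst:", burst)  # only the first few are admitted
    print("BYE still admitted:", limiter.admit("sip:callcenter@example.com", "BYE", now=0.0))
```

 Pushing an even simpler packet-rate cap toward the Ethernet switch, as suggested above, complements this by bounding how much signaling any one endpoint can emit in the first place.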
\n Challenging \n A more sophisticated rate-limiting technique is to pro-\nvide additional challenges to a high-volume user. This \ncould be done when it is suspected that the user is \nsending spam or when the user has initiated too many \ncalls in a certain time period. A simple mechanism is to \ncomplete the call with an interactive voice response sys-\ntem that requests the user to enter some digits ( “ Please \nenter 5, 1, 8 to complete your call ” ). Though this tech-\nnique suffers from some problems (it does not work \nwell for hearing-impaired users or if the caller does not \nunderstand the IVR’s language), it is effective at reduc-\ning the calls per second from both internal and external \ncallers. \n 4. FUTURE TRENDS \n Certain SIP proxies have the ability to forward SIP \nrequests to multiple user agents. These SIP requests \ncan be sent in parallel, in series, or a combination of \nboth series and parallel. Such proxies are called forking \nproxies . \n Forking Problem in SIP \n The forking proxy expects a response from all the user \nagents who received the request; the proxy forwards \n 19 P. Ferguson and D. Senie, “ Network ingress fi ltering: Defeating \ndenial of service attacks which employ IP source address spoofi ng, ” \nIETF 2827, May 2000. \n" }, { "page_number": 594, "text": "Chapter | 32 VoIP Security\n561\nonly the “ best ” final response back to the caller. This \nbehavior causes a situation known as the heterogeneous \nerror response forking problem [HERFP], which is illus-\ntrated in Figure 32.4 . 20 \n Alice initiates an INVITE request that includes a \nbody format that is understood by UAS2 but not UAS1. \nFor example, the UAC might have used a MIME type \nof multipart/mixed with a session description and an \noptional image or sound. As UAC1 does not support \nthis MIME format, it returns a 415 (Unsupported Media \nType) response. Unfortunately the proxy has to wait until \nall the branches generate the final response and then pick \nthe “ best ” response, depending on the criteria mentioned \nin RFC 3261. In many cases the proxy has to wait a long \nenough time that the human operating the UAC aban-\ndons the call. The proxy informs the UAS2 that the call \nhas been canceled, which is acknowledged by UAS2. It \nthen returns the 415 (Unsupported Media Type) back to \nAlice, which could have been repaired by Alice by send-\ning the appropriate session description. \n Security in Peer-to-Peer SIP \n Originally SIP was specified as a client/server proto-\ncol, but recent proposals suggest using SIP in a peer-\nto-peer setting. 21 One of the major reasons for using SIP \nin a peer-to-peer setting is its robustness, since there is \nno centralized control. As defined, “ peer to peer (P2P) \nsystems are distributed systems without any centralized \ncontrol or hierarchical organization. ” This definition \ndefines pure P2P systems. Even though many networks \nare considered P2P, they employ central authority or \nuse supernodes. Early systems used flooding to route \nmessages, which was found to be highly inefficient. To \nimprove lookup time for a search request, structured \noverlay networks have been developed that provide \nload balancing and efficient routing of messages. 
They \n FIGURE 32.4 The heterogeneous error response forking problem (call flow among Alice's UAC, the forking proxy, and two UASs: the forked INVITE draws a 415 (Unsupported Media Type) from one branch and 180 Ringing from the other until the call is canceled and the 415 is finally returned to Alice). \n 20 H. Schulzrinne, D. Oran, and G. Camarillo, "The reason header field for the Session Initiation Protocol (SIP)," IETF RFC 3326, December 2002. \n 21 K. Singh and H. Schulzrinne, "Peer-to-peer Internet telephony using SIP," in 15th International Workshop on Network and Operating Systems Support for Digital Audio and Video, June 2005. \n" }, { "page_number": 595, "text": "PART | IV Privacy and Access Management\n562\nuse distributed hash tables (DHTs) to provide efficient lookup. 22 Examples of structured overlay networks are CAN, Chord, Pastry, and Tapestry. 23, 24, 25, 26 \n We focus on the Chord protocol because it is used as a prototype in most proposals for P2P-SIP. Chord has a ring-based topology in which each node stores at most log(N) entries in its finger table, which is like an application-level routing table, to point to other peers. Every node's IP address is mapped to an m-bit Chord identifier with a predefined hash function h. The same hash function h is also used to map any key of data onto a key ID that forms the distributed hash table. Every node maintains a finger table of log(N) = 6 entries, pointing to the next-hop node location at distance 2^(i-1) (for i = 1, 2, ..., m) from this node's identifier. Each node in the ring is responsible for storing the content of all key IDs that are equal to the identifier of the node's predecessor in the Chord ring. In a Chord ring each node n stores the IP addresses of m successor nodes plus its predecessor in the ring. The m successor entries in the routing table point to nodes at increasing distance from n. Routing is done by forwarding a message to the largest node-ID in the routing table that precedes the key-ID, until the direct successor of a node has an ID higher than the key ID. \n Singh and Schulzrinne envision a hierarchical architecture in which multiple P2P networks are represented by a DNS domain. A global DHT is used for interdomain routing of messages. \n Join/Leave Attack \n Security of structured overlay networks is based on the assumption that joining nodes are assigned node-IDs at random, owing to the random assignment of IP addresses. This could lead to a join/leave attack in which a malicious attacker tries to control O(log N) of the N nodes, since a search touches O(log N) nodes to find the desired key ID. With the adoption of IPv6, the join/leave attack can be more massive because the attacker will have more IP addresses; but even with IPv4, join/leave attacks are possible if IP addresses are assigned dynamically. Node-ID assignment in Chord is inherently deterministic, allowing an attacker to compute node-IDs in advance and launch the attack by spoofing IP addresses. A probable solution would be to authenticate nodes before allowing them to join the overlay, which can involve authenticating the node before assigning the IP address. 
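 Because a Chord node's identifier is a deterministic hash of its IP address, an attacker who can cycle through many addresses can search offline for ones whose identifiers land just after a victim key, which is the essence of the join/leave attack just described. The snippet below is a simplified illustration (the 7-bit ring, the SHA-1 truncation, and the address pool are arbitrary choices made for the example), not the Chord protocol itself.

```python
import hashlib

M = 7           # identifier bits; ring positions 0 .. 2**M - 1
RING = 2 ** M


def chord_id(value: str) -> int:
    """Map an IP address or a resource key onto the identifier ring."""
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % RING


def distance_after(key_id: int, node_id: int) -> int:
    """Clockwise distance from the key to the node; the key is stored on the
    first node whose identifier is at or after the key identifier."""
    return (node_id - key_id) % RING


def best_positions(candidate_ips, target_key: str, count: int):
    """An attacker holding many (dynamically assigned or spoofed) addresses can
    simply pick the ones whose node IDs sit closest after the target key ID."""
    key_id = chord_id(target_key)
    ranked = sorted(candidate_ips, key=lambda ip: distance_after(key_id, chord_id(ip)))
    return key_id, ranked[:count]


if __name__ == "__main__":
    pool = [f"198.51.100.{i}" for i in range(1, 200)]
    key_id, chosen = best_positions(pool, "sip:bob@biloxi.com", 3)
    print("target key ID:", key_id)
    for ip in chosen:
        print(ip, "-> node ID", chord_id(ip))
```

 Requiring nodes to authenticate before joining, or deriving identifiers from something the attacker cannot freely choose, removes exactly this degree of freedom.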
\n Attacks on Overlay Routing \n Any malicious node within the overlay can drop, alter, or \nwrongly forward a message it receives instead of routing \nit according to the overlay protocol. This can result in \nsevere degradation of the overlay’s availability. Therefore \nan adversary can perform one of the following: \n ● Registration attacks . One of the existing challenges \nto P2P-SIP registration is to provide confidentiality \nand message integrity to registration messages. \n ● Man-in-the-middle attacks. Let’s consider the case \nwhere a node with ID 80 and a node with ID 109 \nconspire to form a man-in-the-middle attack, as \nshown in Figure 32.5 . The honest node responsible \nfor the key is node 180. Let’s assume that a recursive \napproach is used for finding the desired key ID, \nwherein each routing node would send the request \nmessage to the appropriate node-ID until it reaches \nthe node-ID responsible for the desired key-ID. The \nsource node (node 30) will not have any control \nnor can it trace the request packet as it traverses \nthrough the Chord ring. Therefore node 32 will \nestablish a dialog with node 119, and node 80 would \nimpersonate node 32 and establish a dialog with \nnode 108. This attack can be detected if an iterative \nrouting mechanism is used wherein a source node \nchecks whether the hash value is closer to the key-ID \nthan the node-ID it received on the previous hop. 27 \nTherefore the source node (32) would get suspicious \nif node 80 redirected it directly to node 119, because \nit assumes that there exists a node with ID lower than \nKey ID 107. \n ● Attacks on bootstrapping nodes. Any node wanting to \njoin the overlay needs to be bootstrapped with a static \nnode or cached node or discover the bootstrap node \n 22 H. Balakrishnan, M. FransKaashoek, D. Karger, R. Morris, and \nI. Stoica, “ Looking up data in P2P systems, ” Communications of the \nACM , Vol. 46, No. 2, February 2003. \n 23 S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker, \n “ A scalable content-addressable network, ” in Proceedings of ACM \nSIGCOMM 2001. \n 24 I. Stoica, R. Morris, D. Karger, M. F Kaashoek, and H. \nBalakrishnan, “ Chord: A scalable peer-to-peer lookup service for \ninternet applications, ” in Proceedings of the 2001 Conference on \nApplications, Technologies, Architectures, and Protocols for Computer \nCommunication, pp. 149 – 160, 2001. \n 25 A. Rowstron and P. Druschel, “ Pastry: Scalable, decentralized \nobject location and routing for large-scale peer-to-peer systems, ” in \n IFIP/ACM International Conference on Distributed Systems Platforms \n(Middleware), Heidelberg, Germany, pp. 329 – 350, 2001. \n 26 B. Y. Zhao, L. Huang, J. Stribling, S. C. Rhea, A. D. Joseph, and J. \nD. Kubiatowicz, “ Tapestry: A resilient global-scale overlay for service \ndeployment, ” IEEE Journal on Selected Areas in Communications, \nVol. 22, No. 1, pp. 41 – 53, Jan. 2004. \n 27 M. Srivatsa and L. Liu, “ Vulnerabilities and security threats in \nstructured overlay networks: A quantitative analysis, ” in Proceedings \nof 20th Annual Computer Science Application Conference, Tucson, \npp. 251 – 261, Dec. 6 – 10, 2004. \n" }, { "page_number": 596, "text": "Chapter | 32 VoIP Security\n563\nthrough broadcast mechanisms (e.g., SIP-multicast). \nIn any case, if an adversary gains access to the \nbootstrap node, the joining node can easily be \nattacked. Securing the bootstrap node is still an open \nquestion. \n ● Duplicate identity attacks. 
Preventing duplicate identities is one of the open problems, whereby a hash of two IP addresses can lead to the same node-ID. The Singh and Schulzrinne approach reduces this problem somewhat by using a P2P network for each domain. Further, they suggest email-based authentication in which a joining node would receive a password via email and then use the password to authenticate itself to the network. \n ● Free riding. In a P2P system there is a risk of free riding, in which nodes use services but fail to provide services to the network. Nodes use the overlay for registration and location service but drop other messages, which could eventually result in a reduction of the overlay's availability. \n The other major challenges that are presumably even harder to solve for P2P-SIP are as follows: \n ● Prioritizing signaling for emergency calls in an overlay network and ascertaining the physical location of users in real time may be very difficult. \n ● With the highly dynamic nature of P2P systems, there is no predefined path for signaling traffic, and therefore it is impossible to implement a surveillance system for law enforcement agencies with P2P-SIP. \n End-to-End Identity with SBCs \n As discussed earlier, 28 End-to-End Identity with SBCs provides identity for SIP requests by signing certain SIP headers and the SIP body (which typically contains the Session Description Protocol (SDP)). This identity is destroyed if the SIP request travels through an SBC, because the SBC has to rewrite the SDP as part of the SBC's function (to force media to travel through the SBC). Today, nearly all companies that provide SIP trunking (Internet telephony service providers, ITSPs) utilize SBCs. In order to work with 29 those SBCs, one would have to validate incoming requests (which is new), modify the SDP and create a new identity (which they are doing today), and sign the new identity (which is new). As of this writing, it appears unlikely that ITSPs will have any reason to perform these new functions. \n A related problem is that, even if we had end-to-end identity, it is impossible to determine whether a certain identity can rightfully claim a certain E.164 phone number in the From: header. Unlike domain names, which can have their ownership validated (the way email address validation is performed on myriad Web sites today), there is no de facto or written standard to determine whether an identity can rightfully claim to "own" a certain E.164. \n FIGURE 32.5 Man-in-the-middle attack (a Chord ring with node IDs h(IP address) = 30, 32, 64, 105, 108, and 119; key IDs h(alice@atlanta.com) = 19 and h(bob@biloxi.com) = 107; node 119 is responsible for key 107, and numbered steps (1)-(6) trace the redirected lookup). \n 28 J. Peterson and C. Jennings, "Enhancements for authenticated identity management in the Session Initiation Protocol (SIP)," IETF RFC 4474, August 2006. \n 29 J. Peterson and C. Jennings, "Enhancements for authenticated identity management in the Session Initiation Protocol (SIP)," IETF RFC 4474, August 2006. \n" }, { "page_number": 597, "text": "PART | IV Privacy and Access Management\n564\n It is anticipated that as SIP trunking becomes more commonplace, SIP spam will grow with it, and the growth of SIP spam will create the necessary impetus for the industry to solve these interrelated problems. 
Solving the \nend-to-end identity problem and the problem of attesting \nE.164 ownership would allow domains to immediately \ncreate meaningful whitelists. Over time these whitelists \ncould be shared among SIP networks, end users, and oth-\ners, eventually creating a reputation system. But as long as \nspammers are able to impersonate legitimate users, even \ncreating a whitelist is fraught with the risk of a spammer \nguessing the contents of that whitelist (e.g., your bank, \nfamily member, or employer). \n 5. CONCLUSION \n With today’s dedicated VoIP handsets, a separate voice \nVLAN provides a reasonable amount of security. Going for-\nward, as nondedicated devices become more commonplace, \nmore rigorous security mechanisms will gain importance. \nThis will begin with encrypted signaling and encrypted \nmedia and will evolve to include spam protection and \nenhancements to SIP to provide cryptographic assurance of \nSIP call and message routing. \n As VoIP continues to grow, VoIP security solutions will \nhave to consider consumer, enterprise and policy concerns. \nSome VoIP applications, commonly installed on PCs may \nbe against corporate security policies (e.g., Skype). One \nof the biggest challenges with enabling encryption is with \nmaintaining a public key infrastructure and the complexities \ninvolved in distributing public key certificates that would \nspan to end users 30 and key synchronization between vari-\nous devices belonging to the same end user agent. 31 \n Using IPsec for VoIP tunneling across the Internet \nis another option; however, it is not without substantial \noverhead. 32 Therefore end-to-end mechanisms such as \nSRTP are specified for encrypting media and establish-\ning session keys. \n VoIP network designers should take extra care in \ndesigning intrusion detection systems that are able to \nidentify never-before-seen activities and react according \nto the organization’s policy. They should follow industry \nbest practices for securing endpoint devices and servers. \nCurrent softphones and consumer-priced hardphones \nuse the “ haste-to-market ” implementation approach and \ntherefore become vulnerable to VoIP attacks. Therefore \nVoIP network administrators may evaluate VoIP endpoint \ntechnology, identify devices or software that will meet \nbusiness needs and can be secured, and make these the \ncorporate standards. With P2P-SIP, the lack of central \nauthority makes authentication of users and nodes diffi-\ncult. Providing central authority would dampen the spirit \nof P2P-SIP and would conflict the inherent features of \ndistributed networks. A decentralized solution such as the \nreputation management system, where the trust values are \nassigned to nodes in the network based on prior behavior, \nwould lead to a weak form of authentication because the \ncredibility used to distribute trust values could vary in a \ndecentralized system. Reputation management systems \nwere more focused on file-sharing applications and have \nnot yet been applied to P2P-SIP. \n 30 D. Berbecaru, A. Lioy, and M. Marian, “ On the complexity of pub-\nlic key certifi cate validation, ” in Proceedings of the 4th International \nConference on Information Security, Lecture Notes in Computer \nScience , Springer-Verlag, Vol. 2200, pp. 183 – 203, 2001. \n 31 C. Jennings and J. Fischl, “ Certifi cate management service for the \nSession Initiation Protocol (SIP), ” IETF draft April 5, 2008. \n 32 Z. Anwar, W. Yurcik, R. Johnson, M. Hafi z, and R. 
Campbell, \n “ Multiple design patterns for voice over IP (VoIP) security, ” IPCCC \n2006, pp. 485 – 492, April 10 – 12, 2006. \n" }, { "page_number": 598, "text": " Storage Security \n Part V \n CHAPTER 33 SAN Security \n John McGowan, Jeffrey Bardin and John McDonald \n CHAPTER 34 Storage Area Networking Security Devices \n Robert Rounsavall \n CHAPTER 35 Risk Management \n Sokratis K. Katsikas \n" }, { "page_number": 599, "text": "This page intentionally left blank\n" }, { "page_number": 600, "text": "567\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n SAN Security \n John McGowan \n EMC Corporation \n Jeffrey Bardin \n Independent consultant \n John McDonald \n EMC Corporation \n Chapter 33 \n As with any IT subsystem, implementing the appropriate \nlevel of security for storage area networks (SANs) depends \non many factors. The resources expended on protecting the \nSAN should reflect the value of the information stored on \nthe SAN using a risk-based approach. A full assessment \nand classification of the data including threats, vulnerabili-\nties, existing controls, and potential impact should the loss, \ndisclosure, modification, interruption, and/or destruction \nof the data occur should be performed prior to configura-\ntion of the SAN. Anytime you consider security prior to \nactual build-out of a system or device, your expenditures \nare lower than attempting to bolt on the security after the \nfact. There are a number of inexpensive steps that can be \ntaken to ensure that security is appropriate for the classifi-\ncation of data stored in the SAN. \n As the use of SANs increases, the amount of data \nbeing stored increases exponentially, making the SAN a \ntarget for hackers, criminals, and disgruntled employees. \nTo effectively protect a SAN, it is important to under-\nstand what actions increase security and what impact \nthese actions have on the performance and usability of \nthe environment. Ensuring a balance among protection \ncapability, cost, performance, and operational consid-\nerations must be at the top of your list when applying \ncontrols to your SAN environment. \n One thing to consider is that the most probable ave-\nnue of attack in a SAN is through the hosts connected to \nthe SAN. There are potentially thousands of host, appli-\ncation, and operating system-specific security considera-\ntions that are beyond the scope of this chapter but should \nbe followed as your systems and application administra-\ntors properly configure their owned devices. \n Information security, that aspect of security that \nseeks to protect data confidentiality, data integrity, and \naccess to the data, is an established commercial sector \nwith a wide variety of vendors marketing mature prod-\nucts and technologies, such as VPNs, firewalls, antivirus, \nand content management. Recently there has been a sub-\ntle development in security. Organizations are expanding \ntheir security perspectives to secure not only end-user \ndata access and the perimeter of the organization but \nalso the data within the datacenter. Several factors drive \nthese recent developments. The continuing expansion of \nthe network and the continued shrinking of the perimeter \nexpose datacenter resources and the storage infrastruc-\nture to new vulnerabilities. Data aggregation increases \nthe impact of a security breach. 
IP-based storage net-\nworking potentially exposes storage resources to tradi-\ntional network vulnerabilities. Recently the delineation \nbetween a back-end datacenter and front-end network \nperimeter is less clear. Storage resources are potentially \nbecoming exposed to unauthorized users inside and out-\nside the enterprise. In addition, as the plethora of com-\npliance regulations continues to expand and become \nmore complicated, IT managers are faced with address-\ning the threat of security breaches from both within and \noutside the organization. Complex international regula-\ntions require a greater focus on protecting not only the \nnetwork but the data itself. This chapter describes best \npractices for enhancing and applying security of SANs. \n 1. ORGANIZATIONAL STRUCTURE \n Every company has its own organizational structures and \nsecurity requirements. These are typically driven by the \ntype of business, types of regulations and statutes that \nfocus corporate compliance, and the type of data stored in \nthe SAN. All factors should be evaluated when developing \n" }, { "page_number": 601, "text": "PART | V Storage Security\n568\na security policy that is appropriate for your environment. \nIt is wise to incorporate existing security policies such as \nacceptable use policies (AUPs), data classification, and \nintellectual property policies along playbooks or standard \noperating procedures (SOPs) describing how the data is \nstored and managed. \n As with best practice, implementation can lead to \ntradeoffs. Making a SAN more secure may result in addi-\ntional management overhead or a reduction in ease-of-use \ncapabilities or the introduction of ease-of-use capabilities \nthat reduce the overall SAN security posture. The use of \nencryption could create an unacceptable performance \nhit if not applied properly. You may even find that SAN \nsecurity best practices conflict with other IT policies. In \nsome instances, functions required to implement a recom-\nmendation may not be available on a certain SAN. Other \ncompensating controls need to be considered such as a \nprocess, policy, or triggered script that assists in imple-\nmenting the control. Your implementation of security \ncontrols should be based on risk as defined in an assess-\nment process during SAN deployment. \n AAA \n Authentication, authorization, and accounting (AAA) is a \nterm for a framework for intelligently controlling access \nto computer resources, enforcing policies, auditing usage, \nand providing the information necessary to bill for serv-\nices. These combined processes are important for effec-\ntive network management and security. \n Authentication provides a way of identifying a user, \ntypically by having the user enter a valid username and \npassword before granting access. The process of authenti-\ncation is based on each user having a unique set of criteria \nfor gaining access. The AAA server compares the user’s \ncredentials with credentials stored in a database. If the cre-\ndentials match, the user is granted access to the resource. \n Authorization is the process of enforcing poli-\ncies: determining what types or qualities of activities, \nresources, or services a user is permitted. For example, \nafter logging into a system, the user may try to issue com-\nmands. The authorization process determines whether the \nuser has the authority to issue such commands. Once you \nhave authenticated a user, she may be authorized for mul-\ntiple types of access or activity. 
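 The authentication and authorization steps described above reduce to two separate questions: are these credentials valid, and is this user allowed to perform this action. The following minimal sketch uses an invented in-memory credential and permission store purely for illustration; a real AAA deployment would consult a RADIUS or directory backend and would never keep plaintext secrets.

```python
import hashlib
import hmac
import os


def derive(password: str, salt: bytes) -> bytes:
    """Store only a salted, slow hash of the password, never the password itself."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)


# Hypothetical stand-in for the AAA server's backing store.
SALT = os.urandom(16)
CREDENTIALS = {"storage_admin": derive("correct horse battery staple", SALT)}
PERMISSIONS = {"storage_admin": {"create_lun", "map_lun", "view_logs"}}


def authenticate(user: str, password: str) -> bool:
    """Authentication: compare the presented credentials with the stored record."""
    stored = CREDENTIALS.get(user)
    return stored is not None and hmac.compare_digest(stored, derive(password, SALT))


def authorize(user: str, action: str) -> bool:
    """Authorization: a successful login does not imply permission for every command."""
    return action in PERMISSIONS.get(user, set())


if __name__ == "__main__":
    if authenticate("storage_admin", "correct horse battery staple"):
        for action in ("map_lun", "delete_array"):
            verdict = "allowed" if authorize("storage_admin", action) else "denied"
            print(f"{action}: {verdict}")  # an accounting step would also log this decision
```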
\n The final process in the AAA framework is account-\ning, which measures the resources a user consumes dur-\ning access. This can include the amount of system time \nor the amount of data a user has sent and/or received \nduring a session. Accounting is accomplished by logging \nsession statistics and usage information and is used for \nauthorization control, billing, trend analysis, resource \nutilization, and capacity planning activities. \n Authentication, authorization, and accounting servi-\nces are often provided by a dedicated AAA server, a pro-\ngram that performs these functions. A common standard \nby which network access servers interface with the AAA \nserver is the Remote Authentication Dial-In User Service \n(RADIUS). \n Assessment and Design \n The first step in developing adequate datacenter controls is \nto know what the controls need to address — for example, \nthe vulnerabilities that can be exploited. Some threats may \nseem very difficult, and hence unlikely to be exploited, \nbut it is an essential piece of the process to understand \nthem so that you can demonstrate that they are either miti-\ngated or that it is not commercially reasonable to fix the \nproblem. This step is essential since it is not usually the \nvulnerabilities that you already know about that will be \nexploited. Once you have identified the gaps in the data-\ncenter, you can communicate the vulnerabilities to your \ndata by implementing a comprehensive SAN security pro-\ngram. The first step is to develop a storage security stand-\nard that outlines the architecture, workflow process, and \ntechnologies to leverage when deploying enterprisewide \nstorage networks. Today SAN security solutions are \navailable to meet some needs of every individual topol-\nogy, incorporating a wide variety of vendor products and \narchitectural needs. It is essential that corporations aggres-\nsively take on the challenge of integrating these new secu-\nrity features and technologies to interoperate with existing \nsecurity technologies while complying with and redefin-\ning existing standards, policies, and processes. \n The remainder of this section details the tasks within \nthe planning and design phase. Included in this section \nare architecture planning and design details, considera-\ntions for design, best practices, and notes on data collec-\ntion and documentation. \n Review Topology and Architecture Options \n Fabric security augments overall application security. It \nis not sufficient on its own; host and disk security are \nalso required. You should consider each portion of the \ncustomer’s SAN when determining the correct security \nconfiguration. Review the current security infrastructure \nand discuss present and future needs. Listed here are the \nmost common discussion segments: \n ● SAN management access. Secure access to manage-\nment services. \n ● Fabric access. Secure device access to fabric service. \n" }, { "page_number": 602, "text": "Chapter | 33 SAN Security\n569\n ● Target access. Secure access to targets and Logical \nUnit Numbers (LUNs). \n ● SAN protocol. Secure switch-to-switch \ncommunication protocols. \n ● IP storage access. Secure Fibre Channel over TCP/\nIP (FCIP) and Internet Small Computer System \nInterface (iSCSI) services. \n ● Data integrity and secrecy. Encryption of data both \nin transit and at rest. 
\n Additional subjects to include in a networked storage \nstrategy involve: \n ● Securing storage networking ports and devices \n ● Securing transmission and ISL interfaces \n ● Securing management tools and interfaces (Simple \nNetwork Management Protocol (SNMP), Telnet, IP \ninterfaces) \n ● Securing storage resources and volumes \n ● File system permissions for Network-attached Storage \n(NAS) file access using Network File System (NFS), \nand Common Internet File System (CIFS) \n ● Operating system access control and management \ninterfaces \n ● Control and monitor root and other supervisory access \n ● Physical and logical security and protection \n ● Virus protection and detection on management \nservers and PC \n ● Disabling SNMP management interfaces not used or \nneeded \n ● Restricting use and access to Telnet and FTP for \ncomponents \n There are a several major areas of focus for securing \nstorage networks. These include securing the fabric and its \naccess, securing the data and where it is stored, securing \nthe components, securing the transports, and securing the \nmanagement tools and interfaces. This part of the chap-\nter describes the following components: \n ● Protection rings (see sidebar, “ Security and \nProtection ” ) \n ● Restricting access to storage \n ● Access control lists (ACLs) and policies \n ● Port blocks and port prohibits \n ● Zoning and isolating resources \n Security and Protection \n Establish an overall security perimeter that is both physical \nand logical to restrict access to components and applications. \nPhysical security includes placing equipment in locked cabi-\nnets and facilities that have access monitoring capabilities. \nLogical security involves securing those applications, servers, \nand other interfaces, including management consoles and \nmaintenance ports, from unauthorized access. Also, consider \nwho has access to backup and removable media and where \nthe company stores them as part of an overall security perim-\neter and defense. \n Secure your networks, including Local Area Networks \n(LANs), Metropolitan Area Networks (MANs), and Wide \nArea Networks (WANs), with various subnets and segments \nincluding Internet, intranet, and extranets with firewalls and \nDMZ access where applicable. \n Secure your servers so that if someone attempts to access \nthem using other applications, using public or private net-\nworks, or simply by walking up to a workstation or console, \nthe servers are protected. Server protection is important; it is \none of the most common points that an attacker will target. \nMake sure that the server has adequate protection on user-\nnames, passwords, files, and application access permissions. \nControl and monitor who has access to root and other super-\nvisory modes of access, as well as who can install and con-\nfigure software. \n Protect your storage network interfaces including Fibre \nChannel, Ethernet for iSCSI, and NAS as well as management \nports, interfaces, and tools. Tools including zoning, access \ncontrol lists, binding, segmentation, authorizing, and authen-\ntication should be deployed within the storage network. \n Protect your storage subsystems using zoning and LUN/\nvolume mapping and masking. Your last line of defense \nshould be the storage system itself, so make sure it is ade-\nquately protected. 
\n Protect wide area interfaces when using Internet File \nChannel Protocol (iFCP), Fibre Channel over Internet Protocol \n(FCIP), Internet Small Computer System Interface (iSCSI), \nSynchronous Optical Networking/Synchrous Digital Hierarchy \n(SONET/SDH), Asynchrous Transfer Mode (ATM), and other \nmeans for moving data between locations. Set up VPNs to \nhelp guard data while it’s in transit, along with compression \nand encryption. Maintain access control and audit trails of the \nmanagement interface tools, and make sure that you change \ntheir passwords. \n Restricting Access to Storage \n Securing a storage network involves making sure that \nyou protect the SAN itself as well as the storage. LUN \nmapping works by creating an access table on the storage \ndevice or host servers (persistent binding) that determines \nwhat servers, using World Wide Node Names (WWNN) \nor World Wide Port Names (WWPN), can access (read, \nread/write, etc.) a specific volume or LUN. Servers that \n" }, { "page_number": 603, "text": "PART | V Storage Security\n570\ndo not have access to the specific LUN receive an I/O \nreject error or may not see the storage at all. Storage-\nbased security is the last line of defense when it comes to \ncontrolling access to a storage resource LUN. \n Device masking hides the existence of a storage \ndevice from all but a desired set of host connections. \nBecause Fibre Channel fabrics support zoning based on \nindividual devices (WWN Zoning) it is possible to per-\nform device masking in the fabric (King, 2001). \n 2. ACCESS CONTROL LISTS (ACL) AND \nPOLICIES \n Authentication involves verifying the identity of people \nand devices that are attempting to gain authorization to \nstorage network resources. Authentication involves use of \na server such as a remote access dial-up server (RADIUS) \ncommonly used in network environments to verify iden-\ntity credentials. Access control lists implement authori-\nzation to determine who and what can have access to \nstorage network resources. When looking at controlling \naccess and isolating traffic within a single switch or direc-\ntor as well as in a single fabric of two or more switches \nall connected together, use the following techniques: \n ● Fabric, switch, and port binding with policies \nand ACLs \n ● Fabric and device zoning to control access \n ● Networking segmentation (traffic isolation) \n ● Port isolation (port blocks, prohibits, port isolation, \nand disablement) \n ● Partitioning and segmentation (logical domains, \nVirtual Storage Area Network (VSAN), Logical \nStorage Area Network (LSAN), virtual switches) \n Data Integrity Field (DIF) \n DIF provides a standard data-checking mechanism to \nmonitor the integrity of data. DIF sends a block of infor-\nmation with integrity checks to an HBA. The HBA vali-\ndates the data and sends the data block with its integrity \ncheck across the Fibre Channel fabric to the storage \narray. The storage array in turn validates the metadata \nand writes the data to Redundant Array of Independent \nDisks (RAID) memory. The array then sends the block of \ndata to the disk, which validates the information before \nwriting it to disk. DIF pinpoints where in the process of \nwriting data to disk that the corruption occurred. \n Diffie-Hellman: Challenge Handshake \nAuthentication Protocol (DH-CHAP) \n DH-CHAP is a forthcoming Internet Standard for the \nauthentication of devices connecting to a Fibre Channel \nswitch. 
DH-CHAP is a secure key-exchange authenti-\ncation protocol that supports both switch-to-switch and \nhost-to-switch authentication. DH-CHAP supports MD-5 \nand SHA-1 algorithm-based authentication. \n Fibre-Channel Security Protocol (FC-SP) \n FC-SP is a security framework that includes protocols to \nenhance Fibre Channel security in several areas, including \nauthentication of Fibre Channel devices, cryptographi-\ncally secure key exchange, and cryptographically secure \ncommunication between Fibre Channel devices. FC-SP is \nfocused on protecting data in transit throughout the Fibre \nChannel network. FC-SP does not address the security of \ndata which is stored on the Fibre Channel network. \n Fibre-Channel \nAuthentication \nProtocol \n(FCAP) \n FCAP is an optional authentication mechanism employed \nbetween any two devices or entities on a Fibre Channel \nnetwork using certificates or optional keys. \n Fibre-Channel Password Authentication Protocol \n(FCPAP) FCPAP is an optional password based authen-\ntication and key exchange protocol that is utilized in \nFibre Channel networks. FCPAP is used to mutually \nauthenticate Fibre Channel ports to each other. \n Switch Link Authentication Protocol (SLAP) \n SLAP is an authentication method for Fibre Channel \nswitches that utilizes digital certificates to authenticate \nswitch ports. SLAP was designed to prevent the unauthor-\nized addition of switches into a Fibre Channel network. \n Port Blocks and Port Prohibits \n You can use zoning to isolate ports — for example, Fiber \nConnectivity (FICON) ports from open systems ports, and \ntraffic. Other capabilities that exist on switches and direc-\ntors that support Enterprise Systems Connection (ESCON) \nor FICON are port blocks and port prohibits. Port blocks \nand port prohibits are another approach independent of the \nupper-layer fabric and name server for protecting ESCON \nand FICON ports. Unlike fabric zoning that can span mul-\ntiple switches and directors in a fabric, port blocks and \nprohibits are specific to an individual director. \n Zoning and Isolating Resources \n While a Fibre Channel-based storage network can theo-\nretically have approximately 16 million addresses for serv-\ners, devices, and switches, the reality is a bit lower. Zones \ncan be unique with devices isolated from each, or they \ncan overlap with devices existing in different overlapping \nzones. You can accomplish port and fabric security using \n" }, { "page_number": 604, "text": "Chapter | 33 SAN Security\n571\nzoning combinations, including WWNN Soft zoning, \nWWPN Soft zoning as part of the T11 FC-SW-2 standard, \nalong with hardware enforced port zoning. \n 3. PHYSICAL ACCESS \n Physical access is arguably one of the most criti-\ncal aspects of Fibre Channel SAN security. The Fibre \nChannel protocol is not designed to be routed (although \nSANs can be bridged), which means that what happens \nin the SAN should stay in the SAN. By implementing \nthe rest of the best practices discussed in this chapter, \nstrong physical access controls can almost ensure that \nany attempts to access the SAN from the outside will \nbe thwarted, since an attacker would need to physically \nconnect a fiber optic cable to a switch or array to gain \naccess. \n Physical controls should also include clear labeling \nfor all cables and components. In performing a physical \naudit of the infrastructure, clear and accurate labeling \nwill ensure that any changes are immediately detectable. 
\n Controlling physical access to any storage medium is \nalso critical. This applies not only to the disk drives in a \nSAN storage system but to any copies of the data, such \nas backup tapes. The best security management system \ncan be easily defeated if an attacker can walk into a \nlightly controlled office area and walk out with backup \ntapes that were stored there. \n 4. CHANGE MANAGEMENT \n A documented and enforced process should be in place \nfor any actions that may change the configuration of any \ncomponent connected to the SAN. Changes should be \nreviewed beforehand to ensure that they will not com-\npromise security, and any unapproved changes uncov-\nered during a regular audit (discussed shortly) should be \nthoroughly investigated. \n 5. PASSWORD POLICIES \n Normal password-related best practices, such as regu-\nlar changing of passwords and the use of hard-to-guess \npasswords, should be implemented for all components \non the SAN. This includes the storage systems, switches \nand hubs, hosts, and so on. \n The generally accepted best practice for the industry \nis that passwords should be changed every three to six \nmonths, and passwords should be a minimum of eight \ncharacters long and include at least one number or sym-\nbol. However, customers should follow the policies that \nhave been established to address their particular business \nrequirements. \n 6. DEFENSE IN DEPTH \n No single method of protecting a SAN can be considered \nadequate in the light of constantly evolving threats. The \nbest possible approach is to implement a policy of layers \nof protection for every aspect of the environment, which \nforces an attacker to defeat multiple mechanisms to \nachieve their goal, increasing the probability of an attack \nbeing detected and defeated. The LAN that interconnects \nthe SAN component’s management interfaces, which is \nthe most likely avenue of attack, should be protected by \nmultiple layers of security measures, such as firewalls, \nsecurity monitoring tools, and so forth. The first phase \nof many attempts to attack the SAN will usually come \nthrough these LAN (Ethernet) management connections. \n 7. VENDOR SECURITY REVIEW \n In considering new hardware and software that will be \ninstalled in or interact with the SAN, a security review \nshould be performed with customer security personnel, \ncustomer technical/administrative personnel, and vendor \ntechnical personnel to gain a complete understanding of \nthe security capabilities of the product and how they can \nbe integrated into the existing security framework. \n Failure to implement any of these basic security \npractices can increase the possibility that a SAN may \ncome under attack and that any such an attack may suc-\nceed in impacting the ability of the SAN to perform its \nfunctions. For additional information on IT security best \npractices, refer to the SANS Institute’s Resources Web \nsite ( www.sans.org/resources/ ) and to the SAN Security \nsite ( www.sansecurity.com ). \n 8. DATA CLASSIFICATION \n To develop and implement security policies and tools that \neffectively protect the data in a SAN at an appropriate \nlevel, it is important to understand the value of the data \nto the organization. Though implementing a one-size-\nfits-all approach to protecting all the data can simplify \nsecurity management, such an approach may impose \nexcessive costs for certain types of data while leaving \nother data underprotected. 
For example, encrypting data \non the host can provide a significant level of protection; \nhowever, such encryption can impose significant over-\nhead on the host processing, reducing performance for \nthe entire environment. \n" }, { "page_number": 605, "text": "PART | V Storage Security\n572\n A review of all data on the SAN should be per-\nformed and appropriate classifications assigned based on \nbusiness requirements. These classifications should be \nassigned appropriate protection requirements. \n 9. SECURITY MANAGEMENT \n When any new SAN component (such as a storage array, \nswitch, or host) is installed, it usually contains at least \none factory configured access method (such as a default \nusername/password). It is imperative that any such access \nmethod be immediately reconfigured to conform to the \nsecurity policies that have been set up for the rest of the \nenvironment. Strong passwords should be utilized (as dis-\ncussed shortly) and, if possible, the account name should \nbe changed. \n Security Setup \n More complex components, such as the hosts and storage \nsystems, provide even more sophisticated security mech-\nanisms, such as support for access domains, which allow \nthem to be integrated into a broader security infrastruc-\nture. The mechanism should be configured according to \nestablished security policies immediately on installation. \n Unused Capabilities \n Most SAN hardware provides multiple methods of access \nand monitoring, such as a Web-based interface, a telnet \ncommand-line interface, and an SNMP interface. Many \nalso provide some method for uploading new versions of \nfirmware, such as FTP or Trivial File Transfer Protocol \n(TFTP). Any interface capabilities that will not be uti-\nlized on a regular basis should be explicitly disabled on \nevery SAN device. Update interfaces such as FTP should \nonly be enabled while an update is being performed and \nshould be immediately disabled when done, to prevent \nthem from being exploited during attacks. \n On SAN switches any ports that will not be used \nin the configuration should also be explicitly disabled \nto prevent them from being utilized if an attacker does \nmanage to gain physical access to the SAN. \n 10. AUDITING \n Virtually every component in a SAN provides some form \nof capability to log changes when they occur. For exam-\nple, some storage systems make a log entry when a new \nLUN is created; most Fibre Channel switches make log \nentries when zoning is changed. Regular audits should \nbe performed to ensure that the current configuration of \nthe SAN and all its components agree with the currently \ndocumented configuration, including any changes made \nthrough the change management system. \n Automated tools and scripts can be used to implement \nan effective auditing process. Most SAN components \nprovide the capability to download a detailed text listing \nthat shows the current configuration. By performing this \ntask on a regular basis and comparing the results to estab-\nlished baseline configurations, changes can be quickly \ndetected and, if necessary, investigated. \n Updates \n All patches and software/firmware updates to any SAN \ncomponents should be reviewed in a timely manner to \ndetermine whether they may have an impact on security. \nIf a particular update includes changes to address known \nsecurity vulnerabilities, it should be applied as quickly \nas possible. 
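 Automated auditing of this kind often amounts to diffing the current configuration dump of each component against its approved baseline. The sketch below assumes the configurations have already been exported as plain-text files (the file names and contents are invented for the demonstration) and is independent of any vendor's export format.

```python
import difflib
import hashlib
from pathlib import Path


def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()


def audit(baseline: Path, current: Path) -> list:
    """Return a unified diff when the collected config no longer matches the baseline."""
    if digest(baseline) == digest(current):
        return []  # unchanged since the last approved baseline
    return list(difflib.unified_diff(
        baseline.read_text().splitlines(),
        current.read_text().splitlines(),
        fromfile=str(baseline), tofile=str(current), lineterm=""))


if __name__ == "__main__":
    # Demonstration with two tiny, made-up switch configuration dumps.
    base, cur = Path("fc_switch_01.baseline"), Path("fc_switch_01.collected")
    base.write_text("zone prod_hosts\nsnmp disabled\ntelnet disabled\n")
    cur.write_text("zone prod_hosts\nsnmp enabled\ntelnet disabled\n")  # unapproved change
    drift = audit(base, cur)
    if drift:
        print("Unapproved configuration change detected; investigate via change management:")
        print("\n".join(drift))
```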
\n Monitoring \n It is important to continuously monitor an infrastruc-\nture to detect and prevent attacks. For example, a brute-\nforce password attack that generates an abnormally high \namount of TCP/IP network traffic to a SAN switch can \nbe easily detected and stopped using standard network \nmonitoring tools before any impact occurs. \n Monitoring can also detect changes to the SAN con-\nfiguration that may indicate an attack is underway. For \nexample, if an attacker attempts to “ spoof ” the world \nwide name (WWN) of another server to gain access to \nits storage, most switches will detect the existence of a \nduplicate WWN and generate error messages. \n Security Maintenance \n The security configuration of the SAN should be \nupdated in a timely manner to reflect changes in the \nenvironment — for example, the removal of a previously \nauthorized user or host from the infrastructure due to \ntermination or transfer. SAN ports should be managed \nto ensure that any unused ports are explicitly disabled \nwhen not being used, such as when a previously con-\nnected server is removed from the SAN. \n Configuration Information Protection \n All attacks on a SAN start with obtaining information on \nthe SAN’s configuration, such as number and types of \ncomponents, their configuration, and existing accounts. \nWithout sufficient information it is virtually impossible \n" }, { "page_number": 606, "text": "Chapter | 33 SAN Security\n573\nto carry out a successful attack on a SAN environment. \nIt is therefore critical that any information that provides \ndetails on how the IT infrastructure (including the SAN) \nis configured be carefully controlled. Some examples of \ninformation that can be used to plan an attack include: \n ● Network diagrams with TCP/IP addresses of SAN \ncomponents \n ● SAN diagrams with component information \n ● Lists of usernames and users \n ● Inventory lists of components with firmware revisions \n All such information should be labeled appropriately \nand its distribution restricted to only IT personnel with a \nconfirmed need to know. When it is necessary to share \nthis information with outside agencies (such as vendors \nwhen creating a Request for Proposal (RFP)). the infor-\nmation should be distributed utilizing positive controls, \nsuch as a password-protected view-only Adobe Acrobat \nPDF document. \n 11. MANAGEMENT ACCESS: SEPARATION \nOF FUNCTIONS \n Management of functions supported by the SAN should \nbe divided up among personnel to ensure that no single \nperson has control over the entire SAN. For example, one \nindividual should be responsible for managing the storage \nsystems, one responsible for configuring the SAN hard-\nware itself, and a different individual should be responsi-\nble for managing the hosts connected to the SAN. When \na change needs to be made it should be requested utiliz-\ning the change management system (see previous discus-\nsion), reviewed by all personnel involved, and thoroughly \ndocumented. \n Limit Tool Access \n Many Fibre Channel Host Bus Adapter (HBA) vendors \nprovide the capability to change the world wide name \nof an HBA. This is usually accomplished via either a \nhost-based utility or a firmware utility that gets executed \nwhen the host is booted. In the case of a host-based util-\nity, such utilities should be removed and the host should \nbe regularly scanned to ensure that they have not been \nreinstalled. 
For management purposes the utilities should \nbe copied to a CD-ROM and kept locked up until needed. \nFor firmware-based utilities, many vendors provide the \nability to disable the utility at boot time. \n Note that in either case changing the WWN of an HBA \nusually requires that the host be rebooted. Monitoring of \nthe environment should detect any unscheduled reboots \nand report them immediately. Also, any switches that \nthe host is connected to should detect the change in the \nWWN and report an entry in its internal log; monitor-\ning of these logs should also be performed and a security \nreport issued when such changes are detected. \n Limit Connectivity \n The management ports on most SAN components are \narguably the weakest points in terms of security. Most \nutilize a standard TCP/IP connection and some form of \nWeb-based or telnet protocol to access the management \ninterface. In most environments these connections are uti-\nlized infrequently — usually only when making a change \nto the configuration or requesting access to a log for audit-\ning purposes. To maximize security and defeat denial-of-\nservice forms of attacks, the port on the Ethernet switch \nthat these interfaces are connected to should be disabled \nwhen they are not being used. This minimizes the number \nof potential attack points within the SAN.\n Note: If a host requires network access to a storage sys-\ntem to perform its functions, disabling the port may not \nbe possible. \n Secure Management Interfaces \n LAN connections to management interfaces should be \nsecured utilizing some form of TCP/IP encryption such \nas Secure Sockets Layer (SSL) for Web-based interfaces \nor Secure Shell (SSH) for command-line interfaces. This \ncan ensure that even in the event an attacker does manage \nto gain access to a management LAN, it will be difficult \nfor them to gain any useful information, such as user-\nnames and passwords. \n 12. HOST ACCESS: PARTITIONING \n Partitioning defines methods to subdivide networks to \nrestrict which components have access to other compo-\nnents. For Fibre Channel SANs this involves two distinct \nnetworks: the LAN to which the management interfaces \nof the components are connected and the SAN itself. \n The network management interfaces for all SAN \ncomponents should be connected to an isolated LAN \n(e.g., a VLAN). One or two management stations should \nbe configured into the VLAN for management purposes. \nIf a host requires LAN access to SAN components to \nfunction, a dedicated Network Interface Card (NIC) \n" }, { "page_number": 607, "text": "PART | V Storage Security\n574\nshould be used in that host and included in the VLAN. \nFor maximum security there should be no external routes \ninto this management LAN. \n The SAN itself should be partitioned using zoning. \nThere are two forms of zoning: soft (or world wide name) \nzoning, which restricts access based on the world wide \nname (WWN), and hard zoning, which restricts access \nbased on the location of the actual physical connection. \nSoft zoning is generally easier to implement and manage, \nsince changes to zones are done entirely via the manage-\nment interface and don’t require swapping physical con-\nnections. However, should an attacker manage to gain \ncontrol of a host connected to a SAN and fake the world \nwide name of another server (sometimes referred to as \n WWN spoofing ), they could potentially gain access to a \nbroader range of the storage on the SAN. 
Hard zoning \nrequires slightly more effort when making changes, but it \nprovides a much higher level of security, since an attacker \nwould have to gain physical access to the SAN compo-\nnents to gain access outside the server they subverted. \nNote that for most current SAN switches, hard zoning is \nthe default form of zoning. \n A relatively new partitioning capability provided by \nsome switch vendors is the concept of a virtual SAN, \nor VSAN. VSANs are similar to VLANs on an Ethernet \nnetwork in that each VSAN appears to be a fully separate \nphysical SAN with its own zoning, services, and man-\nagement capabilities, even though multiple VSANs may \nreside on a single physical SAN switch. Strict segrega-\ntion is maintained between VSANs, ensuring that no traf-\nfic can pass between them. \n Finally, the storage systems themselves should be \npartitioned utilizing LUN masking, which controls which \nservers can access which LUNs on the storage system. \nMost modern SAN storage systems provide some form \nof LUN masking. \n Combining VSANs and port zoning with LUN mask-\ning on the storage system provides a degree of defense in \ndepth for the SAN, since an attacker would need to pen-\netrate multiple separate levels of controls to gain access \nto the data. \n S_ID Checking \n When packets are transmitted through a SAN, they usually \ncontain two fields that define where the packet originated: \nthe source ID (S_ID) and the destination ID (D_ID). \nUnder some configurations, such as soft zoning, the S_ID \nmay not be validated, allowing an illegal host on the SAN \nto send packets to a storage server. Some switch vendors \nprovide the capability to force S_ID checking under all \nconfigurations; if this capability is available it should be \nenabled. Note that hard zoning (discussed earlier) will \nminimize the need for S_ID checking, since the available \npath for any SAN traffic will be strictly controlled. \n Some high-end storage systems provide an even stricter \nmethod of defeating S_ID attacks, called S_ID lockdown . \nThis SID feature provides additional security for data \nresiding within the system. Since a WWN can potentially \nbe spoofed to match the current WWN of another HBA, \na host with a duplicate WWN can gain access to the data \ndestined for the spoofed HBAs. S_ID lockdown prevents \nan unauthorized user from spoofing the WWN of an \nHBA. When the S_ID lockdown feature is enabled, the \nsource ID (SID) of the switch port to which the protected \nHBA is connected is added to the Virtual Configuration \nManagement Database (VCMDB) record. Once an asso-\nciation between the HBA’s WWN, the SID, and the fiber \nadapter is created, the HBA is considered locked. When a \nSID is locked, no user with a spoofed WWN can log in. If \na user with a spoofed WWN is already logged in, that user \nloses all access through that HBA. \n 13. DATA PROTECTION: REPLICAS \n Many practices can be implemented that can significantly \nenhance the security of Fibre Channel SANs, but it is \nvirtually impossible to guarantee that no attack will ever \nsucceed. Though little can be done after data has been \nstolen, having in a SAN multiple replicas of the data that \nare updated regularly can help an organization recover \nfrom an attempted denial-of-service attack (see the sec-\ntion titled “ Denial-of-Service Attacks ” for more details). \nThis includes not only having backups but maintaining \nregular disk-based replicas as well. 
For example, per-\nforming a point-in-time incremental update of a clone of \na LUN every four hours provides a recovery point in the \nevent the LUN gets corrupted or deleted by an attacker. \nIt is critical that all replicas, whether they are disk or tape \nbased, be protected at the same level as the original data. \n Erasure \n Any data that is stored in a Fibre Channel SAN is gener-\nally stored on some form of nonvolatile media (e.g., disk \ndrive, tape, and so on). When that media reaches the end \nof its useful life, such as when upgrading to a new stor-\nage system, it is usually disposed of in the most efficient \nmanner possible, usually with little consideration that the \nmedia may still contain sensitive data. \n Any media that may have ever contained sensitive \ndata should undergo a certified full data erasure procedure \n" }, { "page_number": 608, "text": "Chapter | 33 SAN Security\n575\nbefore leaving your infrastructure or be disposed of by \na vendor that can provide assurance that the media will \nundergo such a procedure and will be under positive con-\ntrol until the procedure occurs. For extremely sensitive \ndata, certified destruction of the media should be consid-\nered. The same level of consideration should be given to \nany media that may have contained sensitive data at one \ntime, such as disks used to store replicas of data (e.g., \nsnapshots or clones). \n Potential Vulnerabilities and Threats \n To effectively understand how secure a SAN is, it is \nimportant to understand what potential vulnerabilities \nexist and the types of attacks it could potentially face. \n Physical Attacks \n Physical attacks involve gaining some form of physi-\ncal access to the SAN or the data stored on it. This may \ninvolve gaining access to the SAN switches to plug in an \nillegal host to be used for other attacks, stealing the disk \ndrives themselves, or stealing backup tapes. It may also \ninvolve even more subtle methods, such as purchasing \nused disk media to search them for data that hasn’t been \nerased or “ dumpster diving ” for old backup tapes that \nmay have been disposed of in the trash. The following \nare physical attack countermeasures: \n ● Solid physical security practices, such as access \ncontrol to the datacenter and locking racks for \nequipment, will defeat most physical attacks. \n ● Security monitoring of the environment will \ndetect any changes to the SAN, such as a new host \nattempting to log in and fabric topology changes. \n ● Host-based encryption of critical data will ensure \nthat the data on any stolen media cannot be accessed. \n ● Hard zoning and VSANs will limit the amount of \naccess an attacker can obtain even if they do manage \nto gain access to an unused port. \n ● Explicitly disabling any unused (open) ports on a \nSAN switch will prevent them from being used in the \nevent an attacker does gain access. The attacker will \nbe forced to unplug an existing connection to gain \naccess, which should become immediately apparent \nin any environment with even minimal monitoring. \n ● Regular audits can detect any changes in the physical \ninfrastructure. \n ● Implementing data erasure procedures can prevent an \nattacker from gaining access to data after old media \nhas been disposed of. \n Management Control Attacks \n Management control attacks involve an attacker attempt-\ning to gain control of the management interface to a SAN \ncomponent. 
This involves accessing the LAN that the \nmanagement interface is on and utilizing some form of \nusername/password cracking technique or TCP/IP attack \n(e.g., buffer overflow) to gain control of the interface. \nThis type of attack is usually the first phase in a more \ndetailed attack or else an attempt to deny access to SAN \nresources. The following are management control attack \ncountermeasures: \n ● Setting up the initial security utilizing strong security \npolicies will increase the ability of the SAN to resist \nthese types of attacks. \n ● Strong password policies will hinder these types of \nattacks by making it difficult for an attacker to guess \nthe passwords. \n ● A formal change management system and regular \nauditing will allow any successful attacks to be \ndetected. \n ● Partitioning these interfaces into a VLAN will \nminimize the number of potential avenues of attack. \n ● Defense-in-depth will force an attacker to penetrate \nmany layers of security to gain access to the \nmanagement interfaces, significantly decreasing the \nprobability of success and increasing the probability \nof detection. \n ● Regular security maintenance will ensure that \nan attacker cannot gain access by using an old \naccount. \n ● Active monitoring will detect significant changes in \nLAN traffic going to these interfaces. \n ● Limiting connectivity to management ports when \nnot required can limit the available window for \nsuch an attack. \n ● Regular auditing will detect any changes to the \nmanagement environment and ensure that the \nsecurity configuration is up to date. \n ● Performing a vendor security review will ensure that \nthe SAN components have been configured for the \nmaximum level of security. \n Host Attacks \n Host attacks have the greatest potential risk of occurring, \nsince attacking operating systems via a TCP/IP network \nis the most widely understood and implemented form of \nattack in the IT industry. These types of attacks usually \ninvolve exploiting some form of weakness in the oper-\nating system. Once an attacker has gained control of \n" }, { "page_number": 609, "text": "PART | V Storage Security\n576\nthe host, they can then proceed to attack the SAN. The \nfollowing are host attack countermeasures: \n ● A solid initial security setup will minimize the \nnumber of potential vulnerabilities on a host. \n ● Strong password policies will minimize the risk of \nan attacker gaining access to the host. \n ● A formal change management system and regular \nand active auditing and monitoring will detect the \nchanges an attacker will have to make to a host to \ngain access to the SAN. \n ● Hard zoning on the SAN and LUN masking will \nlimit the amount of data an attacker may be able to \ngain access to if they manage to subvert a host on \nthe SAN. \n ● Defense in depth will reduce the probability of an \nattacker gaining access to a host in the first place. \n ● Timely security maintenance will ensure that an \nattacker cannot penetrate the host utilizing an unused \naccount. \n ● Installing security updates in a timely manner \nwill ensure that an attacker cannot exploit known \nvulnerabilities in the host’s operating system. \n ● Regular auditing can detect changes in the host \nenvironment that may indicate an increased level of \nvulnerability. \n ● Classification of the data in the SAN can ensure that \neach host is protected at the level that is appropriate \nfor the data it can access. 
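 As noted under “Limit Tool Access,” hosts attached to the SAN should be scanned regularly to verify that utilities capable of changing an HBA’s WWN have not been reinstalled. The short Python sketch below illustrates one way such a scan might be scripted; the utility names and search paths are hypothetical placeholders and would need to be replaced with the file names documented by the HBA vendor in use.

#!/usr/bin/env python3
"""Scan a host for HBA management utilities capable of changing a WWN.
The utility names and search roots below are placeholders; substitute the
actual file names documented by your HBA vendor."""

import os

SUSPECT_UTILITIES = {"hba_wwn_tool", "fc_adapter_util"}   # hypothetical names
SEARCH_ROOTS = ["/usr/sbin", "/usr/local/bin", "/opt"]

def find_utilities():
    hits = []
    for root in SEARCH_ROOTS:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if name in SUSPECT_UTILITIES:
                    hits.append(os.path.join(dirpath, name))
    return hits

if __name__ == "__main__":
    found = find_utilities()
    if found:
        # Flag for follow-up: these tools should live on locked-up media,
        # not on production hosts attached to the SAN.
        print("WWN-capable utilities found:", ", ".join(found))
    else:
        print("No known WWN-changing utilities found on this host.")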
\n World Wide Name Spoofing \n WWN spoofing involves an attacker assuming the identity \nof another host by changing the WWN of an HBA to gain \naccess to that host’s storage. This type of attack can occur \nin one of two ways: by subverting an existing host and \nchanging its existing WWN or by installing a new host \nthat the attacker controls on the SAN. Note that chang-\ning the WWN name of an HBA requires a host to be \nrebooted, which should be easily detectable with stand-\nard monitoring tools. The following are WWN spoofing \ncountermeasures: \n ● Installing a new host requires physical access to \nthe SAN, which can be defeated by the methods \ndescribed in the section titled “ Physical Attacks. ” \n ● Partitioning the SAN utilizing hard zoning tightly \ncontrols what resources an existing host can access, \neven if its WWN changes. \n ● Enabling port binding on the switch to uniquely \nidentify a host by WWN and port ID on the fabric. \n ● Enable S_ID lockdown if the feature is available. \n ● Changing the WWN of an HBA in an existing host \nrequires the attacker to first subvert the host, which \nis addressed in the section titled “ Host Attacks. ” \n ● Utilizing host-based encryption can prevent \nan attacker from reading any data, even if they \ndo manage to subvert the SAN, since the host \nperforming the spoofing should not have access to \nthe encryption keys used by the original host. \n ● Ensuring the tools necessary to change the WWN of \nan HBA are not installed on any host can prevent an \nattacker from spoofing a WWN. \n Man-in-the-Middle Attacks 1 \n Man-in-the-middle attacks involve an attacker gain-\ning access to Fibre Channel packets as they are being \nexchanged between two valid components on the SAN \nand requires the attacker have a direct connection to the \nSAN. These types of attacks are roughly analogous to \nEthernet sniffer attacks whereby packets are captured \nand analyzed. Implementing this type of attack requires \nsome method that allows an attacker to gain access to \npackets being sent to other nodes on the SAN (referred \nto as promiscuous mode on Ethernet LANs), which is not \ngenerally supported in the Fibre Channel protocol. The \nfollowing are man-in-the-middle attack countermeasures : \n ● Since this type of attack requires that an attacker \nbe physically plugged into the SAN, they can be \ndefeated by the methods described in the section \ntitled “ Physical Attacks. ” \n ● Disable any port-mirroring features on a SAN switch \nif they are not being used. This prevents an attacker \nfrom gaining access to SAN configuration data. \n ● By utilizing host-based encryption the data contained in \nany intercepted packets cannot be read by the attacker. \n E-Port Replication Attack \n In an e-port attack, an attacker plugs another switch or a \nspecially configured host into the e-port on an existing \nswitch in the SAN. When the switch sees a new valid peer \nconnected on the e-port, it will send it a copy of all its \nconfiguration tables and information. This method is not \nnecessarily an actual attack in and of itself but a method to \ngain information to be used to perpetuate other attacks. 
The \nfollowing are e-port replication attack countermeasures: \n ● Since this type of attack requires that an attacker \nbe physically plugged into the SAN, they can be \n1 An attack in which an attacker is able to read, insert, and modify \nmessages between two parties without either party knowing that the \nlink between them has been compromised.\n" }, { "page_number": 610, "text": "Chapter | 33 SAN Security\n577\ndefeated by the methods described in the section \ntitled “ Physical Attacks. ” \n ● Enable switch and Fabric binding on the switch \nto “ lock down ” the topology and connectivity of \nthe fabric after initial configuration and after any \nlegitimate changes are made. \n Denial-of-Service Attacks \n Denial-of-service (DoS) attacks are designed to deprive \nan organization of access to the SAN and the resources \nit contains. These types of attacks can take many forms, \nbut they usually involve one of the following: \n ● Saturating a component with so much traffic that it \ncannot perform its primary function of delivering \ndata to hosts \n ● Taking advantage of a known vulnerability and \ncrashing a component in the SAN \n ● Gaining access to the management interface and \ndeleting LUNs to deprive the owner of access to \nthe data \n A new type of attack that has surfaced recently also \nfits into this category. An attacker gains access to the \ndata, usually through a host, encrypts the data, and then \ndemands payment to decrypt the data (that is, extortion). \nThe following are DoS attack countermeasures: \n ● Partitioning the LAN that the SAN component \nmanagement interfaces are on can prevent an \nattacker from ever gaining access to those com-\nponents to implement a DoS attack. This includes \ndisabling those interfaces when they are not in use. \n ● Defense in depth will force an attacker to defeat \nseveral security layers to launch the DoS attack, \nreducing the probability of success and increasing \nthe probability of detection before the attack can be \nlaunched. \n ● Deploying VSANs will prevent DoS traffic on one \nSAN from interfering with the others in the event of \na successful attack. \n ● Maintaining up-to-date protected replicas of all data \ncan allow easy recovery in the event a DoS attack \nresults in data being deleted or encrypted. \n Session Hijacking Attacks \n A session hijacking attack involves an attacker intercept-\ning packets between two components on a SAN and tak-\ning control of the session between them by inserting their \nown packets onto the SAN. This is basically a variant of \nthe man-in-the-middle attack but involves taking control \nof an aspect of the SAN instead of just capturing data \npackets. As with man-in-the-middle attacks, the attacker \nmust gain physical access to the SAN to implement this \napproach. Session hijacking is probably more likely to \noccur on the LAN in an attempt to gain access to the \nmanagement interface of a SAN component. The follow-\ning is a session hijacking attack countermeasure: Since \nthis type of attack requires that an attacker be physically \nplugged into the SAN, they can be defeated by the meth-\nods described in the section titled “ Physical Attacks. ” \n Table 33.1 summarizes the various best practices and \nthe potential vulnerabilities they address. \n 15. ENCRYPTION IN STORAGE \n Encryption is used to prevent disclosure of either stored \nor transmitted data by converting data to an unintelligible \nform called ciphertext . 
Decryption of the ciphertext converts the data back into its original form, called plaintext.
 For environments that require even higher levels of security you can encrypt all transmissions (data and control) within the SAN utilizing a commercially available SAN encryption device. Also, for extremely sensitive data, host-based encryption should be considered. Most modern operating systems provide some form of encryption for their file systems. By utilizing these capabilities, all data is encrypted before it even leaves the host and is never exposed on the SAN in an unencrypted form.
 The Process
 Encryption reduces the problem of securely sharing information to that of securely sharing a small key used to encrypt the information. In a two-party system, a process similar to the following steps would be followed. 2
 1. Alice and Bob agree on an encryption algorithm to be used.
 2. Alice and Bob agree on a key to be used for encryption/decryption.
 3. Alice takes her plaintext message and encrypts it using the algorithm and key.
 4. Alice sends the resulting ciphertext message to Bob.
 5. Bob decrypts the ciphertext message with the same algorithm and key as the original encryption process.
 6. Any change in the key or encryption algorithm has to be agreed on between Alice and Bob. The process of converting to a new key or algorithm requires decrypting the ciphertext using the original key and algorithm and reencrypting with the new key and algorithm. It is important that the key management system used securely preserves the old key for as long as the data retention policy for that data prescribes. Premature destruction of the key will result in loss of data.
2 N. Ferguson, Practical Cryptography, Wiley Publishing, 2003.
 The secure exchange of data in a two-party system is typically accomplished using a public/private key mechanism. Protecting data at rest, however, is best handled with a symmetric (private) key because the data is accessed from fixed and/or known locations. Typically one host would use the same algorithm and key to encrypt the data when writing to disk/tape and to decrypt the data when reading from disk/tape. In the case of multipathing or situations in which multiple applications from different nodes will access the data, centralized key management is essential.
 Throughout the remainder of the chapter, only symmetric-key encryption will be discussed and will be referred to simply as encryption. Symmetric-key encryption, as noted, refers to the process by which data is encrypted and decrypted with the same key. This method of encryption is more suited to the performance demands of data path operations. Asymmetric-key encryption refers to the process where encryption is performed with one key and decryption is performed with another key, often referred to as a public/private key pair. Asymmetric-key encryption is not well suited to encrypting bulk data at rest due to performance constraints and manageability.
 Encryption Algorithms
 The algorithm used can be any one of a variety of well-known cryptosystems described in the industry.
 TABLE 33.1 Best Practices and Potential Vulnerabilities

 Best Practice                            Phys  Mgmt  Host  WWN   MitM  E-Port  DoS  Hijack
 Physical access                           X                 X     X     X            X
 Change management                               X     X     X
 Password policies                               X     X     X
 Defense in depth                                X     X     X                  X
 Vendor review                                   X
 Data classification                                   X
 Security setup                            X     X     X     X
 Unused capabilities                       X     X
 Auditing                                  X     X     X
 Updates                                               X     X
 Monitoring                                X     X
 Security maintenance                            X     X     X
 Configuration information protection      X     X     X     X     X     X      X     X
 Separation of functions                         X
 Tool access                                                 X
 Limit connectivity                              X
 Partitioning                              X     X     X     X     X     X      X
 S_ID checking                                               X
 Encryption                                X                 X     X
 Replicas                                                                       X
 Erasure                                   X

 (Threat columns: Phys = physical attack; Mgmt = management control; Host = host attack; WWN = WWN spoofing; MitM = man-in-the-middle; E-Port = e-port replication; DoS = denial of service; Hijack = session hijacking.)

 The U.S. Federal Information Processing Standards (FIPS) document the Advanced Encryption Standard (AES 3) and specify it as the industry-standard algorithm in the United States. AES is the most common algorithm implemented in the current encryption methods described as follows. Triple-DES (Data Encryption Standard) is still a certified algorithm by the National Institute of Standards and Technology (NIST) and may be used but is not recommended. 4
 Encryption algorithms typically operate on block lengths of 64 to 128 bits. To encrypt longer messages an encryption mode of operation may be used, such as:
 ● CBC–Cipher-block chaining
 ● CTR–Counter
 ● XTS–Tweakable narrow block
 ● GCM–Galois/counter mode
 The CBC, CTR, and GCM modes of operation used for encryption require the use of an initialization vector (IV), or nonce. The IV is a seed block used to start and provide randomization to the encryption process. The same IV and key combination must not be used more than once. XTS is the only one of the four that does not require an IV but instead has a second key called the tweak key.
 In the event the length of the message to be encrypted is not a multiple of the block size, it may be required to pad the final block.
 Key Management
 The protection potentially afforded by encryption is only as good as the management, generation, and protection of the keys used in the encryption process. Keys must be available and organized in such a fashion that they can be easily retrieved, but at the same time, access to keys must be tightly controlled and limited only to authorized users. This attention to key management must persist for the lifetime of the data, not just the lifetime of the system that generates or encrypts the data. Generation of keys should follow some simple guidelines:
 ● The key generated must be random, for example, as specified by FIPS 186-2. 5 There can be no predictability to the key used for encryption; pseudorandom number generators are not acceptable for key generation.
 ● Key length for AES can be 128, 192, or 256 bits.
 Once the keys are generated, their protection is crucial to guaranteeing confidentiality. This requires the following:
 ● Secure access to the key management solution.
The \nkey management solution must provide a method \nto guarantee that unauthorized access to keys is \nrestricted. This access restriction should also extend \nto the facility for generating and managing keys. \nThis can be accomplished via a number of mecha-\nnisms including secure Web, smart cards, or split key \narrangements. The key management solution must \nalso protect against physical tampering as outlined in \nFIPS 140-2. 6 \n ● Backup and recovery facilities for configuration \nand key information. This information itself must \nbe encrypted and stored to a secure backup medium \n(for example, a smart card). The keys used for \nencryption must never be visible in plaintext outside \nthe key management solution and, under most \ncircumstances, should not be visible at all. For \nadditional security, the recovery of the configuration/\nkeys should be performed by a group of security \nadministrators. This eliminates the potential for \nmisuse due to corruption of a single administrator \nand utilizes a group key recovery model where M of \n N (that is, 2 of 3, 3 of 5, and so on) or a quorum of \nadministrators is needed to reconstruct an encrypted \nconfiguration. \n ● The ability to apply high availability and business \ncontinuity practices and protocols to key stores. \n ● The ability to store keys and identify where they have \nbeen used for the lifetime of the data. This covers \ndata that is written to tape and that may be read up to \n30 years later. \n ● Integrity checking of keys. This is particularly \nimportant if there are no integrity checks on the data. \n ● Comprehensive logging and regular auditing of how \nand when the keys are used. \n Key management can be distributed or centralized. \nA common implementation of these requirements is a \nkey management station that can reside either online with \nthe encryption engine or out of band via TCP/IP. The key \nmanagement station provides a centralized location where \nkeys can be managed and stored securely and meet the \n3 FIPS 197, http://csrc.nist.gov/publications/fi ps/fi ps197/fi ps-197.pdf.\n4 FIPS 46-3, http://csrc.nist.gov/publications/fi ps/fi ps46-3/fi ps46-3.pdf.\n5 http://csrc.nist.gov/publications/fi ps/fi ps186-2/fi ps186-2-change1.pdf.\n6 http://csrc.nist.gov/cryptval/140-2.htm.\n" }, { "page_number": 613, "text": "PART | V Storage Security\n580\nstringent standards of FIPS 140-2. At this point, there are \nvery few certified, standalone key management systems. \n Configuration Management \n In configuring any of the methods for encrypting data \ndescribed here, there are several common steps that need \nto be executed. The unit to be encrypted needs to be \nidentified (for example, record, file, file system, volume, \ntape) and an associated key needs to be generated. This \nconfiguration information needs to be recorded, securely \ntransmitted to the encryption engine, and securely stored \nfor the lifetime of the encrypted data. \n To ensure access to the encrypted data, the configu-\nration must account for all paths available to the data \nand identify which applications, hosts, or appliances will \naccess the data through those paths. Each needs access to \nthe algorithm and key to be able to read/write uniformly \nfrom each path. In addition, replicas (for example, snaps, \nclones, and mirrors) need to be identified and associated \nwith the original source data to ensure that they can also \nbe correctly decoded when read. \n 16. 
APPLICATION OF ENCRYPTION \n Encryption is only one tool that can be applied as \npart of a comprehensive information security strategy, \nand as such, should be applied selectively, only where \nit makes sense. Determining exactly where and how this \ntakes place begins with an assessment of risks to the \ndata, the suitability of encryption to address the risk, and \nthen, if appropriate, the options for deployment of the \ntechnology. \n Risk Assessment and Management \n Risk assessment is a calculation that requires three key \npieces of information: the number and nature of threats, \nthe likelihood of a threat being realized in the form of \nan attack, and the impact to the business in the event the \nattack succeeds. Let’s consider these in the context of a \ndecision of whether it is appropriate to deploy encryption \ntechnology. \n As administrators manage the flow of data from \napplication to storage, they need to understand the nature \nof possible threats to the data and the likelihood of occur-\nrence. These threats may take the form of: \n ● Unauthorized disclosure \n ● Destruction \n ● Denial of service \n ● Unauthorized access \n ● Unauthorized modification \n ● Masquerade 7 \n ● Replay 8 \n ● Man-in-the-middle attacks \n These threats may occur at any point from where the \ninformation is generated to where it is stored. For each \nof these threats, an evaluation must be made as to the \nlikelihood of attacks occurring and succeeding in light of \nexisting protection measures. If any attack is determined \nto be likely, the value of the information subject to threat \nmust be also considered. If the value to the business of \nthe data being threatened is low, it ultimately may not \nwarrant additional protection. \n For those risks deemed to be significant, another cal-\nculation is required: are the tradeoffs of the proposed \nsolution (in this case, encryption) worth making in con-\ntext of the level of threat to the data. Considerations \nshould include: \n ● Cost to deploy \n ● Level of threat \n ● Severity of vulnerability \n ● Consequences \n ● Detection time \n ● Response time \n ● Recovery time \n ● New risks introduced by encryption, such as prema-\nture loss of keys \n In this case, by restricting access to the information \nvia authentication and authorization, the administrator \ncan identify who has rights to use the information as well \nas who has attempted to use the information. Access priv-\nileges can be granted at various points in the information \nflow: at the application, operating system, network, and \nstorage platform layers. If these measures are deemed \ninsufficient, encryption might provide another layer of \ndefense. \n Modeling Threats \n To make this process more specific to the problem at \nhand, Figure 33.1 illustrates some of the risks to data \n7 An attack in which a third party tries to mislead participants in a \nprivileged conversation using forged information.\n8 A form of network attack in which a valid transmission is mali-\nciously or fraudulently repeated or delayed.\n" }, { "page_number": 614, "text": "Chapter | 33 SAN Security\n581\nin the enterprise. By understanding the attacks that \ncan occur, administrators can determine where encryp-\ntion may help to protect data and where it would not be \napplicable. 
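 The likelihood-and-impact calculation described in this section can be captured in a simple worksheet or script before any deployment decision is made. The following Python sketch is purely illustrative: the threat names are drawn from the list above, but the numeric scores, the threshold, and the judgment of whether encryption addresses a given threat are placeholder values that each organization must supply from its own assessment.

"""Toy risk-ranking aid: score each threat as likelihood x impact and list
the threats whose score exceeds a threshold.  All values are illustrative
placeholders and must come from an organization's own assessment."""

THRESHOLD = 9  # illustrative cut-off for "needs additional controls"

threats = [
    # (threat, likelihood 1-5, business impact 1-5, addressed by encryption?)
    ("Unauthorized disclosure",   3, 5, True),
    ("Media theft",               2, 5, True),
    ("Denial of service",         3, 4, False),
    ("Unauthorized modification", 2, 4, False),
]

for name, likelihood, impact, encryptable in sorted(
        threats, key=lambda t: t[1] * t[2], reverse=True):
    score = likelihood * impact
    if score >= THRESHOLD:
        remedy = ("encryption is a candidate control" if encryptable
                  else "encryption does not address this threat")
        print(f"{name}: risk score {score} -- {remedy}")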
\n Figure 33.1 shows the following: \n ● Encrypting the information at the application level \nprotects against unauthorized viewing of information \nat the operating system (user) and network levels, \nas well as protects against media theft. However, \nencryption at this level will not protect against \nunauthorized access at the application level (as the \ninformation is decrypted at that point) nor root access \nfrom the operating system unless strong application \naccess controls are in place. \n ● Encrypting the information at the host or operating \nsystem level protects against unauthorized viewing of \ninformation at the network level as well as protects \nagainst media theft. Encryption at this level will not \nprotect against unauthorized access at the application \nor operating system level as the information is \ndecrypted at that point. Access control technology \nwould be required to provide additional security at \nthe operating system and application levels. \n ● Encrypting the information in the network protects \nagainst unauthorized viewing of information from \nthe encryption device to the storage device in the \nnetwork as well as protects against media theft. \nEncryption at this level will not protect against \nunauthorized access at the application or operating \nsystem level or in the network up to the encryption \ndevice as the information is decrypted at that point. \n ● Encrypting the information at the device level \nprotects against media theft. Encryption at this \nlevel will not protect against unauthorized access \nat the application or operating system level or in \nthe network as all data external to the device is \nunencrypted. \n Use Cases for Protecting Data at Rest \n The following are some specific use cases that warrant \ndeployment of encryption of data at rest. The primary \nuse case is protecting data that leaves administrators ’ \ndirect control. Some examples of this situation include: \n ● Backup to tape \n – Tapes that are sent offsite \n ● Removal of disk for repair \n – Key-based data erasure for removed disk or array \nfor return \n – Data sent to a disaster recovery or remote site \n ● Protection of data between and in disaster recovery \nsites \n – Consolidating data from many geographies to a \nsingle datacenter while still following each coun-\ntry’s security laws \n – Using Type 1 encryption to share data between \nmultiple secure sites \n – Data in harm’s way (used for military applica-\ntions such as planes, Humvees, embassies) \n ● Data extracts sent to service providers and partners \n – Outsourcing scenarios where sensitive data \nresides in vendor systems \nUnauthorized\naccess to\napplication\nMedia\ntheft\nSpoofing\nhost identity\nUnauthorized\naccess to host\nOS\nConnectivity\nServers\nFibre Channel Tape Library\nStorage Array\n FIGURE 33.1 Threats to plaintext customer data. \n" }, { "page_number": 615, "text": "PART | V Storage Security\n582\n A second use case is protecting data from unauthor-\nized access in the datacenter when existing access con-\ntrols are deemed to be insufficient. 
Some examples of this \nsituation are: \n ● Shared/consolidated storage used by numerous groups \n – Sharing a single datacenter/array for multiple \nlevels of security \n – Sharing a platform between an intranet and \nInternet for consolidation \n ● Protecting data from insider theft (employees, \nadministrators, contractors, janitors) \n ● Protection of application/executables from alteration \n In addition, data encryption is mandated or recom-\nmended by a number of regulations. Deploying encryp-\ntion will enable or aid in compliance. Selected examples \nof these regulations include: \n ● Sarbanes-Oxley Act. U.S. regulation with respect to \ndisclosure of financial and accounting information. \n ● CA 1798 (formerly SB-1386). California state \nlegislation requiring public disclosure when \nunencrypted personal information is compromised. \n ● HIPAA. U.S. health-care regulation that recommends \nencryption for security of personal information. \n ● Personal Information Protection Act. Japanese \nregulation on information privacy. \n ● Gramm-Leach-Bliley Act. U.S. finance industry \nregulation requiring public disclosure of personal \ndata breaches. \n ● EU Data Protection Directive. European Union \ndirective on privacy and electronic communications. \n ● National data privacy laws. Becoming pervasive \nin many nations, including Spain, Switzerland, \nAustralia, Canada, and Italy. \n Use Considerations \n There are additional factors to consider when using \nencryption: \n ● Data deduplication at the disk level may be affected. \nAny good encryption algorithm will generate dif-\nferent ciphertext for the same plaintext in different \ncircumstances. As a result, algorithms for capacity \nreduction by analyzing the disk for duplicate blocks \nwill not work on encrypted data. \n ● Encrypted data is not compressible. Lossless \ncompression algorithms could potentially expand \nas often as they compress encrypted data if applied. \nThis will impact any WAN connectivity needing to \ntransmit encrypted traffic. \n ● There is overhead in converting current plaintext \ndata to ciphertext. This is done as a data migration \nproject, even when it is done in place. Host \nresources, impact to CPU utilization, and running \napplications must be considered. \n ● An additional benefit to encrypting data at any level \ndescribed is the ability to provide data shredding \nwith the destruction of the key. This is especially \nefficient when there are multiple, distributed copies \nof the data encrypted with the same key. For the \ndata to be considered shredded, all management \ncopies of the key need to be destroyed for all \nsecurity administrators, smart cards, backups, key \nmanagement stations, and so on. Key destruction \nmust follow similar guidelines as to the data erasure \noutlined in NIST SP 800-88. \n Deployment Options \n As we have seen, the use cases discuss why encryption \nwould be used and the threats being protected against \ndetermine where encryption should be deployed. The \nfollowing sections discuss in further detail deployments \nat each layer of the infrastructure. \n Application Level \n Perhaps the greatest control over information can be \nexercised where it originates, from the application. The \napplication has the best opportunity to classify the infor-\nmation and manage who can access it, during what times \nand for what purpose. 
If the administrator has concerns/\nrisks over the information at all levels in the infrastruc-\nture, it makes sense to begin with security at the applica-\ntion level and work down. In this case application-based \nencryption should be an option. Adding encryption at the \napplication level allows for granular, specific information \nto be secured as it leaves the application. For example, a \ndatabase could encrypt specific rows/columns of sensi-\ntive information (for example, Social Security numbers \nor credit-card numbers) while leaving less sensitive infor-\nmation unencrypted. Attempts to snoop writes-to-disk \nor to read-data directly from disk without the application \ndecrypting it would yield useless information. \n Encryption at the application level provides security \nfrom access at the operating system level as well as from \nother applications on the server as shown in Figure 33.2 . \nThe application would still need to provide user authen-\ntication and authorization to guarantee that only those \nwith a need to know can access the application and the \ndata. If the application lacks these strong access controls, \n" }, { "page_number": 616, "text": "Chapter | 33 SAN Security\n583\napplication-based encryption will provide no additional \nsecurity benefit. End-user activities with data after it is \nconverted to clear text are potentially the highest secu-\nrity risk for organizations. \n There are some drawbacks to encrypting at the \napplication level. First, encryption is done on a per-\napplication basis. If multiple applications need encryp-\ntion, each would have to handle the task separately, cre-\nating additional management complexities to ensure that \nall confidential data is protected. Second, application-\nlevel encryption solutions are typically software based. \nEncryption is a CPU-intensive process and will com-\npete with normal operating resources on the server. In \naddition, the encryption keys will be stored in dynamic, \nnonvolatile memory on the server. If a hacker were to \nbreak into the server and find the keys, the information \ncan be decrypted. Externalizing the encryption engine or \nkey manager may address these issues at the expense of \nadditional solution cost. An external key manager also \nenables clustered applications to share key information \nacross nodes and geography (provided that each node \ncan supply a secure channel from the server to the key \nmanager.) If FIPS 140-2 compliance is a requirement \nfor the encryption solution, an external appliance is \ntypically used. \n Application-based encryption also presents chal-\nlenges in the area of rekeying. Any effort to rekey the \ndata (to protect the integrity of the keys) will have to \nbe done by the application. The application will need to \nread and decrypt the data using the old key and reencrypt \nand rewrite to disk using the new key. The application \nwill also have to manage old and new key operations \nuntil all the previously encrypted information is reen-\ncrypted with the new key. This most likely will be done \nwhile the application is handling normal transactions, \nagain presenting resource contention issues. \n Another challenge occurs with the introduction of \nediscovery solutions in the enterprise. Encryption at the \napplication level will expose only encrypted information \nto other applications (including backup) and devices in \nthe stack. 
Any attempt to perform analysis on the data \nwill be useless as patterns and associations will be lost \nthrough the randomization process of encryption. To \naccomplish any analysis the ediscovery applications will \nneed to be associated and linked to the application per-\nforming encryption to allow for a decryption of data at \na level outside of the application, and a possible security \nrisk could be introduced. \n Application-based encryption (see Figure 33.2 ) must \nalso account for variable record lengths. Encryption \nschemes must pad data up to their block size to generate \nvalid signatures. Depending on the implementation, this \nmay require some changes to application source code. \n Application-based encryption doesn’t take into \naccount the impact on replicated data. Any locally rep-\nlicated information at the storage layer, that is, a clone, \ndoes not have visibility into the application and the keys \nand the application does not have visibility into the repli-\ncation process. Key management can become more com-\nplex. In addition, compression in the WAN is impossible \nfor remote replication of the encrypted information caus-\ning WAN capacity issues. \nFibre-Channel Tape Library\nConnectivity\nStorage Array\nEncrypted Data\nUnencrypted \nData\nServers\n FIGURE 33.2 Coverage for application-based encryption. \n" }, { "page_number": 617, "text": "PART | V Storage Security\n584\n Host Level \n Encrypting at the host level provides very similar benefits \nand tradeoffs to application-based encryption. At the host \nlevel, there are still opportunities to classify the data, but \non a less granular basis; encryption can be performed at \nthe file level for all applications running on the host (as \nshown in Figure 33.3 ). However, there are options for a \nhost-based adapter or software to provide encryption of \nany data leaving the host as files, blocks, or objects. As \nwith application-level implementations, the operating \nsystem must still provide user authentication and authori-\nzation to prevent against host-level attacks. If these strong \naccess controls are absent, host-level encryption will pro-\nvide no additional security benefit (aside from protection \nagainst loss or theft of media). If implemented correctly \nand integrated with the encryption solution, they can pro-\nvide some process authorization granularity, managing \nwhich users should be allowed to view plaintext data. \n At the host level, encryption can be done in software, \nusing CPU resources to perform the actual encryption \nand storing the keys in memory, or offloaded to spe-\ncialized hardware. Offload involves use of an HBA or \nan accelerator card resident in the host to perform the \nactual encryption of the data. In the case of the HBA the \nencryption can be performed in-band and is dedicated \nto the particular transport connection from the host, that \nis, Fibre Channel. For an accelerator card approach, the \nencryption is done as a look-aside operation independent \nof the transport. This provides flexibility for host con-\nnectivity but increases the memory and I/O bus load in \nthe system. In either case the host software would control \nthe connection to the key manager and management of \nthe keys. \n There may be a need in the enterprise for the host-\nbased encryption solution to support multiple operating \nsystems, allowing for interoperability across systems or \nconsistency in the management domain, something to \nconsider when evaluating solutions. 
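 As a concrete illustration of the software path, the sketch below encrypts a file with AES-256-GCM before it is written to SAN-attached storage and decrypts it on read. It uses the third-party Python cryptography package; the file names are placeholders, and in a real deployment the key would be obtained from (and escrowed by) a central key manager rather than generated on the host. Because GCM is an authenticated mode, decryption also fails if the stored ciphertext has been tampered with.

"""Minimal sketch of software-based, host-level encryption: a file is
encrypted with AES-256-GCM before being written to SAN-backed storage.
Uses the third-party 'cryptography' package; file paths are placeholders,
and the key would normally come from a central key manager."""

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_file(plain_path: str, cipher_path: str, key: bytes) -> None:
    data = open(plain_path, "rb").read()
    nonce = os.urandom(12)               # 96-bit nonce; never reuse with a key
    ciphertext = AESGCM(key).encrypt(nonce, data, None)
    with open(cipher_path, "wb") as out:
        out.write(nonce + ciphertext)    # store nonce alongside the ciphertext

def decrypt_file(cipher_path: str, key: bytes) -> bytes:
    blob = open(cipher_path, "rb").read()
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)   # from the key manager in practice
    encrypt_file("payroll.dat", "/san_volume/payroll.enc", key)
    assert decrypt_file("/san_volume/payroll.enc", key) == open("payroll.dat", "rb").read()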
In addition, when \nencryption is implemented at the host level, there is the \nflexibility of being storage and array independent, allow-\ning for support of legacy storage with no new hardware \nneeded. Host-based encryption does present a challenge \nwhen coupled with storage-based functionality, that is, \nreplication. If replication is employed underneath the \nhost encryption level the host implementation must have \nthe ability to track replicas and associate encryption \nkeys, eliminating the need for users to manually man-\nage the replication and encryption technology. As host \nencryption supplies encrypted data to the array, remote \nreplication would transmit encrypted, uncompressible \ndata. This would severely impact WAN performance. \n As with application-based encryption, ediscovery \nsolutions in the enterprise pose additional complexities. \nEncryption at the host level will expose only encrypted \ninformation to other hosts and devices in the stack, \nintroducing the same challenges with analysis as those \ndescribed in the prior section. \n As encryption is performed at the host level, the data \ncan be of variable record length. Similar to the applica-\ntion-based approach, the encryption solution can add \ninformation to the encryption payload to allow for a dig-\nital signature or cryptographic authentication. This would \nEncrypted Data\nFibre-Channel Tape Library\nConnectivity\nStorage Array\nUnencrypted Data\nServers\n FIGURE 33.3 Coverage for host-based encryption. \n" }, { "page_number": 618, "text": "Chapter | 33 SAN Security\n585\nprevent a “ man-in-the-middle ” from substituting bad \npackets for the good encrypted packets from the host. \n Network Level \n If the threats in the enterprise are not at the server, oper-\nating system, or application level, but instead at the net-\nwork or storage level, then a network-based appliance \napproach for encryption may work best. This approach \nis operating system independent and can be applied to \nfile, block, tape, Fibre Channel, iSCSI, or NAS data. \nEncryption and key management are handled entirely in \nhardware and run at wire speed for the connection. The \nappliance presents an “ unencrypted side ” and “ encrypted \nside ” to the network. Encryption can be designated on a \nper block, file or tape basis and the keys maintained for \nthe life of the data. Appliances available today are typi-\ncally FIPS 140-2 level 3 validated . \n There are two implementations for a network-based \nappliance design: store-and-forward or transparent. The \nstore-and-forward design appears as storage to the server \nand a server to the storage, and supports iSCSI, Fibre \nChannel, SAN, NAS, and tape. An I/O operation comes \nto the appliance, is terminated, the data encrypted, and \nthen forwarded to the destination storage device. This \napproach adds latency and as a result, some form of \n “ cut-through “ ideally needs to be offered to minimize \nthe impact of the device for nonencrypted traffic. In addi-\ntion, to appear as both server and storage, the store-and-\nforward appliance either needs to spoof the identities of \nthe attached devices or rely on robust security practices \nto counteract the attempts to circumvent the appliance. \nWhile there may be a latency penalty for encrypting data \nthrough the appliance, the store-and-forward-based design \nhas the benefit of allowing the attached storage devices to \nbe rekeyed in the background. 
This is performed with no \ndisruption to host operations as all I/O operations to the \nstorage are handled independently of the host. There may \nstill be some performance impact to the rekeying process, \ndepending on the I/O load on the encryptor. \n The transparent approach provides a flow-through \nmodel for the data being encrypted, supporting Fibre \nChannel SAN and tape. The appliance inspects SCSI \nheaders as data flows through the appliance and encrypts \nonly the data payloads that match preset source/destina-\ntion criteria in the appliance configuration. The latency \nassociated with this approach is minimal. The transpar-\nent design does, however, have a drawback when the \nencrypted data needs to be rekeyed. Unlike the store-\nand-forward design, the device is essentially transparent \nin the data flow, requiring the host to perform the reads \nand writes required in rekeying the encrypted data. This \nprocess can be done by a separate host agent and could \nbe performed while normal operations are in process. \n For block-based implementations, the size of the \nencrypted data cannot increase. This means no additional \ninformation can be added to the encrypted payload (for \nexample, a digital signature). This is not true for file or \ntape-based encryption where the record information may \nbe variable. As noted in the discussion on standards, the \nIEEE is working to provide standards for encrypting \nblock data at rest, in IEEE P1619 . \n There may be a need in the enterprise for the encryp-\ntion to support multiple operating systems, allowing for \ninteroperability across systems or consistency in the \nmanagement domain. In addition, when encryption is \nimplemented at the network level, there is the flexibil-\nity of being storage- and array-independent, allowing \nfor support of legacy storage — at the cost of adding new \nhardware. Hardware in this case is added in increments \nof ports, typically two at a time, adding to the power, \npackage, and cooling issues currently facing enterprises \ntoday. In addition, adding appliances in these increments \ncan add complications in managing additional devices in \nthe enterprise. Network-level encryption does present a \nchallenge when coupled with storage-based functionality \nsuch as replication. If replication is employed underneath \nencryption at the network level, the implementation \nmust have the ability to track replicas and associate \nencryption keys, eliminating the need for users to manu-\nally manage the replication and encryption technology. \nAs network-level encryption supplies encrypted data to \nthe array, remote replication would transmit encrypted, \nuncompressible data. This would severely impact WAN \nperformance. \n There are also implementations moving to use data \nintegrity features as part of the protocols. Encryption in \nthe network level (see Figure 33.4 ) would encrypt both \nthe data and the data integrity, resulting in mismatches at \nthis level of checking performed at the arrays. \n Network-level encryption doesn’t take into account \nthe impact on replicated data. Any locally replicated \ninformation at the storage layer, that is, a clone, does \nnot have visibility into the network device management \nand the keys and the network device does not have vis-\nibility into the replication process. Key management can \nbecome more complex and more manual. 
In addition, \ncompression in the WAN is impossible for remote repli-\ncation of the encrypted information causing WAN capac-\nity issues. \n" }, { "page_number": 619, "text": "PART | V Storage Security\n586\n Device Level \n Encryption at the device level — array, disk, or tape — is \na sufficient method of protecting sensitive data residing \non storage media, which is a primary security risk many \norganizations are seeking to address. All data written to \nthe device would be encrypted and stored as such and \nthen decrypted when read from the device. Encryption \nat this level would be application and host independ-\nent and can be transport independent as well. When \naddressing media theft, the granularity for encryption, \nand keys, can be at the disk or tape level. As demon-\nstrated in Figure 33.5 , exposure for unencrypted data is \nincreased as compared to the previous implementation \nexamples. \n Array-Level Encryption \n There are a number of design points for encryption in the \narray, that is, at the disk or controller level. Design con-\nsiderations for encryption include the interfaces to the \narray, software support, performance, FIPS validation, \nkey management, and encryption object granularity to \nname a few. The intent is to have the encryption imple-\nmentation transparent to the hosts attached while protect-\ning the removable media. The connected hosts may not \nFibre-Channel Tape Library\nEncryption\nAppliance\nCluster\nConnectivity\nServers\nUnencrypted Data\nEncrypted Data\nStorage Array\n FIGURE 33.4 Coverage for network-based encryption. \nEncrypted Data\nUnencrypted\nData\nServers\nConnectivity\nFibre Channel Tape Library\nStorage Array\n FIGURE 33.5 Coverage for device-based encryption. \n" }, { "page_number": 620, "text": "Chapter | 33 SAN Security\n587\nbe knowledgeable of the encryption implementation but \nmay be with respect to management and performance. \nAll aspects of the design must be considered. \n One possible approach is to implement the encryp-\ntion in the disk drive, at the back end of the array. Some \npoints to consider: \n ● As encryption is on a per-drive basis, the computes \nrequired are included in the drive enclosure, allowing \nfor a scalable solution, adding encryption with every \nunit. The downside to this is cost to the functionality \nthat is added with every unit. So while performance \nscales, so can cost. \n ● Customers might be unable to verify that encryption \nis enabled and functioning on the array, because data \nis always plaintext when it is external to the disk \ndrive. \n ● Any approach to encryption at this level would \nalso require interoperability of the encryption \nimplementation across drive vendors to maintain \nflexibility and customer choice. \n ● Bulk drive encryption would not provide key \ngranularity at the LUN/device level, which in many \ncases would eliminate the possibility of erasing \nspecific confidential projects via key deletion. \n ● Last, as driven by the Trusted Computing Group, \nencryption at this level may follow a different path \nfor validation, an alternative to FIPS 140 yet to \nbe developed. Without a standard to evaluate it is \nimpossible to understand the disk drive encryption \nvalidation proposal. \n Another approach might be to implement encryption \nin the I/O controller connected to the disk drives. 
Some points to consider for this potential implementation are:

● Encryption is at the interface level and must support full wire speed, rather than the single-drive interface speed of the drive-level approach.
● The cost model would be based on a single controller versus tens of drives connected to a single controller.
● The controller approach has the ability to perform encryption at the I/O level, allowing the granularity for key management to be at the LUN or disk level. This approach allows for future support of LUN-based erasure and logical data management.
● The controller approach is drive independent, not relying on any specific vendor or interface, allowing all standard tools and failure analysis to be used.
● In supporting encryption at the controller level, the crypto boundary can be well defined, allowing for FIPS 140-2 validation.
● Encryption and key control would be separate from the disk drive containing the encrypted data. This allows the customer to validate that the encryption functionality is working and not be concerned about keys leaving with a removed disk drive.

An alternative to encryption for protection against media theft is data erasure. It addresses the same primary use case, protection of disk media containing sensitive data, and is available today as erasure services (for removed drives) and software (for in-frame erasure). Erasure overwrites data multiple times, in accordance with Department of Defense specification 5220.22-M, removing the data from the media. One consideration is that a minority of drives are not erasable for mechanical reasons.

Tape Encryption

As part of normal operations, data is frequently written from storage devices to tape for backup/data protection or third-party use. Data on tape cartridges is susceptible to theft or loss because of the small size of each cartridge and the sheer number of tapes to track during normal backup operations. To best protect the data on tape against unintended or unauthorized viewing, it can be encrypted. There are several approaches to encrypting tape as part of the backup operation:

● Reading encrypted data from application/disk and writing it as encrypted data to tape
● Reading unencrypted data from application/disk and encrypting as part of the backup application
● Encrypting any/all data sent to tape via an encryption appliance in the network
● Encrypting any/all data written to the tape via an encrypting tape library or tape device

Tape encryption also presents key management challenges. Tapes may be stored for an extended period of time before an attempt is made to recover information. During the normal process of managing encrypted data, the application may have rekeyed the data on disk, updating all data on the disk to use a new key. This would leave the application with active data using one key and data on tape using an older key. The application must therefore be able to manage keys for the lifetime of the data, regardless of where the data is stored. The following are tape encryption deployment options.

Application Level

Backup is typically another operation running on a host as a peer to the encrypting application. Any peer application or process will read data from the storage array as encrypted data.
This allows the backup proc-\ness to write already encrypted data to tape without hav-\ning to perform the encryption itself. It will, however, \nprevent data compression during the backup process, \nas encrypted data is not compressible. Because typical \ncompression ratios reduce data volumes anywhere from \n2:1 to 4:1, this will impact performance of the backup \nprocess if a large amount of bulk data is encrypted. \n Applications providing encryption can also provide \naccess for authorized peer applications to read data \nin encrypted or unencrypted form. This would allow a \nbackup application to read data in unencrypted form and \nallow for compression followed by encryption to be per-\nformed as part of the backup process. \n Operating System/Host \n Backup is another process on the host when using host-\nbased encryption. The encryption process in the host \noperating system has the option of allowing the backup \nprocess to read data in encrypted or unencrypted form. \nIf the authorization module determines that the backup \nprocess can read plaintext data, backup will receive \ndecrypted data to be sent to tape. Encryption will also \nneed to be performed by the backup application to allow \nfor writing secure tapes. The backup process could take \nadvantage of compression in this data flow. If the backup \nprocess is not allowed to view decrypted data, it will read \nencrypted data from disk and write it as such to tape. \nAs in the application-based approach, compression may \nnot be able to be utilized on this encrypted data, creat-\ning potential performance issues. In addition, the encryp-\ntion engine for the host will have to maintain the keys for \nthe lifetime of the data to ensure that decryption can take \nplace in the event a restore from tape is needed. \n In the Network \n If an encryption appliance is placed in the network, \nbackup can be handled in one of two ways. If backup is \nvolume based, any data read from the storage array may \nalready be encrypted. The backup application will read \nthe encrypted data and write it directly to tape. In this \nscenario, there would be no benefit of compression in the \ndata path. If the backup is file or incremental based, the \nbackup process would read the data through the appli-\nance, decrypting it in the process, and could then write \nthe data to tape. To provide encryption, a tape encryp-\ntion appliance would be positioned in front of the tape \ndevice, compressing and encrypting the data as it is writ-\nten to tape. The tape encryption appliance would manage \nthe keys for the lifetime of the tape. \n At the Tape Library/Drive \n Data can be encrypted at the tape drive level, independ-\nent of the backup process and application software. All \nencryption is performed at the device, or library, when \ndata is written to the tape and decryption performed at the \ndrive when data is read from the tape. The backup applica-\ntion deals with nothing but plaintext data. The tape drive \nor library can be the management interface to the key \nmanager, requesting generation of keys for new tapes writ-\nten and retrieval of keys for each tape read. Association \nof keys to tapes is managed at the key manager appli-\nance. In some cases the key manager can be integrated \nto work cooperatively with a volume pool policy defined \nwith backup application. 
Jobs directed to use tapes in a \npool associated with this policy begin with a request by \nthe drive or library for an encryption key only when the \nbackup or restore job uses tapes in this volume pool. \n 17. CONCLUSION \n Security is a complex and constantly evolving practice in \nthe IT industry. Companies must recognize that threats \nto information infrastructures require vigilance on the \npart of IT managers and the vendors they rely on. \n The management, integrity, and availability of your \ndata should be your first priority. As this analysis has \nshown, basic security best practices can be implemented \nto greatly increase the security of your Fibre Channel \nSAN and the data it contains. The available avenues of \nattack are minimized by carefully controlling access to \nresources, both physically and logically. \n While many storage vendors are making extensive \ninvestments in the security of their SAN products, they \nalso recognize that it is impossible to predict every pos-\nsible current and future combination of threats that might \nimpact an IT environment. Ensure that your vendor con-\nstantly monitor changes and advances in security threats \nand technology, and updates its products with new fea-\ntures and functionality to address any issues that might \nimpact the data on your SAN. \n Encryption is a tool that can be used to protect the \nconfidentiality of the information in the enterprise. To \nunderstand if and how an encryption solution should be \ndeployed, administrators need to understand and assess \nthe risks of unauthorized access and disclosure at each \npoint of the information flow. They must also understand \nhow deployment of encryption technology may add risk \nto other areas of the business, including complexity added \nto management, and risks to availability of encrypted \ndata to authorized users. Data unavailability can come \n" }, { "page_number": 622, "text": "Chapter | 33 SAN Security\n589\n TABLE 33.2 Summary of encryption approaches \n \n Encryption \n Key Management \n Backup \n Issues \n Risks \nAddressed \n Application \n Typically done \nin software but \ncan be done in \nhardware. \n Typically stored \nin memory or \nfile. Coordination \nof keys across \napplications \npresents challenges \nto sharing \ninformation. Needs \nexternal appliance \nto meet FIPS 140-2 \nLevel 3 \n Peer process to \nthe application \nand will back up \nencrypted data. \nNo compression. \nLifetime key \nmanagement \nchallenges. \n Encryption can be host \nsystem intensive and \nis a per-application \nprocess. If more than \none application is used \non a host, sharing of \ninformation can be an \nissue. Storing keys for \nlifetime of data can \nalso be an issue for \napplication upgrades. \nCan impact ediscovery. \n Protects against \noperating system \nand network \nattacks as well as \nmedia theft. \n Host \n Typically done \nin software but \ncan be done in \nhardware. Can \nbe file or block \nbased. \n Typically stored \nin memory but \ncan have external \nappliance. \n Peer process will \nback up data and \nhost will need to re-\nencrypt. \n Encryption can be host \nsystem intensive. Storing \nkeys for lifetime of data \ncan also be an issue for \nOS upgrades (if external \nkey management facility \nis not used). Can impact \nediscovery. \n Protects against \nnetwork attacks \nas well as media \ntheft. \n Network \n Typically done in \nhardware. \n Managed for the \nlifetime of data in \nhardware. 
\n Can perform block-\nbased encryption to \ndisk or tape or file-\nbased encryption. \nCan also incorporate \ncompression for \ntape backup or \ncoordination for \nreplication. \n A single aggregation \npoint in the network \nfor encryption can be a \nperformance bottleneck. \n Protects against \nsome network \nattacks as well as \nmedia theft. \n Disk based \n Typically done \nin hardware but \ncan be done in \nsoftware. \n Can be done per \ndisk or LUN. Key \nmanagement can \nbe resident-to-array \nor leveraged from \nexternal appliance. \n Always presents \nunencrypted data \nexternal to disk. \n Handles very focused use \ncase. Largest exposure \nof encrypted data in an \nenterprise. \n Protects against \nmedia theft. \nfrom something as simple as key management, which is \nperhaps the most important factor to consider in imple-\nmenting an encryption solution. Encryption should be \nconsidered as part of a total security solution, but not the \nonly solution; administrators need to take advantage of \nprotection options at all levels of the information flow and \narchitecture. Two general issues that are present across \nencryption implementations are: \n ● The conversion of plaintext to ciphertext when \nencrypting data for the first time or ciphertext to \nciphertext when encrypting with a new key. Both \nare done as a data migration project, even when \nit is done in place. Host resources, impact to CPU \nutilization, and running applications must be \nconsidered. \n ● The replication of encrypted data across the WAN. \nEncryption, if done correctly, produces random, \nuncompressible data that will impact the utilization \nof remote connectivity. \n Table 33.2 summarizes the various deployment options. \n REFERENCES \n [1] J.C. Blaul , Storage Security , Wiley Publishing, Inc. , 2003 . \n [2] H. Dwivedi , Securing Storage: A Practical Guide to SAN and NAS \nSecurity , Pearson Education, Inc. , 2006 . \n [3] B. King, LUN Masking in a SAN , Aliso Viejo, California, October 8, \n2001. \n [ 4] J. McDonald, Security Considerations for Fibre Channel Storage \nArea Networks , Hopkinton, Massachusetts, June 5, 2005. \n [5 ] C. McGowan, Approaches for Encryption of Data-at-Rest in the \nEnterprise , Hopkinton, Massachusetts, September 10, 2008. \n" }, { "page_number": 623, "text": "This page intentionally left blank\n" }, { "page_number": 624, "text": "591\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Storage Area Networking Security \nDevices \n Robert Rounsavall \n Terremark Worldwide, Inc. \n Chapter 34 \n Storage area networking (SAN) devices have become a \ncritical IT component of almost every business today. The \nupside and intended consequences of using a SAN are to \nconsolidate corporate data as well as reduce cost, com-\nplexity, and risks. The tradeoff and downside to imple-\nmenting SAN technology are that the risks of large-scale \ndata loss are higher in terms of both cost and reputation. \nWith the rapid adoption of virtualization, SANs now \nhouse more than just data; they house entire virtual serv-\ners and huge clusters of servers in “ enterprise clouds. ” 1 \nIn addition to all the technical, management, deployment, \nand protection challenges, a SAN comes with a full range \nof legal regulations such as PCI, HIPAA, SOX, GLBA, \nSB1386, and many others. 
Companies keep their infor-\nmational “ crown jewels ” on their SANs but in most cases \ndo not understand all the architecture issues and risks \ninvolved, which can cost an organization huge losses. \nThis chapter covers all the issues and security concerns \nrelated to storage area network security. \n 1. WHAT IS A SAN? \n The Storage Network Industry Association (SNIA) 2 \ndefines a SAN as a data storage system consisting of \nvarious storage elements, storage devices, computer sys-\ntems, and/or appliances, plus all the control software, all \ncommunicating in efficient harmony over a network. Put \nin simple terms, a SAN is a specialized, high-speed net-\nwork attaching servers and storage devices and, for this \nreason, it is sometimes referred to as “ the network behind \nthe servers. ” A SAN allows “ any-to-any ” connections \nacross the network, using interconnected elements such \nas routers, gateways, hubs, switches, and directors. It \neliminates the traditional dedicated connection between a \nserver and storage as well as the concept that the server \neffectively “ owns and manages ” the storage devices. It \nalso eliminates any restriction to the amount of data that \na server can access, currently limited by the number of \nstorage devices attached to the individual server. Instead, \na SAN introduces the flexibility of networking to enable \none server or many heterogeneous servers to share a \ncommon storage utility, which may comprise many stor-\nage devices, including disk, tape, and optical storage. \nAdditionally, the storage utility may be located far from \nthe servers that use it. \n The SAN can be viewed as an extension to the storage \nbus concept, which enables storage devices and servers \nto be interconnected using similar elements to those used \nin local area networks (LANs) and wide area networks \n(WANs). SANs can be interconnected with routers, hubs, \nswitches, directors, and gateways. A SAN can also be \nshared between servers and/or dedicated to one server. It \ncan be local or extended over geographical distances. \n 2. SAN DEPLOYMENT JUSTIFICATIONS \n Perhaps a main reason SANs have emerged as the leading \nadvanced storage option is because they can often allevi-\nate many if not all the data storage “ pain points ” of IT \nmanagers. 3 For quite some time IT managers have been \nin a predicament in which some servers, such as data-\nbase servers, run out of hard disk space rather quickly, \nwhereas other servers, such as application servers, tend to \nnot need a whole lot of disk space and usually have stor-\nage to spare. When a SAN is implemented, the storage \ncan be spread throughout servers on an as-needed basis. \n 1 The Enterprise Cloud by Terremark, www.theenterprisecloud.com . \n 2 Storage Network Industry Association, www.snia.org/home . \n 3 SAN justifi cations: http://voicendata.ciol.com/content/enterprise_\nzone/105041303.asp . \n" }, { "page_number": 625, "text": "PART | V Storage Security\n592\n The following are further justifications and benefits for \nimplementing a storage area network: \n ● They allow for more manageable, scalable, and \nefficient deployment of mission-critical data. \n ● SAN designs can protect resource investments from \nunexpected turns in the economic environment and \nchanges in market adoption of new technology. \n ● SANs help with the difficulty of managing large, \ndisparate islands of storage from multiple physical \nand virtual locations. 
\n ● SANs reduce the complexity of maintaining \nscheduled backups for multiple systems and difficulty \nin preparing for unscheduled system outages. \n ● The inability to share storage resources and achieve \nefficient levels of subsystem utilization is avoided. \n ● SANs help address the issue of a shortage of \nqualified storage professionals to manage storage \nresources effectively. \n ● SANs help us understand how to implement the \nplethora of storage technology alternatives, including \nappropriate deployment of Fibre Channel as well as \nInternet small computer systems interface (iSCSI), \nFibre Channel over IP (FCIP), and InfiniBand. \n ● SANs allow us to work with restricted budgets and \nincreasing costs of deploying and maintaining them, \ndespite decreasing prices for physical storage in \nterms of average street price per terabyte. \n In addition to all these benefits, the true advantage of \nimplementing a SAN is that it enables the management of \nhuge quantities of email and other business-critical data \nsuch as that created by many enterprise applications, such \nas customer relationship management (CRM), enterprise \nresource planning (ERP), and others. The popularity of \nthese enterprise applications, regulatory compliance, and \nother audit requirements have resulted in an explosion \nof information and data that have become the lifeblood of \nthese organizations, greatly elevating the importance of a \nsound storage strategy. Selecting a unified architecture \nthat integrates the appropriate technologies to meet user \nrequirements across a range of applications is central to \nensuring storage support for mission-critical applications. \nThen matching technologies to user demands allows for \noptimized storage architecture, providing the best use of \ncapital and IT resources. \n A large number of enterprises have already imple-\nmented production SANs, and many industry analysts \nhave researched the actual benefits of these implemen-\ntations. A Gartner 4 study of large enterprise data center \nmanagers shows that 64% of those surveyed were either \nrunning or deploying a SAN. Another study by the \nAberdeen Group cites that nearly 60% of organizations \nthat have SANs installed have two or more separate \nSANs. The study also states that 80% of those surveyed \nfelt that they had satisfactorily achieved their main goals \nfor implementing a SAN. Across the board, all vendor \ncase studies and all industry analyst investigations have \nfound the following core benefits of SAN implementation \ncompared to a direct attached storage (DAS) environment: \n ● Ease of management \n ● Increased subsystem utilization \n ● Reduction in backup expense \n ● Lower Total Cost of Ownership (TCO) \n 3. THE CRITICAL REASONS FOR \nSAN SECURITY \n SAN security is important because there is more con-\ncentrated, centralized, high-value data at risk than in \nnormal distributed servers with built-in, smaller-scale \nstorage solutions. On a SAN you have data from multiple \ndevices and multiple parts of the network shared on one \nplatform. This typically fast-growing data can be consol-\nidated and centralized from locations all over the world. \nSANs also store more than just data; with the increasing \nacceptance of server virtualization, multiple OS images \nand the data they create are being retrieved from and \nenabled by SANs. \n Why Is SAN Security Important? 
\n Some large-scale security losses have occurred by inter-\ncepting information incrementally over time, but the vast \nmajority of breaches involve access or loss of data from \nthe corporate SAN. (For deeper insight into the numbers, \ncheck out the Data Loss Web site. This website tracks \nincidents and is a clearinghouse of data loss each month. 5 ) \n A wide range of adversaries can attack an organization \nsimply to access its SAN, which is where all the company \ndata rests. Common adversaries who will be looking to \naccess the organization’s main data store are: \n ● Financially motivated attackers and competitors \n ● Identity thieves \n ● Criminal gangs \n ● State-sponsored attackers \n ● Internal employees \n ● Curious business partners \n 4 Gartner, www.gartner.com . \n 5 Data Loss Database, http://datalossdb.org/ . \n" }, { "page_number": 626, "text": "Chapter | 34 Storage Area Networking Security Devices\n593\n If one or some of these perpetrators were to be suc-\ncessful in stealing or compromising the data in the SAN, \nand if news got around that your customer data had been \ncompromised, it could directly impact your organization \nmonetarily and cause significant losses in terms of: \n ● Reputation \n ● Time lost \n ● Forensics investigations \n ● Overtime for IT \n ● Business litigation \n ● Perhaps even a loss of competitive edge — for \nexample, if the organization’s proprietary \nmanufacturing process is found in the wild \n 4. SAN ARCHITECTURE AND \nCOMPONENTS \n In its simplest form, a SAN is a number of servers \nattached to a storage array using a switch. Figure 34.1 is \na diagram of all the components involved. \n SAN Switches \n Specialized switches called SAN switches are at the \nheart of a typical SAN. Switches provide capabilities \nto match the number of host SAN connections to the \nnumber of connections provided by the storage array. \nSwitches also provide path redundancy in the event of \na path failure from host server to switch or from storage \narray to switch. SAN switches can connect both serv-\ners and storage devices and thus provide the connection \npoints for the fabric of the SAN. For smaller SANs, the \nstandard SAN switches are called modular switches and \ncan typically support eight or 16 ports (though some \n32-port modular switches are beginning to emerge). \nSometimes modular switches are interconnected to create \na fault-tolerant fabric. For larger SAN fabrics, director-\nclass switches provide a larger port capacity (64 to 128 \nports per switch) and built-in fault tolerance. The type of \nSAN switch, its design features, and its port capacity all \ncontribute to its overall capacity, performance, and fault \ntolerance. The number of switches, types of switches, \nand manner in which the switches are interconnected \ndefine the topology of the fabric. \n Network Attached Storage (NAS) \n Network attached storage (NAS) is file-level data storage \nproviding data access to many different network clients. \nThe Business Continuity Planning (BCP) defined in this \ncategory address the security associated with file-level \nstorage systems/ecosystems. They cover the Network \nFile System (NFS), which is often used by Unix and \nLinux (and their derivatives ’ ) clients as well as SMB/\nCIFS, which is frequently used by Windows clients. \n Fabric \n When one or more SAN switches are connected, a fabric \nis created. The fabric is the actual network portion of the \nSAN. 
Special communications protocols such as Fibre Channel (FC), iSCSI, and Fibre Channel over Ethernet (FCoE) are used to communicate over the entire network. Multiple fabrics may be interconnected in a single SAN, and even a simple SAN is often composed of two fabrics for redundancy.

HBA and Controllers

Host servers and storage systems are connected to the SAN fabric through ports in the fabric. A host connects to a fabric port through a Host Bus Adapter (HBA), and the storage devices connect to fabric ports through their controllers. Each server may host numerous applications that require dedicated storage for application processing. Servers need not be homogeneous within the SAN environment.

Tape Library

A tape library is a storage device that is designed to hold, manage, label, and store data to tape. Its main benefit is its low cost per terabyte, but its slow random access relegates it to an archival device.

FIGURE 34.1 Simple SAN elements.

Protocols, Storage Formats, and Communications

The following protocols and file systems are other important components of a SAN.

Block-Based IP Storage (IP)

Block-based IP storage is implemented using protocols such as iSCSI, Internet Fibre Channel Protocol (iFCP), and FCIP to transmit SCSI commands over IP networks.

Secure iSCSI

Internet SCSI (iSCSI), described in IETF RFC 3720, is a connection-oriented command/response protocol that runs over TCP and is used to access disk, tape, and other devices.

Secure FCIP

Fibre Channel over TCP/IP (FCIP), defined in IETF RFC 3821, is a pure Fibre Channel encapsulation protocol. It allows islands of Fibre Channel storage area networks to be interconnected over IP-based networks to form a unified storage area network.

Fibre Channel Storage (FCS)

Fibre Channel is a gigabit-speed network technology used for block-based storage, and the Fibre Channel Protocol (FCP) is the interface protocol used to transmit SCSI over this network technology.

Secure FCP

Fibre Channel entities (host bus adapters or HBAs, switches, and storage) can contribute to the overall secure posture of a storage network by employing mechanisms such as filtering and authentication.

Secure Fibre Channel Storage Networks

A SAN is architected to attach remote computer storage devices (such as disk arrays, tape libraries, and optical jukeboxes) to servers in such a way that, to the operating system, the devices appear to be locally attached. These SANs are often based on a Fibre Channel fabric topology that utilizes the Fibre Channel Protocol (FCP).

SMB/CIFS

SMB/CIFS is a network protocol whose most common use is sharing files, especially in Microsoft operating system environments.

Network File System (NFS)

NFS is a client/server application that communicates with a remote procedure call (RPC)-based protocol. It enables file systems physically residing on one computer system or NAS device to be used by other computers in the network, appearing to users on the remote host as just another local disk.
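Several of these block and file protocols rely on simple shared-secret authentication. iSCSI, for example, commonly authenticates initiators with CHAP, in which the target issues a random challenge and the initiator proves knowledge of a shared secret without ever sending it. The sketch below is a minimal illustration of that exchange in Python; the MD5-based digest follows RFC 1994, while the function name, secret, and values are hypothetical.

# Minimal CHAP-style challenge/response, as used for iSCSI initiator
# authentication (illustrative only; not a full RFC 3720 login sequence).
import hashlib
import hmac
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # RFC 1994: response = MD5(identifier || secret || challenge)
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

identifier = 1
challenge = os.urandom(16)                # target issues a fresh random challenge
shared_secret = b"example-chap-secret"    # hypothetical shared secret

response = chap_response(identifier, shared_secret, challenge)   # computed by the initiator
expected = chap_response(identifier, shared_secret, challenge)   # recomputed by the target
print("initiator authenticated:", hmac.compare_digest(response, expected))

In practice, mutual CHAP (the initiator also challenging the target) and long, random secrets are preferred, because the exchange is only as strong as the secret behind it.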
\n Online Fixed Content \n An online fixed content system usually contains at least \nsome data subject to retention policies and a retention-\nmanaged storage system/ecosystem is commonly used \nfor such data. \n 5. SAN GENERAL THREATS AND ISSUES \n A SAN is a prime target of all attackers due to the gold-\nmine of information that can be attained by accessing it. \nHere we discuss the general threats and issues related \nto SANs. \n SAN Cost: A Deterrent to Attackers \n Unlike many network components such as servers, rout-\ners, and switches, SANs are quite expensive, which \ndoes raise the bar for attackers a little bit. There are not \nhuge numbers of people with SAN protocol expertise, \nand not too many people have a SAN in their home lab, \nunless they are a foreign government that has dedicated \nresources to researching and exploiting these types of vul-\nnerabilities. Why would anyone go to the trouble when \nit would be much easier to compromise the machines of \nthe people who manage the SANs or the servers that are \nthemselves connected to the SAN? \n The barrier to entry to directly attack the SAN is \nhigh; however, the ability to attack the management tools \nand administrators who access the SAN is not. Most are \nadministered via Web interfaces, software applications, \nor command-line interfaces. An attacker simply has to \ngain root or administrator access on those machines to be \nable to attack the SAN. \n Physical Level Threats, Issues, and \nRisk Mitigation \n There can be many physical risks involved in using a \nSAN. It is important to take them all into consideration \nwhen planning and investing in a storage area network. \n ● Locate the SAN in a secure datacenter \n ● Ensure that proper access controls are in place \n" }, { "page_number": 628, "text": "Chapter | 34 Storage Area Networking Security Devices\n595\n ● Cabinets, servers, and tape libraries come with locks; \nuse them \n ● Periodically audit the access control list \n ● Verify whether former employees can access the \nlocation where the SAN is located \n ● Perform physical penetration and social engineering \ntests on a regular basis \n Physical Environment \n The SAN must be located in an area with proper ventila-\ntion and cooling. Ensure that your datacenter has proper \ncooling and verify any service-level agreements with a \nthird-party provider with regard to power and cooling. \n Hardware Failure Considerations \n Ensure that the SAN is designed and constructed in such \na way that when a piece of hardware fails, it does not \ncause an outage. Schedule failover testing on a regular \nbasis during maintenance windows so that it will be com-\npleted on time. \n Secure Sensitive Data on Removable Media \nto Protect “ Externalized Data ” \n Many of the data breaches that fill the newspapers and \ncreate significant embarrassments for organizations are \neasily preventable and involve loss of externalized data \nsuch as backup media. To follow are some ideas to avoid \nunauthorized disclosure while data is in transit: \n ● Offsite backup tapes of sensitive or regulated data \nshould be encrypted as a general practice and must \nbe encrypted when leaving the direct control of the \norganization; encryption keys must be stored sepa-\nrately from data. \n ● Use only secure and bonded shippers if not \nencrypted. (Remember that duty-of-care contractual \nprovisions often contain a limitation of liability \nlimited to the bond value. The risk transfer value is \noften less than the data value.) 
\n ● Secure sensitive data transferred between \ndatacenters. \n ● Sensitive/regulated data transferred to and \nfrom remote datacenters must be encrypted in \nflight. \n ● Secure sensitive data in third-party datacenters. \n ● Sensitive/regulated data stored in third-party \ndatacenters must be encrypted prior to arrival \n(both in-flight and at-rest). \n ● Secure your data being used by ediscovery tools. \n Know Thy Network (or Storage Network) \n It is not only a best practice but critical that the SAN is \nwell documented. All assets must be known. All physi-\ncal and logical interfaces must be known. Create detailed \nphysical and logical diagrams of the SAN. Identify all \ninterfaces on the SAN gear. Many times people overlook \nthe network interfaces for the out-of-band management. \nSome vendors put a sticker with login and password phys-\nically on the server for the out-of-band management ports. \nEnsure that these are changed. Know what networks can \naccess the SAN and from where. Verify all entry points \nand exit points for data, especially sensitive data such as \nfinancial information or PII. If an auditor asks, it should \nbe simple to point to exactly where that data rests and \nwhere it goes on the network. \n Use Best Practices for Disaster Recovery \nand Backup \n Guidelines such as the NIST Special Publication 800-34 6 \noutline best practices for disaster recovery and backup. The \nseven steps for contingency planning are outlined below: \n 1. Develop the contingency planning policy statement. \nA formal department or agency policy provides \nthe authority and guidance necessary to develop an \neffective contingency plan. \n 2. Conduct the business impact analysis (BIA) . The \nBIA helps identify and prioritize critical IT systems \nand components. A template for developing the BIA \nis also provided to assist the user. \n 3. Identify preventive controls . Measures taken to \nreduce the effects of system disruptions can increase \nsystem availability and reduce contingency life-cycle \ncosts. \n 4. Develop recovery strategies . Thorough recovery \nstrategies ensure that the system may be recovered \nquickly and effectively following a disruption. \n 5. Develop an IT contingency plan . The contingency \nplan should contain detailed guidance and proce-\ndures for restoring a damaged system. \n 6. Plan testing, training, and exercises . Testing the plan \nidentifies planning gaps, whereas training prepares \nrecovery personnel for plan activation; both activi-\nties improve plan effectiveness and overall agency \npreparedness. \n 7. Plan maintenance . The plan should be a living docu-\nment that is updated regularly to remain current with \nsystem enhancements. \n 6 NIST Special Publication 800-34, http://csrc.nist.gov/publications/\nnistpubs/800-34/sp800-34.pdf . \n" }, { "page_number": 629, "text": "PART | V Storage Security\n596\n Logical Level Threats, Vulnerabilities, and \nRisk Mitigation \n Aside from the physical risks and issues with SANs, there \nare also many logical threats. A threat is defined as any \npotential danger to information or systems. These are the \nsame threats that exist in any network and they are also \napplicable to a storage network because Windows and \nUnix servers are used to access and manage the SAN. \nFor this reason, it is important to take a defense-in-depth \napproach to securing the SAN. \n Some of the threats that face a SAN are as follows: \n ● Internal threats (malicious). 
A malicious employee \ncould access the sensitive data in a SAN via manage-\nment interface or poorly secured servers. \n ● Internal threats (nonmalicious). Not following \nproper procedure such as using change management \ncould bring down a SAN. A misconfiguration could \nbring down a SAN. Poor planning for growth could \nlimit your SAN. \n ● Outside threats. An attacker could access your SAN \ndata or management interface by compromising a \nmanagement server, a workstation or laptop owned \nby an engineer, or other server that has access to \nthe SAN. \n The following parts of the chapter deal with protect-\ning against these threats. \n Begin with a Security Policy \n Having a corporate information security policy is essen-\ntial. 7 Companies should already have such policies, and \nthey should be periodically reviewed and updated. If \norganizations process credit cards for payment and are \nsubject to the Payment Card Industry (PCI) 8 standards, \nthey are mandated to have a security policy. Federal \nagencies subject to certification and accreditation under \nguidelines such as DIACAP 9 must also have security \npolicies. \n Is storage covered in the corporate security policy? \nSome considerations for storage security policies include \nthe following: \n ● Identification and classification of sensitive data \nsuch as PII, financial, trade secrets, and business-\ncritical data \n ● Data retention, destruction, deduplication, and \nsanitization \n ● User access and authorization \n Instrument the Network with Security Tools \n Many of the network security instrumentation devices \nsuch as IDS/IPS have become a commodity, required for \ncompliance and a minimum baseline for any IT network. \nThe problem with many of those tools is that they are \nsignature based and only provide alerts and packet cap-\ntures on the offending packet alerts. Adding tools such \nas full packet capture and network anomaly detection \nsystems can allow a corporation to see attacks that are \nnot yet known. They can also find attacks that bypass the \nIDS/IPSs and help prove to customers and government \nregulators whether or not the valuable data was actually \nstolen from the network. \n Intrusion Detection and Prevention \nSystems (IDS/IPS) \n Intrusion detection and prevention systems can detect \nand block attacks on a network. Intrusion prevention \nsystems are usually inline and can block attacks. A few \nwarnings about IPS devices: \n ● Their number-one goal is to not bring down the \nnetwork. \n ● Their number-two goal is to not block legitimate \ntraffic. \n Time after time attacks can slip by these systems. \nThey will block the low-hanging fruit, but a sophis-\nticated attacker can trivially bypass IDS/IPS devices. \nCommercial tools include TippingPoint, Sourcefire, ISS, \nand Fortinet. Open-source tools include Snort and Bro. \n Network Traffic Pattern Behavior Analysis \n Intrusion detection systems and vulnerability-scanning \nsystems are only able to detect well-known vulnerabili-\nties. A majority of enterprises have these systems as well \nas log aggregation systems but are unable to detect 0-day \nthreats and other previously compromised machines. The \nanswer to this problem is NetFlow data. NetFlow data \nshows all connections into and out of the network. There \nare commercial and open-source tools. Commercial tools \nare Arbor Networks and Mazu Networks. Open-source \ntools include nfdump and Argus. 
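The value of flow data for a storage network is easiest to see with a small, purely hypothetical example. Given flow records exported to CSV, for instance from nfdump, a few lines of Python can flag any host outside an approved list that has talked to the SAN management subnet; the file name, field order, subnet, and addresses below are assumptions for illustration only.

# Hypothetical sketch: flag unexpected hosts reaching the SAN management
# subnet, using flow records exported to CSV
# (fields assumed: source IP, destination IP, destination port, bytes).
import csv
import ipaddress

SAN_MGMT_NET = ipaddress.ip_network("10.10.50.0/24")    # assumed management subnet
APPROVED = {"10.10.40.11", "10.10.40.12"}                # assumed admin workstations

with open("flows.csv", newline="") as flows:
    for src, dst, dport, nbytes in csv.reader(flows):
        if ipaddress.ip_address(dst) in SAN_MGMT_NET and src not in APPROVED:
            print(f"unexpected access: {src} -> {dst}:{dport} ({nbytes} bytes)")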
\n Full Network Traffic Capture and Replay \n Full packet capture tools allow security engineers to \nrecord and play back all the traffic on the network. This \n 7 Information Security Policy Made Easy, www.informationshield.\n 8 PCI Security Standards, https://www.pcisecuritystandards.org/ . \n 9 DIACAP Certifi cation and Accreditation standard, http://iase.disa.\nmil/ditscap/ditscap-to-diacap.html . \n" }, { "page_number": 630, "text": "Chapter | 34 Storage Area Networking Security Devices\n597\n allows for validation of IDS/IPS alerts and validation of \nitems that NetFlow or log data is showing. Commercial \ntools include Niksun, NetWitness, and NetScout. Open-\nsource tools include Wireshark and tcpdump. \n Secure Network and Management tools \n It is important to secure the network and management \ntools. If physical separation is not possible, then at a very \nminimum logical separation must occur. For example: \n ● Separate the management network with a firewall. \n ● Ensure user space and management interfaces are on \ndifferent subnets/VLANs. \n ● Use strong communication protocols such as SSH, \nSSL, and VPNs to connect to and communicate with \nthe management interfaces. \n ● Avoid using out-of-band modems if possible. If \nabsolutely necessary, use the callback feature on the \nmodems. \n ● Have a local technician or datacenter operators \nconnect the line only when remote dial-in access is \nneeded, and then disconnect when done. \n ● Log all external maintenance access. \n Restrict Remote Support \n Best practice is to not allow remote support; however, most \nSANs have a “ call home ” feature that allows them to call \nback to the manufacturer for support. Managed network \nand security services are commonplace. If remote access \nfor vendors is mandatory, take extreme care. Here are \nsome things that can help make access to the SAN safe: \n ● Disable the remote “ call home ” feature in the SAN \nuntil needed. \n ● Do not open a port in the firewall and give direct \nexternal access to the SAN management station. \n ● If outsourcing management of a device, ensure that \nthere is a VPN set up and verify that the data is \ntransmitted encrypted. \n ● On mission-critical systems, do not allow external \nconnections. Have internal engineers connect to the \nsystems and use a tool such as WebEx or GoToAssist \nto allow the vendor to view while the trusted \nengineer controls the mouse and keyboard. \n Attempt to Minimize User Error \n It is not uncommon for a misconfiguration to cause \na major outage. Not following proper procedure can \ncause major problems. Not all compromises are due to \nmalicious behaviors; some may be due to mistakes made \nby trusted personnel. \n Establish Proper Patch Management \nProcedures \n Corporations today are struggling to keep up with all \nthe vulnerabilities and patches for all the platforms they \nmanage. With all the different technologies and operat-\ning systems it can be a daunting task. Mission-critical \nstorage management gear and network gear cannot be \npatched on a whim whenever the administrator feels like \nit. There are Web sites dedicated to patch management \nsoftware. Microsoft Windows Software Update Services \n(WSUS) is a free tool that only works with Windows. \nOther commercial tools can assist with cross-platform \npatch management deployment and automation: \n ● Schedule updates. \n ● Live within the change window. \n ● Establish a rollback procedure. \n ● Test patches in a lab if at all possible. 
Use virtual \nservers if possible, to save cost. \n ● Purchase identical lab gear if possible. Many vendors \nwill sell “ nonproduction ” lab gear at more than 50% \ndiscount. This allows for test scenarios and patching \nin a nonproduction environment without rolling into \nproduction. \n ● After applying patches or firmware, validate to make \nsure that the equipment was actually correctly updated. \n Use Configuration Management Tools \n Many large organizations have invested large amounts of \nmoney in network and software configuration manage-\nment tools to manage hundreds or thousands of devices \naround the network. These tools store network device and \nsoftware configurations in a database format and allow \nfor robust configuration management capabilities. An \nexample is HP’s Network Automation System, 10 which \ncan do the following: \n ● Reduce costs by automating time-consuming manual \ncompliance checks and configuration tasks. \n ● Pass audit and compliance requirements easily with \nproactive policy enforcement and out-of-the-box \naudit and compliance reports (IT Infrastructure \nLibrary (ITIL), Cardholder Information Security \nProgram (CISP), HIPAA, SOX, GLBA, and others). \n ● Improve network security by recognizing and \nfixing security vulnerabilities before they affect the \nnetwork, using an integrated security alert service. \n 10 HP Network Automation System, https://h10078.www1.hp.com/cda/\nhpms/display/main/hpms_content.jsp?zn\u0003bto & cp\u00031-11-271-\n273_4000_100__ . \n" }, { "page_number": 631, "text": "PART | V Storage Security\n598\n ● Increase network stability and uptime by preventing \nthe inconsistencies and misconfigurations that are at \nthe root of most problems. \n ● Use process-powered automation to deliver \napplication integrations, which deliver full IT life-\ncycle workflow automation without scripting. \n ● Support SNMPv3 and IPv6, including dual-\nstack IPv4 and IPv6 support. HP Network \nAutomation supports both of these technologies \nto provide flexibility in your protocol strategy and \nimplementation. \n ● Use automated software image management to \ndeploy wide-scale image updates quickly with audit \nand rollback capabilities. \n Set Baseline Configurations \n If a commercial tool is not available, there are still steps \nthat can be taken. Use templates such as the ones pro-\nvided by the Center for Internet Security or the National \nSecurity Agency. They offer security templates for mul-\ntiple operating systems, software packages, and network \ndevices. They are free of charge and can be modified to \nfit the needs of the organization. In addition: \n ● Create a base configuration for all production \ndevices. \n ● Check with the vendor to see if they have baseline \nsecurity guides. Many of them do internally and will \nprovide them on request. \n ● Audit the baseline configurations. \n ● Script and automate as much as possible. \n Center for Internet Security 11 \n The Center for Internet Security (CIS) is a not-for-profit \norganization that helps enterprises reduce the risk of \nbusiness and ecommerce disruptions resulting from inad-\nequate technical security controls and provides enter-\nprises with resources for measuring information security \nstatus and making rational security investment decisions. 
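Whichever benchmark is used as a starting point, whether a CIS template, an NSA guide, or a vendor baseline, the "script and automate" advice above can begin very simply: pull the running configurations from production devices and diff them against the approved baselines. The sketch below is illustrative only; the directory layout and file naming are assumptions, not part of any particular product.

# Hypothetical sketch: report drift between approved baseline configurations
# and the configurations currently running on production devices.
import difflib
from pathlib import Path

BASELINE_DIR = Path("baselines")   # one approved config per device or device class
RUNNING_DIR = Path("running")      # configs pulled from the production devices

for running in sorted(RUNNING_DIR.glob("*.cfg")):
    baseline = BASELINE_DIR / running.name
    if not baseline.exists():
        print(f"{running.name}: no approved baseline on file")
        continue
    diff = list(difflib.unified_diff(
        baseline.read_text().splitlines(),
        running.read_text().splitlines(),
        fromfile=f"baseline/{running.name}",
        tofile=f"running/{running.name}",
        lineterm=""))
    if diff:
        print(f"{running.name}: drift detected ({len(diff)} lines of diff)")
    else:
        print(f"{running.name}: matches baseline")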
\n National Security Agency 12 \n NSA initiatives in enhancing software security cover both \nproprietary and open-source software, and we have suc-\ncessfully used both proprietary and open-source models in \nour research activities. NSA’s work to enhance the secu-\nrity of software is motivated by one simple consideration: \nUse our resources as efficiently as possible to give NSA’s \ncustomers the best possible security options in the most \nwidely employed products. The objective of the NSA \nresearch program is to develop technologic advances that \ncan be shared with the software development community \nthrough a variety of transfer mechanisms. The NSA does \nnot favor or promote any specific software product or busi-\nness model. Rather, it promotes enhanced security. \n Vulnerability Scanning \n PCI requirements include both internal and external \nvulnerability scanning. An area that is commonly over-\nlooked when performing vulnerability scans is the pro-\nprietary devices and appliances that manage the SAN \nand network. Many of these have Web interfaces and run \nWeb applications on board. \n Vulnerability-scanning considerations: \n ● Use the Change Management/Change Control process \nto schedule the scans. Even trained security \nprofessionals who are good at not causing network \nproblems sometimes cause network problems. \n ● Know exactly what will be scanned. \n ● Perform both internal and external vulnerability \nscans. \n ● Scan the Web application and appliances that \nmanage the SAN and the network. \n ● Use more than one tool to scan. \n ● Document results and define metrics to know \nwhether vulnerabilities are increasing or decreasing. \n ● Set up a scanning routine and scan regularly with \nupdated tools. \n System Hardening \n System hardening is an important part of SAN security. \nHardening includes all the SAN devices and any machines \nthat connect to it as well as management tools. There are \nmultiple organizations that provide hardening guides for \nfree that can be used as a baseline and modified to fit the \nneeds of the organization: \n ● Do not use shared accounts. If all engineers use the \nsame account, there is no way to determine who \nlogged in and when. \n ● Remove manufacturers ’ default passwords. \n ● If supported, use central authentication such as \nRADIUS. \n ● Use the principle of least privilege. Do not give all users \non the device administrative credentials unless they \nabsolutely need them. A user just working on storage \ndoes not need the ability to reconfigure the SAN switch. \n 11 Center for Internet Security, www.cisecurity.org . \n 12 National Security Agency security templates, www.nsa.gov/snac/\nindex.cfm . \n" }, { "page_number": 632, "text": "Chapter | 34 Storage Area Networking Security Devices\n599\n Management Tools \n It is common for management applications to have vulner-\nabilities that the vendor will refuse to fix or deny that they \nare vulnerabilities. They usually surface after a vulnerabil-\nity scan or penetration test. When vulnerabilities are found, \nthere are steps that can be taken to mitigate the risk: \n ● Contact the vendor regardless. The vendor needs to \nknow that there are vulnerabilities and they should \ncorrect them. \n ● Verify if they have a hardening guide or any steps \nthat can be taken to mitigate the risk. \n ● Physically or logically segregate the tools and apply \nstrict ACLs or firewall rules. \n ● Place it behind an intrusion prevention device. 
\n ● Place behind a Web application firewall, if a Web \napplication. \n ● Audit and log access very closely. \n ● Set up alerts for logins that occur outside normal hours. \n ● Use strong authentication if available. \n ● Review the logs. \n Separate Areas of the SAN \n In the world of security, a defense-in-depth strategy is \noften employed with an objective of aligning the security \nmeasures with the risks involved. This means that there \nmust be security controls implemented at each layer that \nmay create an exposure to the SAN system. Most organi-\nzations are motivated to protect sensitive (and business/\nmission-critical) data, which typically represents a small \nfraction of the total data. This narrow focus on the most \nimportant data can be leveraged as the starting point \nfor data classification and a way to prioritize protection \nactivities. The best way to be sure that there is a layered \napproach to security is to address each aspect of a SAN \none by one and determine the best strategy to implement \nphysical, logical, virtual, and access controls. \n Physical \n Segregating the production of some systems from other sys-\ntem classes is crucial to proper data classification and secu-\nrity. For example, if it is possible to physically segregate the \nquality assurance data from the research and development \ndata, there is a smaller likelihood of data leakage between \ndepartments and therefore out to the rest of the world. \n Logical \n When a SAN is implemented, segregating storage traffic \nfrom normal server traffic is quite important because there \nis no need for the data to travel on the same switches as \nyour end users browsing the Internet, for example. Logical \nUnit Numbers (LUN) Masking, Fibre Channel Zoning, \nand IP VLANs can assist in separating data. \n Virtual \n One of the most prevalent uses recently for storage area \nnetworks is the storing of full-blown virtual machines \nthat run from the SAN itself. With this newest of uses \nfor SANs, the movement of virtual servers from one \ndata store to another is something that is required in \nmany scenarios and one that should be studied to iden-\ntify potential risks. \n Penetration Testing \n Penetration testing like vulnerability scanning is becom-\ning a regulatory requirement. Now people can go to jail \nfor losing data and not complying with these regulations. \nPenetration-testing the SAN may be difficult due to the \nhigh cost of entry, as noted earlier. Most people don’t \nhave a SAN in their lab to practice pen testing. \n Environments with custom applications and devices \ncan be sensitive to heavy scans and attacks. Inexperienced \npeople could inadvertently bring down critical systems. \nThe security engineers who have experience working in \nthese environments choose tools depending on the envi-\nronment. They also tread lightly so that critical systems \nare not brought down. Boutique security firms might not \nhave $100k to purchase a SAN so that their professional \nservices personnel can do penetration tests on SANs. \nWith the lack of skilled SAN technicians currently in the \nfield, it is not likely that SAN engineers will be rapidly \nmoving into the security arena. Depending on the size \nof the organization, there are things that can be done to \nfacilitate successful penetration testing. An internal pen-\netration testing team does the following: \n ● Have personnel cross-train and certify on the SAN \nplatform in use. 
\n ● Provide the team access to the lab and establish a \nregular procedure to perform a pen test. \n ● Have a member of the SAN group as part of the \npen-test team. \n ● Follow practices such as the OWASP guide for \nWeb application testing and the OSSTMM for \npenetration-testing methodologies. \n OWASP \n The Open Web Application Security Project (OWASP; \n www.owasp.org ) is a worldwide free and open commu-\nnity focused on improving the security of application \nsoftware. Our mission is to make application security \n" }, { "page_number": 633, "text": "PART | V Storage Security\n600\n “ visible ” so that people and organizations can make \ninformed decisions about application security risks. \n OSSTMM \n The Open Source Security Testing Methodology Manual \n(OSSTMM; www.isecom.org/osstmm/ ) is a peer-reviewed \nmethodology for performing security tests and metrics. The \nOSSTMM test cases are divided into five channels (sec-\ntions), which collectively test information and data con-\ntrols, personnel security awareness levels, fraud and social \nengineering control levels, computer and telecommunica-\ntions networks, wireless devices, mobile devices, physical \nsecurity access controls, security processes, and physical \nlocations such as buildings, perimeters, and military bases. \nThe external penetration testing team does the following: \n ● Validates SAN testing experience through references \nand certification \n ● Avoids firms that do not have access to SAN storage \ngear \n ● Asks to see a sanitized report of a previous pene-\ntration test that included a SAN \n Whether an internal or external penetration-testing \ngroup, it is a good idea to belong to one of the profes-\nsional security associations in the area, such as the \nInformation Systems Security Association (ISSA) or \nInformation Systems Audit and Control Association \n(ISACA). \n ISSA \n ISSA ( www.issa.org ) is a not-for-profit, international \norganization of information security professionals and \npractitioners. It provides educational forums, publica-\ntions, and peer interaction opportunities that enhance the \nknowledge, skill, and professional growth of its members. \n ISACA \n ISACA ( www.isaca.org ) got its start in 1967 when a small \ngroup of individuals with similar jobs — auditing controls \nin the computer systems that were becoming increasingly \ncritical to the operations of their organizations — sat down \nto discuss the need for a centralized source of information \nand guidance in the field. In 1969 the group formalized, \nincorporating as the EDP Auditors Association. In 1976 \nthe association formed an education foundation to under-\ntake large-scale research efforts to expand the knowledge \nand value of the IT governance and control field. \n Encryption \n Encryption is the conversion of data into a form called \n ciphertext that cannot be easily understood by unauthorized \npeople. Decryption is the process of converting \nencrypted data back into its original form so that it can \nbe understood. \n Confidentiality \n Confidentiality is the property whereby information is \nnot disclosed to unauthorized parties. Secrecy is a term \nthat is often used synonymously with confidentiality. \nConfidentiality is achieved using encryption to render the \ninformation unintelligible except by authorized entities. \n The information may become intelligible again by \nusing decryption. 
For encryption to provide confidentiality, the cryptographic algorithm and mode of operation must be designed and implemented so that an unauthorized party cannot determine the secret or private keys associated with the encryption or derive the plaintext directly without deriving any keys. 13

Data encryption can save a company time, money, and embarrassment. There are countless examples of lost and stolen media, especially hard drives and tape drives. A misplacement or theft can cause major headaches for an organization. Take, for example, the University of Miami 14 :

A private off-site storage company used by the University of Miami has notified the University that a container carrying computer back-up tapes of patient information was stolen. The tapes were in a transport case that was stolen from a vehicle contracted by the storage company on March 17 in downtown Coral Gables, the company reported. Law enforcement is investigating the incident as one of a series of petty thefts in the area.

Shortly after learning of the incident, the University determined it would be unlikely that a thief would be able to access the back-up tapes because of the complex and proprietary format in which they were written. Even so, the University engaged leading computer security experts at Terremark Worldwide 15 to independently ascertain the feasibility of accessing and extracting data from a similar set of back-up tapes.

Anyone who has been a patient of a University of Miami physician or visited a UM facility since January 1, 1999, is likely included on the tapes. The data included names, addresses, Social Security numbers, or health information. The University will be notifying by mail the 47,000 patients whose data may have included credit card or other financial information regarding bill payment.

13 NIST Special Publication 800-57, Recommendation for Key Management Part 1, http://csrc.nist.gov/publications/nistpubs/800-57/SP800-57-Part1.pdf .
14 Data Loss Notification from the University of Miami, www6.miami.edu/dataincident/index.htm .
15 Terremark Worldwide, www.terremark.com

Even though it was unlikely that the person who stole the tapes could access or read the data, the university still had to notify 47,000 people that their data may have been compromised. Had the tapes been encrypted, they would not have been in the news at all, and no one would have had to worry about personal data being compromised.

Deciding What to Encrypt

Deciding what type of data to encrypt and how best to do it can be a challenge; it depends on the type of data that is stored on the SAN. Encrypt backup tapes as well.

There are two main types of encryption to focus on: data in transit and data at rest. SNIA put out a white paper called Encryption of Data At-Rest: Step-by-Step Checklist, which outlines nine steps for encrypting data at rest: 16

1. Understand confidentiality drivers.
2. Classify the data assets.
3. Inventory data assets.
4. Perform data flow analysis.
5. Determine the appropriate points of encryption.
6. Design encryption solution.
7. Begin data realignment.
8. Implement solution.
9. Activate encryption.

Many of the vendors implement encryption in different ways.
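Underneath these differences, most implementations follow the same envelope pattern: each object (a LUN, file, or tape) is encrypted with its own data key, and that data key is itself encrypted, or "wrapped," by a key-encryption key that lives in the key manager rather than on the media. The following minimal Python sketch illustrates the pattern only; the cryptography package, AES-GCM, and AES key wrap are assumptions for the example and do not describe any specific vendor's product.

# Illustrative envelope-encryption pattern: a per-object data key protects the
# data, and only a wrapped copy of that key is stored alongside the ciphertext.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = AESGCM.generate_key(bit_length=256)       # key-encryption key, held by the key manager
data_key = AESGCM.generate_key(bit_length=256)  # per-object data key

nonce = os.urandom(12)
record = b"example sensitive record"
ciphertext = AESGCM(data_key).encrypt(nonce, record, None)
wrapped_key = aes_key_wrap(kek, data_key)       # store this with the ciphertext

# Recovery requires the key manager to unwrap the data key first.
recovered_key = aes_key_unwrap(kek, wrapped_key)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == record

The design consequence is the one stressed throughout this chapter: losing the key-encryption key, or the mapping of wrapped keys to objects, makes the data unrecoverable, which is why key management dominates the planning effort.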
NIST SP 800-57 contains best practices for key management and information about various cryptographic ciphers. \n The following are the recommended minimum symmetric security levels, defined as bits of strength (not key size): \n ● 80 bits of security until 2010 (128-bit AES and 1024-bit RSA) \n ● 112 bits of security through 2030 (3DES, 128-bit AES, and 2048-bit RSA) \n ● 128 bits of security beyond 2030 (128-bit AES and 3072-bit RSA) \n Type of Encryption to Use \n The encryption used should rely on a strong algorithm that is publicly known. Algorithms such as AES, RSA, SHA, and Twofish are known and tested. All the aforementioned encryption algorithms have been tested and proven to be strong if properly implemented. Organizations should be wary of vendors claiming to have their own “ unknown ” encryption algorithm. Many times it is just data compression or a weak algorithm that the vendor wrote itself. A secret algorithm may sound good in theory, but well-known algorithms earn their reputation precisely because the thousands of mathematicians employed by the NSA spend years and enormous computing power trying to break them. \n Proving That Data Is Encrypted \n A well-architected encryption plan should be transparent to the end user of the data. The only way to know for sure that the data is encrypted is to verify the data. Data at rest can be verified using forensic tools such as dd for Unix or the free FTK 17 imager for Windows. Data in transit can be verified by network monitoring tools such as Wireshark. \n Turn on event logging for any encryption hardware or software. Make sure it logs when encryption is turned on or off. Have a documented way to verify that encryption was turned on while the sensitive data was on the system (see Figure 34.2 ). \n Encryption Challenges and Other Issues \n No method of defense is perfect. Human error and computer vulnerabilities do pose encryption challenges (see Figure 34.3 ). A large financial firm had personal information on its network, including 34,000 credit cards with names and account numbers. The network administrator had left the decryption key on the server. After targeting the server for a year and a half, the attacker was able to get the decryption key and was finally able to directly query the fully encrypted database and pull out 34,000 cards. \n Logging \n Logging is an important consideration when it comes to SAN security. There are all sorts of events that can be logged. When a security incident happens, having proper log information can mean the difference between solving the problem and not knowing whether your data was compromised. NIST has an excellent guide to security log management. The SANS Institute has a guide on the top five most essential log reports. \n There are multiple commercial vendors as well as open-source products for log management. Log management has evolved from standalone syslog servers to complex architectures for Security Event/Information Management. Acronyms used for these blend together as SEM, SIM, and SEIM. In addition to log data, they can take in data from IDSs, vulnerability assessment products, and many other security tools to centralize and speed up the analysis and \n 16 www.snia.org/forums/ssif/knowledge_center/white_papers . \n 17 Access Data Forensic Toolkit Imager, www.accessdata.com/downloads.html . \n" }, { "page_number": 635, "text": "PART | V Storage Security\n602\n processing of huge amounts of logs.
More of a difference is \nbeing made between Security Event Management and audit \nlogging. The former is geared toward looking at events of \ninterest on which to take action; the latter is geared to com-\npliance. In today’s legal and compliance environment an \nauditor will ask an enterprise to immediately provide logs \nfor a particular device for a time period such as the previ-\nous 90 days. With a solid log management infrastructure, \nthis request becomes trivial and a powerful tool to help \nsolve problems. NIST Special Publication 800-92 18 makes \nthe following recommendations: \n ● Organizations should establish policies and proce-\ndures for log management. \n ● Organizations should prioritize log management \nappropriately throughout the organization. \n ● Organizations should create and maintain a log \nmanagement infrastructure. \n ● Organizations should provide proper support for all \nstaff with log management responsibilities. \n ● Organizations should establish standard log \nmanagement operational processes. \n Policies and Procedures \n To establish and maintain successful log management \nactivities, an organization should develop standard proc-\nesses for performing log management. As part of the \nplanning process, an organization should define its log-\nging requirements and goals. \n Prioritize Log Management \n After an organization defines its requirements and goals \nfor the log management process, it should then priori-\ntize the requirements and goals based on the organiza-\ntion’s perceived reduction of risk and the expected time \nand resources needed to perform log management \nfunctions. \n Create and Maintain a Log Management \nInfrastructure \n A log management infrastructure consists of the hardware, \nsoftware, networks, and media used to generate, transmit, \nstore, analyze, and dispose of log data. Log management \n FIGURE 34.2 Notice the clear, legible text on the right. \n FIGURE 34.3 Notice Encrypted Data on the right-hand side. \n 18 NIST SP 800-92 http://csrc.nist.gov/publications/nistpubs/800-92/\nSP800-92.pdf . \n" }, { "page_number": 636, "text": "Chapter | 34 Storage Area Networking Security Devices\n603\n infrastructures typically perform several functions that \nsupport the analysis and security of log data. \n Provide Support for Staff With Log Management \nResponsibilities \n To ensure that log management for individual systems is \nperformed effectively throughout the organization, the \nadministrators of those systems should receive adequate \nsupport. \n Establish a Log Management Operational Process \n The major log management operational processes typi-\ncally include configuring log sources, performing log \nanalysis, initiating responses to identified events, and \nmanaging long-term storage. \n What Events Should Be Logged for SANs? \n For storage networks the same type of data should be \ncollected as for other network devices, with focus on \nthe storage management systems and any infrastructure \nthat supports the SAN, such as the switches and servers. \nAccording to the SANS Institute, the top five most essen-\ntial log reports 19 are as follows. \n Attempts to Gain Access Through Existing Accounts \n Failed authentication attempts can be an indication of \na malicious user or process attempting to gain network \naccess by performing password guessing. It can also be \nan indication that a local user account is attempting to \ngain a higher level of permissions to a system. 
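As a minimal sketch of this first report, the following counts failed authentications per account and source address and flags likely password guessing. It assumes OpenSSH-style syslog lines; the log path and the threshold of ten failures are invented for illustration, not prescribed by the SANS guidance.

```python
# Sketch of a "failed authentication attempts" report: count failures per
# (user, source) pair and flag likely password guessing. Assumes OpenSSH-style
# syslog lines such as "Failed password for invalid user admin from 10.0.0.5".
import re
from collections import Counter

FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def failed_login_report(log_path: str, threshold: int = 10) -> None:
    per_source = Counter()
    for line in open(log_path, errors="replace"):
        m = FAILED.search(line)
        if m:
            user, source = m.groups()
            per_source[(user, source)] += 1
    for (user, source), count in per_source.most_common():
        if count >= threshold:
            print(f"{count:5d} failures  user={user:<16} from={source}")

failed_login_report("/var/log/auth.log")   # hypothetical log location
```

In a SAN context the same idea applies to the storage management stations, fabric switches, and servers that support the SAN, whatever their native log format happens to be.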
\n Failed File or Resource Access Attempts \n Failed file or resource access attempts is a broad cate-\ngory that can impact many different job descriptions. \nIn short, failed access attempts are an indication that \nsomeone is attempting to gain access to either a nonex-\nistent resource or a resource to which they have not been \ngranted the correct permissions. \n Unauthorized Changes to Users, Groups and \nServices \n The modification of user and group accounts, as well \nas system services, can be an indication that a system \nhas become compromised. Clearly, modifications to all \nthree will occur legitimately in an evolving network, but \nthey warrant special attention because they can be a final \nindication that all other defenses have been breached and \nan intrusion has occurred. \n Systems Most Vulnerable to Attack \n As indicated in the original SANS Top 10 Critical \nVulnerabilities list as well as the current Top 20, one of \nthe most important steps you can take in securing your \nnetwork is to stay up to date on patches. In an ideal \nworld all systems would remain completely up to date \non the latest patches; time management, legacy software, \navailability of resources, and so on can result in a less \nthan ideal posture. A report that identifies the level of \ncompliance of each network resource can be extremely \nhelpful in setting priorities. \n Suspicious or Unauthorized Network \nTraffic Patterns \n Suspect traffic patterns can be described as unusual or \nunexpected traffic patterns on the local network. This not \nonly includes traffic entering the local network but traffic \nleaving the network as well. This report option requires \na certain level of familiarity with what is “ normal ” for \nthe local network. With this in mind, administrators need \nto be knowledgeable of local traffic patterns to make the \nbest use of these reports. With that said, there are some \ntypical traffic patterns that can be considered to be highly \nsuspect in nearly all environments. \n 6. CONCLUSION \n The financial and IT resource benefits of consolidating infor-\nmation into a storage area network are compelling, and our \ndependence on this technology will continue to grow as our \ndata storage needs grow exponentially. With this concentra-\ntion and consolidation of critical information come security \nchallenges and risks that must be recognized and appropri-\nately addressed. In this chapter we covered these risks as \nwell as the controls and processes that should be employed \nto protect the information stored on a SAN. Finally, we have \nemphasized why encryption of data at rest and in flight is \na critical protection method that must be employed by the \nprofessional SAN administrator. Our intention is for you to \nunderstand all these risks to your SAN and to use the meth-\nods and controls described here to prevent you or your com-\npany from becoming a data loss statistic. \n 19 SANS Institute, www.sans.org/free_resources.php . \n" }, { "page_number": 637, "text": "This page intentionally left blank\n" }, { "page_number": 638, "text": "605\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Risk Management \n Sokratis K. Katsikas \n University of Piraeus \n Chapter 35 \n Integrating security measures with the operational frame-\nwork of an organization is neither a trivial nor an easy \ntask. 
This explains to a large extent the low degree of \nsecurity that information systems operating in contempo-\nrary businesses and organizations enjoy. Some of the most \nimportant difficulties that security professionals face when \nconfronted with the task of introducing security measures \nin businesses and organizations are: \n ● The difficulty to justify the cost of the security \nmeasures \n ● The difficulty to establish communication between \ntechnical and administrative personnel \n ● The difficulty to assure active participation of users \nin the effort to secure the information system and \nto commit higher management to continuously \nsupporting the effort \n ● The widely accepted erroneous perception of \ninformation systems security as a purely technical \nissue \n ● The difficulty to develop an integrated, efficient and \neffective information systems security plan \n ● The identification and assessment of the organizational \nimpact that the implementation of a security plan \nentails \n The difficulty in justifying the cost of the security \nmeasures, particularly those of a procedural and adminis-\ntrative nature, stems from the very nature of security itself. \nIndeed, the justification of the need for a security meas-\nure can only be proved “ after the (unfortunate) event, ” \nwhereas, at the same time, there is no way to prove that \nalready implemented measures can adequately cope with \na potential new threat. This cost does not only pertain to \nacquiring and installing mechanisms and tools for protec-\ntion. It also includes the cost of human resources, the cost \nof educating and making users aware, and the cost for \ncarrying out tasks and procedures relevant to security. \n The difficulty in expressing the cost of the security \nmeasures in monetary terms is one of the fundamental fac-\ntors that make the communication between technical and \nadministrative personnel difficult. An immediate conse-\nquence of this is the difficulty in securing the continuous \ncommitment of higher management to supporting the secu-\nrity enhancement effort. This becomes even more difficult \nwhen organizational and procedural security measures \nare proposed. Both management and users are concerned \nabout the impact of these measures in their usual practice, \nparticularly when the widely accepted concept that secu-\nrity is purely a technical issue is put into doubt. \n Moreover, protecting an information system calls for \nan integrated, holistic study that will answer questions \nsuch as: Which elements of the information system do we \nwant to protect? Which, among these, are the most impor-\ntant ones? What threats is the information system facing? \nWhat are its vulnerabilities? What security measures must \nbe put in place? Answering these questions gives a good \npicture of the current state of the information system with \nregard to its security. As research 1 has shown, developing \ntechniques and measures for security is not enough, since \nthe most vulnerable point in any information system is the \nhuman user, operator, designer, or other human. Therefore, \nthe development and operation of secure information sys-\ntems must equally consider and take account of both tech-\nnical and human factors. At the same time, the threats that \nan information system faces are characterized by vari-\nety, diversity, complexity, and continuous variation. As \nthe technological and societal environment continuously \nevolves, threats change and evolve, too. 
Furthermore, both \ninformation systems and threats against them are dynamic; \nhence the need for continuous monitoring and managing \nof the information system security plan. \n1 E. A. Kiountouzis and S. A. Kokolakis, “An analyst’s view of infor-\nmation systems security”, in Information Systems Security: Facing the \nInformation Society of the 21st Century, Katsikas, S. K., and Gritzalis, D. \n(eds.), Chapman & Hall, 1996.\n" }, { "page_number": 639, "text": "PART | V Storage Security\n606\n The most widely used methodology that aims at deal-\ning with these issues is the information systems risk \nmanagement methodology. This methodology adopts the \nconcept of risk that originates in financial management, \nand substitutes the unachievable and immeasurable goal \nof fully securing the information system with the achiev-\nable and measurable goal of reducing the risk that the \ninformation system faces to within acceptable limits. \n 1. THE CONCEPT OF RISK \n The concept of risk originated in the 17th century with the \nmathematics associated with gambling. At that time, risk \nreferred to a combination of probability and magnitude of \npotential gains and losses. During the 18th century, risk, \nseen as a neutral concept, still considered both gains and \nlosses and was employed in the marine insurance busi-\nness. In the 19th century, risk emerged in the study of eco-\nnomics. The concept of risk, then, seen more negatively, \ncaused entrepreneurs to call for special incentives to take \nthe risk involved in investment. By the 20th century a total \nnegative connotation was made when referring to out-\ncomes of risk in engineering and science, with particular \nreference to the hazards posed by modern technological \ndevelopments. 2 , 3 \n Within the field of IT security, the risk R is calculated \nas the product of P , the probability of an exposure occur-\nring a given number of times per year times C , the cost or \nloss attributed to such an exposure, that is, R \u0003 P x C . 4 \n The most recent standardized definition of risk comes \nfrom the ISO, 5 voted for on April 19, 2008, where the \ninformation security risk is defined as “ the potential that a \ngiven threat will exploit vulnerabilities of an asset or group \nof assets and thereby cause harm to the organization. ” \n To complete this definition, definitions of the terms \n threat, vulnerability, and asset are in order. These are as \nfollows: A threat is “ a potential cause of an incident, that \nmay result in harm to system or organization. ” A vulner-\nability is “ a weakness of an asset or group of assets that \ncan be exploited by one or more threats. ” An asset is “ any-\nthing that has value to the organization, its business opera-\ntions and their continuity, including information resources \nthat support the organization’s mission. ” 6 Additionally, \nharm results in impact, which, is “ an adverse change to \nthe level of business objectives achieved. ” 7 The relation-\nships among these basic concepts are pictorially depicted \nin Figure 35.1 . \n 2. EXPRESSING AND MEASURING RISK \n As noted, risk “ is measured in terms of a combination of \nthe likelihood of an event and its consequence. 
” 8 Because \nwe are interested in events related to information security, \nwe define an information security event as “ an identified \noccurrence of a system, service or network state indicat-\ning a possible breach of information security policy or \nfailure of safeguards, or a previously unknown situation \nthat may be security relevant. ” 9 Additionally, an informa-\ntion security incident is “ indicated by a single or a series \nof unwanted information security events that have a sig-\nnificant probability of compromising business operations \nand threatening information security. ” 10 These definitions \nactually invert the investment assessment model, where \nan investment is considered worth making when its cost \nis less than the product of the expected profit times the \nlikelihood of the profit occurring. In our case, the risk R \nis defined as the product of the likelihood L of a security \nincident occurring times the impact I that will be incurred \nto the organization due to the incident, that is, R \u0003 L x I. 11 \n To measure risk, we adopt the fundamental principles \nand the scientific background of statistics and probability \ntheory, particularly of the area known as Bayesian statis-\ntics, after the mathematician Thomas Bayes (1702 – 1761), \nwho formalized the namesake theorem. Bayesian statis-\ntics is based on the view that the likelihood of an event \nhappening in the future is measurable. This likelihood \ncan be calculated if the factors affecting it are analyzed. \nFor example, we are able to compute the probability \n2 M. Gerber and R. von Solms, “Management of risk in the informa-\ntion age,” Computers & Security, Vol. 24, pp. 16–30, 2005.\n3 M. Douglas, “Risk as a forensic resource,” Daedalus, Vol. 119, Issue 4, \npp. 1–17, 1990.\n4 R. Courtney, “Security risk assessment in electronic data processing,” \nin the AFIPS Conference Proceedings of the National Computer \nConference 46, AFIPS, Arlington, pp. 97–104, 1977.\n5 ISO/IEC, “Information technology—Security techniques—information \nsecurity risk management,” ISO/IEC FDIS 27005:2008 (E).\n6 British Standards Institute, “Information technology—Security tech-\nniques—Management of information and communications technology \nsecurity—Part 1: Concepts and models for information and communi-\ncations technology security management,” BS ISO/IEC 13335-1:2004.\n7 ISO/IEC, “Information technology—security techniques–information \nsecurity risk management,” ISO/IEC FDIS 27005:2008 (E).\n8 ISO/IEC, “Information technology—security techniques–information \nsecurity risk management,” ISO/IEC FDIS 27005:2008 (E).\n9 British Standards Institute, “Information technology—Security \ntechniques—Information security incident management,” BS ISO/IEC \nTR 18044:2004.\n11 R. Baskerville, “Information systems security design methods: \nimplications for information systems development,” ACM Computing \nSurveys, Vol. 25, No. 4, pp. 375–414, 1993.\n10 British Standards Institute, “Information technology—Security \ntechniques—Information security incident management,” BS ISO/IEC \nTR 18044:2004.\n" }, { "page_number": 640, "text": "Chapter | 35 Risk Management\n607\nof our data to be stolen as a function of the probability an \nintruder will attempt to intrude into our system and of the \nprobability that he will succeed. In risk analysis terms, \nthe former probability corresponds to the likelihood of \nthe threat occurring and the latter corresponds to the like-\nlihood of the vulnerability being successfully exploited. 
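A small numeric illustration of this composition may help; all figures below are invented for the example, since the methodology itself prescribes no particular numbers.

```python
# Illustrative reading of R = L x I: the likelihood of the incident is taken
# as the likelihood that the threat appears times the likelihood that it
# successfully exploits the vulnerability, and is then combined with the
# impact. All values are invented for the example.
p_threat = 0.30        # chance per year that an intruder attempts a break-in
p_exploit = 0.20       # chance that an attempt exploits the vulnerability
impact = 250_000       # estimated impact of the incident, in monetary terms

likelihood = p_threat * p_exploit   # L = 0.06 expected incidents per year
risk = likelihood * impact          # R = L x I = 15,000 per year
print(f"annualized risk estimate: {risk:,.0f}")
```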
\nThus, risk analysis assesses the likelihood that a security incident will happen by analyzing and assessing the factors that are related to its occurrence, that is, the threats and the vulnerabilities. Subsequently, it combines this likelihood with the impact resulting from the incident occurring to calculate the system risk. Risk analysis is a necessary prerequisite for subsequently treating risk. Risk treatment pertains to controlling the risk so that it remains within acceptable levels. Risk can be reduced by applying security measures; it can be transferred, perhaps by insuring; it can be avoided; or it can be accepted, in the sense that the organization accepts the likely impact of a security incident. \n The likelihood of a security incident occurring is a function of the likelihood that a threat appears and of the likelihood that the threat can successfully exploit the relevant system vulnerabilities. The consequences of the occurrence of a security incident are a function of the likely impact that the incident will have on the organization as a result of the harm that the organization's assets will sustain. Harm, in turn, is a function of the value of the assets to the organization. Thus, the risk R is a function of four elements: (a) A , the value of the assets; (b) T , the severity and likelihood of appearance of the threats; (c) V , the nature and the extent of the vulnerabilities and the likelihood that a threat can successfully exploit them; and (d) I , the likely impact of the harm should the threat succeed, that is, R = f(A, T, V, I) . \n If the impact is expressed in monetary terms, the likelihood being dimensionless, then risk can also be expressed in monetary terms. This approach has the advantage of making the risk directly comparable to the cost of acquiring and installing security measures. Since security is often one of several competing alternatives for capital investment, the existence of a cost/benefit analysis that would offer proof that security will produce benefits that equal or exceed its cost is of great interest to the management of the organization. Of even more interest to management is the analysis of the investment opportunity costs, that is, its comparison to other capital investment options. 12 However, expressing risk in monetary terms is not always possible or desirable, since harm to some kinds of assets (e.g., human life) cannot (and should not) be assessed in monetary terms. This is why risk is usually expressed in nonmonetary terms, on a simple dimensionless scale. \n Assets in an organization are usually quite diverse. Because of this diversity, it is likely that some assets that have a known monetary value (e.g., hardware) can be valued in the local currency, whereas others of a more qualitative nature (e.g., data or information) may be assigned a \n FIGURE 35.1 Risk and its related concepts. \n 12 R. Baskerville, “Risk analysis as a source of professional knowledge,” Computers & Security, Vol. 10, pp. 749–764, 1991.\n" }, { "page_number": 641, "text": "PART | V Storage Security\n608\nnumerical value based on the organization's perception of their value. This value is assessed in terms of the assets' importance to the organization or their potential value in different business opportunities.
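As a minimal sketch of the kind of valuation rule an organization might adopt (the convention of deriving asset values from business impact is discussed immediately below), the following assigns each asset the worst-case impact of losing confidentiality, integrity, or availability. The five-value labels, the mapping onto a 0–4 numeric scale, and the sample asset are assumptions made for the example, not requirements of the methodology.

```python
# Sketch: value an asset by its worst-case impact on confidentiality,
# integrity, or availability, mapped onto an illustrative 0-4 numeric scale.
IMPACT_SCALE = {"negligible": 0, "low": 1, "medium": 2, "high": 3, "very high": 4}

def asset_value(conf_impact: str, integ_impact: str, avail_impact: str) -> int:
    """Return the asset value as the maximum of the three impact ratings."""
    return max(IMPACT_SCALE[conf_impact],
               IMPACT_SCALE[integ_impact],
               IMPACT_SCALE[avail_impact])

# e.g., patient records: disclosure would be very damaging, temporary
# unavailability somewhat less so.
print(asset_value("very high", "high", "medium"))   # -> 4
```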
The legal and busi-\nness requirements are also taken into account, as are the \nimpacts to the asset itself and to the related business inter-\nests resulting from a loss of one or more of the information \nsecurity attributes (confidentiality, integrity, availability). \nOne way to express asset values is to use the business \nimpacts that unwanted incidents, such as disclosure, mod-\nification, nonavailability, and/or destruction, would have \nto the asset and the related business interests that would \nbe directly or indirectly damaged. An information security \nincident can impact more than one asset or only a part of \nan asset. Impact is related to the degree of success of the \nincident. Impact is considered as having either an imme-\ndiate (operational) effect or a future (business) effect that \nincludes financial and market consequences. Immediate \n(operational) impact is either direct or indirect. \n Direct impact may result because of the financial \nreplacement value of lost (part of) asset or the cost of \nacquisition, configuration and installation of the new \nasset or backup, or the cost of suspended operations due \nto the incident until the service provided by the asset(s) \nis restored. Indirect impact may result because financial \nresources needed to replace or repair an asset would have \nbeen used elsewhere (opportunity cost) or from the cost of \ninterrupted operations or due to potential misuse of infor-\nmation obtained through a security breach or because of \nviolation of statutory or regulatory obligations or of ethi-\ncal codes of conduct. 13 \n These considerations should be reflected in the asset \nvalues. This is why asset valuation (particularly of intan-\ngible assets) is usually done through impact assessment. \nThus, impact valuation is not performed separately but is \nrather embedded within the asset valuation process. \n The responsibility for identifying a suitable asset val-\nuation scale lies with the organization. Usually, a three-\nvalue scale (low, medium, and high) or a five-value scale \n(negligible, low, medium, high, and very high) is used. 14 \n Threats can be classified as deliberate or acciden-\ntal. The likelihood of deliberate threats depends on the \nmotivation, knowledge, capacity, and resources available \nto possible attackers and the attractiveness of assets to \nsophisticated attacks. On the other hand, the likelihood \nof accidental threats can be estimated using statistics and \nexperience. The likelihood of these threats might also be \nrelated to the organization’s proximity to sources of dan-\nger, such as major roads or rail routes, and factories deal-\ning with dangerous material such as chemical materials or \npetroleum. Also the organization’s geographical location \nwill affect the possibility of extreme weather conditions. \nThe likelihood of human errors (one of the most common \naccidental threats) and equipment malfunction should also \nbe estimated. 15 As already noted, the responsibility for \nidentifying a suitable threat valuation scale lies with the \norganization. What is important here is that the interpreta-\ntion of the levels is consistent throughout the organization \nand clearly conveys the differences between the levels to \nthose responsible for providing input to the threat valua-\ntion process. 
For example, if a three-value scale is used, \nthe value low can be interpreted to mean that it is not \nlikely that the threat will occur, there are no incidents, \nstatistics, or motives that indicate that this is likely to hap-\npen. The value medium can be interpreted to mean that \nit is possible that the threat will occur, there have been \nincidents in the past or statistics or other information that \nindicate that this or similar threats have occurred some-\ntime before, or there is an indication that there might be \nsome reasons for an attacker to carry out such action. \nFinally, the value high can be interpreted to mean that the \nthreat is expected to occur, there are incidents, statistics, \nor other information that indicate that the threat is likely \nto occur, or there might be strong reasons or motives for \nan attacker to carry out such action. 16 \n Vulnerabilities can be related to the physical envi-\nronment of the system, to the personnel, management, \nand administration procedures and security measures \nwithin the organization, to the business operations and \nservice delivery or to the hardware, software, or com-\nmunications equipment and facilities. Vulnerabilities \nare reduced by installed security measures. The nature \nand extent as well as the likelihood of a threat suc-\ncessfully exploiting the three former classes of vulner-\nabilities can be estimated based on information on past \nincidents, on new developments and trends, and on expe-\nrience. The nature and extent as well as the likelihood \nof a threat successfully exploiting the latter class, often \ntermed technical vulnerabilities, can be estimated using \nautomated vulnerability-scanning tools, security testing \nand evaluation, penetration testing, or code review. 17 \n13 ISO/IEC, “Information technology—security techniques–informa-\ntion security risk management,” ISO/IEC FDIS 27005:2008 (E).\n14 British Standards Institute, “ISMSs—Part 3: Guidelines for infor-\nmation security risk management,” BS 7799-3:2006.\n15 British Standards Institute, “ISMSs—Part 3: Guidelines for infor-\nmation security risk management,” BS 7799-3:2006.\n16 British Standards Institute, “ISMSs—Part 3: Guidelines for infor-\nmation security risk management,” BS 7799-3:2006.\n17 ISO/IEC, “Information technology—security techniques–informa-\ntion security risk management,” ISO/IEC FDIS 27005:2008 (E).\n" }, { "page_number": 642, "text": "Chapter | 35 Risk Management\n609\nAs in the case of threats, the responsibility for identify-\ning a suitable vulnerability valuation scale lies with the \norganization. If a three-value scale is used, the value low \ncan be interpreted to mean that the vulnerability is hard \nto exploit and the protection in place is good. The value \n medium can be interpreted to mean that the vulnerability \nmight be exploited, but some protection is in place. The \nvalue high can be interpreted to mean that it is easy to \nexploit the vulnerability and there is little or no protec-\ntion in place. 18 \n 3. THE RISK MANAGEMENT \nMETHODOLOGY \n The term methodology means an organized set of prin-\nciples and rules that drives action in a particular field of \nknowledge. A method is a systematic and orderly proce-\ndure or process for attaining some objective. A tool is any \ninstrument or apparatus that is necessary to the perform-\nance of some task. Thus, methodology is the study or \ndescription of methods. 
19 A methodology is instantiated \nand materializes by a set of methods, techniques, and \ntools. A methodology does not describe specific meth-\nods; nevertheless, it does specify several processes that \nneed to be followed. These processes constitute a generic \nframework. They may be broken down into subprocesses, \nthey may be combined, or their sequence may change. \nHowever, every risk management exercise must carry out \nthese processes in some form or another. \n Risk management consists of six processes, namely \ncontext establishment, risk assessment, risk treatment, risk \nacceptance, risk communication, and risk monitoring and \nreview. 20 This is more or less in line with the approach \nwhere four processes are identified as the constituents of \nrisk management, namely, putting information security \nrisks in the organizational context, risk assessment, risk \ntreatment, and management decision-making and ongo-\ning risk management activities. 21 Alternatively, risk man-\nagement is seen to comprise three processes, namely risk \nassessment, risk mitigation, and evaluation and assess-\nment. 22 Table 35.1 depicts the relationships among these \nprocesses. 23 , 24 \n Context Establishment \n The context establishment process receives as input all \nrelevant information about the organization. Establishing \nthe context for information security risk management \ndetermines the purpose of the process. It involves setting \nthe basic criteria to be used in the process, defining the \nscope and boundaries of the process, and establishing an \nappropriate organization operating the process. The out-\nput of context establishment process is the specification \nof these parameters. \n The purpose may be to support an information secu-\nrity management system (ISMS); to comply with legal \nrequirements and to provide evidence of due diligence; \n18 British Standards Institute, “ISMSs—Part 3: Guidelines for infor-\nmation security risk management,” BS 7799-3:2006.\n19 R. Baskerville, “Risk analysis as a source of professional knowl-\nedge,” Computers & Security, Vol. 10, pp. 749–764, 1991.\n20 ISO/IEC, “Information technology—security techniques–informa-\ntion security risk management,” ISO/IEC FDIS 27005:2008 (E).\n21 British Standards Institute, “ISMSs—Part 3: Guidelines for infor-\nmation security risk management,” BS 7799-3:2006.\n22 G. Stoneburner, A. Goguen and A. Feringa, Risk Management \nguide for information technology systems, National Institute of \nStandards and Technology, Special Publication SP 800-30, 2002.\n23 ISO/IEC, “Information technology—security techniques–informa-\ntion security risk management,” ISO/IEC FDIS 27005:2008 (E).\n24 British Standards Institute, “ISMSs—Part 3: Guidelines for infor-\nmation security risk management,” BS 7799-3:2006.\n TABLE 35.1 Risk management constituent processes \n ISO/IEC FDIS 27005:2008 (E) \n BS 7799-3:2006 \n SP 800-30 \n Context establishment \n Organizational context \n \n Risk assessment \n Risk assessment \n Risk assessment \n Risk treatment \n Risk treatment and management \ndecision making \n Risk mitigation \n Risk acceptance \n \n \n Risk communication \n Ongoing risk management \nactivities \n \n Risk monitoring and review \n \n Evaluation and assessment \n" }, { "page_number": 643, "text": "PART | V Storage Security\n610\nto prepare for a business continuity plan; to prepare for \nan incident reporting plan; or to describe the information \nsecurity requirements for a product, a service, or a mech-\nanism. 
Combinations of these purposes are also possible. \n The basic criteria include risk evaluation criteria, \nimpact criteria, and risk acceptance criteria. When setting \nrisk evaluation criteria the organization should consider \nthe strategic value of the business information process; \nthe criticality of the information assets involved; legal \nand regulatory requirements and contractual obligations; \noperational and business importance of the attributes of \ninformation security; and stakeholders expectations and \nperceptions, and negative consequences for goodwill and \nreputation. The impact criteria specify the degree of dam-\nage or costs to the organization caused by an information \nsecurity event. Developing impact criteria involves con-\nsidering the level of classification of the impacted infor-\nmation asset; breaches of information security; impaired \noperations; loss of business and financial value; disrup-\ntion of plans and deadlines; damage of reputation; and \nbreaches of legal, regulatory or contractual requirements. \nThe risk acceptance criteria depend on the organization’s \npolicies, goals, objectives and the interest of its stake-\nholders. When developing risk acceptance criteria the \norganization should consider business criteria; legal and \nregulatory aspects; operations; technology; finance; and \nsocial and humanitarian factors. 25 \n The scope of the process needs to be defined to ensure \nthat all relevant assets are taken into account in the sub-\nsequent risk assessment. Any exclusion from the scope \nneeds to be justified. Additionally, the boundaries need \nto be identified to address those risks that might arise \nthrough these boundaries. When defining the scope and \nboundaries, the organization needs to consider its strategic \nbusiness objectives, strategies, and policies; its business \nprocesses; its functions and structure; applicable legal, \nregulatory, and contractual requirements; its information \nsecurity policy; its overall approach to risk management; \nits information assets; its locations and their geographi-\ncal characteristics; constraints that affect it; expectations \nof its stakeholders; its socio-cultural environment; and its \ninformation exchange with its environment. This involves \nstudying the organization (i.e., its main purpose, its busi-\nness; its mission; its values; its structure; its organiza-\ntional chart; and its strategy). It also involves identifying \nits constraints. These may be of a political, cultural, or \nstrategic nature; they may be territorial, organizational, \nstructural, functional, personnel, budgetary, technical, or \nenvironmental constraints; or they could be constraints \narising from preexisting processes. Finally, it entails iden-\ntifying legislation, regulations, and contracts. 26 \n Setting up and maintaining the organization for infor-\nmation security risk management fulfills part of the \nrequirement to determine and provide the resources \nneeded to establish, implement, operate, monitor, review, \nmaintain, and improve an ISMS. 
27 The organization to be \ndeveloped will bear responsibility for the development of \nthe information security risk management process suit-\nable for the organization; for the identification and analy-\nsis of the stakeholders; for the definition of roles and \nresponsibilities of all parties, both external and internal \nto the organization; for the establishment of the required \nrelationships between the organization and stakeholders, \ninterfaces to the organization’s high-level risk manage-\nment functions, as well as interfaces to other relevant \nprojects or activities; for the definition of decision escala-\ntion paths; and for the specification of records to be kept. \nKey roles in this organization are the senior management; \nthe chief information officer (CIO); the system and infor-\nmation owners; the business and functional managers; \nthe information systems security officers (ISSO); the IT \nsecurity practitioners; and the security awareness trainers \n(security/subject matter professionals). 28 Additional roles \nthat can be explicitly defined are those of the risk assessor \nand of the security risk manager. 29 \n Risk Assessment \n This process comprises two subprocesses, namely risk \nanalysis and risk evaluation. Risk analysis, in turn, com-\nprises risk identification and risk estimation. The process \nreceives as input the output of the context establishment \nprocess. It identifies, quantifies or qualitatively describes \nrisks and prioritizes them against the risk evaluation \ncriteria established within the course of the context \nestablishment process and according to objectives rel-\nevant to the organization. It is often conducted in more \nthan one iteration, the first being a high-level assessment \naiming at identifying potentially high risks that warrant \nfurther assessment, whereas the second and possibly \n25 ISO/IEC, “Information technology—security techniques—informa-\ntion security risk management,” ISO/IEC FDIS 27005:2008 (E).\n26 ISO/IEC, “Information technology—security techniques—informa-\ntion security risk management,” ISO/IEC FDIS 27005:2008 (E).\n27 ISO/IEC, “Information security management—specifi cation with \nguidance for use,” ISO 27001.\n28 G. Stoneburner, A. Goguen and A. Feringa, Risk Management guide \nfor information technology systems, National Institute of Standards and \nTechnology, Special Publication SP 800-30, 2002.\n29 British Standards Institute, “ISMSs—Part 3: Guidelines for infor-\nmation security risk management,” BS 7799-3:2006.\n" }, { "page_number": 644, "text": "Chapter | 35 Risk Management\n611\nsubsequent iterations entail further in-depth examination \nof potentially high risks revealed in the first iteration. \nThe output of the process is a list of assessed risks pri-\noritized according to risk evaluation criteria. 30 \n Risk identification seeks to determine what could \nhappen to cause a potential loss and to gain insight into \nhow, where, and why the loss might happen. It involves a \nnumber of steps, namely identification of assets; identifi-\ncation of threats; identification of existing security meas-\nures; identification of vulnerabilities; and identification \nof consequences. Input to the subprocess is the scope and \nboundaries for the risk assessment to be conducted, an \nasset inventory, information on possible threats, documen-\ntation of existing security measures, possibly preexisting \nrisk treatment implementation plans, and the list of busi-\nness processes. 
The output of the subprocess is a list of \nassets to be risk-managed together with a list of business \nprocesses related to these assets; a list of threats on these \nassets; a list of existing and planned security measures, \ntheir implementation and usage status; a list of vulnerabil-\nities related to assets, threats and already installed security \nmeasures; a list of vulnerabilities that do not relate to any \nidentified threat; and a list of incident scenarios with their \nconsequences, related to assets and business processes. 31 \n Two kinds of assets can be distinguished, namely \n primary assets , which include business processes and \nactivities and information, and supporting assets , which \ninclude hardware, software, network, personnel, site, and \nthe organization’s structure. Hardware assets comprise \ndata-processing equipment (transportable and fixed), \nperipherals, and media. Software assets comprise the oper-\nating system; service, maintenance or administration soft-\nware; and application software. Network assets comprise \nmedium and supports, passive or active relays, and com-\nmunication interfaces. Personnel assets comprise decision \nmakers, users, operation/maintenance staff, and developers. \nThe site assets comprise the location (and its external envi-\nronment, premises, zone, essential services, communica-\ntion and utilities characteristics) and the organization (and \nits authorities, structure, the project or system organization \nand its subcontractors, suppliers and manufacturers). 32 \n Threats are classified according to their type and to \ntheir origin. Threat types are physical damage (e.g., fire, \nwater, pollution); natural events (e.g., climatic phenome-\nnon, seismic phenomenon, volcanic phenomenon); loss of \nessential services (e.g., failure of air-conditioning, loss of \npower supply, failure of telecommunication equipment); \ndisturbance due to radiation (electromagnetic radiation, \nthermal radiation, electromagnetic pulses); compromise \nof information (eavesdropping, theft of media or docu-\nments, retrieval of discarded or recycled media); technical \nfailures (equipment failure, software malfunction, satura-\ntion of the information system); unauthorized actions \n(fraudulent copying of software, corruption of data, unau-\nthorized use of equipment); and compromise of functions \n(error in use, abuse of rights, denial of actions). 33 Threats \nare classified according to origin into deliberate, acci-\ndental or environmental. A deliberate threat is an action \naiming at information assets (e.g., remote spying, ille-\ngal processing of data); an accidental threat is an action \nthat can accidentally damage information assets (equip-\nment failure, software malfunction); and an environmen-\ntal threat is any threat that is not based on human action \n(a natural event, loss of power supply). Note that a threat \ntype may have multiple origins. \n Vulnerabilities are classified according to the asset \nclass they relate to. Therefore, vulnerabilities are clas-\nsified as hardware (e.g., susceptibility to humidity, dust, \nsoiling; unprotected storage); software (no or insufficient \nsoftware testing, lack of audit trail); network (unpro-\ntected communication lines, insecure network architec-\nture); personnel (inadequate recruitment processes, lack \nof security awareness); site (location in an area suscepti-\nble to flood, unstable power grid); and organization (lack \nof regular audits, lack of continuity plans). 
34 \n Risk estimation is done either quantitatively or quali-\ntatively. Qualitative estimation uses a scale of qualifying \nattributes to describe the magnitude of potential conse-\nquences (e.g., low, medium or high) and the likelihood \nthat these consequences will occur. Quantitative estima-\ntion uses a scale with numerical values for both conse-\nquences and likelihood. In practice, qualitative estimation \nis used first, to obtain a general indication of the level of \nrisk and to reveal the major risks. It is then followed by a \nquantitative estimation on the major risks identified. \n Risk estimation involves a number of steps, namely \nassessment of consequences (through valuation of assets); \nassessment of incident likelihood (through threat and \nvulnerability valuation); and assigning values to the like-\nlihood and the consequences of a risk. We discussed valu-\nation of assets, threats, and vulnerabilities in an earlier \n30 ISO/IEC, “Information technology—security techniques—informa-\ntion security risk management,” ISO/IEC FDIS 27005:2008 (E).\n31 ISO/IEC, “Information technology—security techniques—informa-\ntion security risk management,” ISO/IEC FDIS 27005:2008 (E).\n32 ISO/IEC, “Information technology—security techniques—informa-\ntion security risk management,” ISO/IEC FDIS 27005:2008 (E).\n33 ISO/IEC, “Information technology—security techniques—informa-\ntion security risk management,” ISO/IEC FDIS 27005:2008 (E).\n34 ISO/IEC, “Information technology—security techniques—informa-\ntion security risk management,” ISO/IEC FDIS 27005:2008 (E).\n" }, { "page_number": 645, "text": "PART | V Storage Security\n612\nsection. Input to the subprocess is the output of the risk \nidentification subprocess. Its output is a list of risks with \nvalue levels assigned. \n Having valuated assets, threats, and vulnerabilities, we \nshould be able to calculate the resulting risk, if the func-\ntion relating these to risk is known. Establishing an ana-\nlytic function for this purpose is probably impossible and \ncertainly ineffective. This is why, in practice, an empirical \nmatrix is used for this purpose. Such a matrix, an exam-\nple of which is shown in Table 35.2 , links asset values \nand threat and vulnerability levels to the resulting risk. In \nthis example, asset values are expressed on a 0 – 4 scale, \nwhereas threat and vulnerability levels are expressed on \na Low-Medium-High scale. The risk values are expressed \non a scale of 1 to 8. When linking the asset values and the \nthreats and vulnerabilities, consideration needs to be given \nto whether the threat/vulnerability combination could \ncause problems to confidentiality, integrity, and/or avail-\nability. Depending on the results of these considerations, \nthe appropriate asset value(s) should be chosen, that is, \nthe one that has been selected to express the impact of a \nloss of confidentiality, or the one that has been selected to \nexpress the loss of integrity, or the one chosen to express \nthe loss of availability. Using this method can lead to mul-\ntiple risks for each of the assets, depending on the particu-\nlar threat/vulnerability combination considered. 35 \n Finally, the risk evaluation process receives as input \nthe output of the risk analysis process. It compares the \nlevels of risk against the risk evaluation criteria and \nrisk acceptance criteria that were established within the \ncontext establishment process. 
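Referring back to the empirical matrix of Table 35.2, a minimal sketch of how such a lookup might be implemented is shown below. In this particular example table the lookup happens to reduce to a simple sum of indices; a real organization would substitute whatever matrix it has agreed on, and the labels used here are illustrative only.

```python
# Sketch of the example risk matrix of Table 35.2: risk is derived from the
# asset value (0-4) and the threat and vulnerability levels (L/M/H).
LEVEL = {"L": 0, "M": 1, "H": 2}

def risk_from_matrix(asset_value: int, threat: str, vulnerability: str) -> int:
    if not 0 <= asset_value <= 4:
        raise ValueError("asset value must be on the 0-4 scale")
    return asset_value + LEVEL[threat] + LEVEL[vulnerability]

# e.g., a highly valued asset facing a medium threat with high vulnerability:
print(risk_from_matrix(4, "M", "H"))   # -> 7 on the example risk scale
```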
The process uses the \nunderstanding of risk obtained by the risk assessment \nprocess to make decisions about future actions. These \ndecisions include whether an activity should be under-\ntaken and setting priorities for risk treatment. The out-\nput of the process is a list of risks prioritized according \nto the risk evaluation criteria, in relation to the incident \nscenarios that lead to those risks. \n Risk Treatment \n When the risk is calculated, the risk assessment process \nfinishes. However, our actual ultimate goal is treating the \nrisk. The risk treatment process aims at selecting security \nmeasures to reduce, retain, avoid, or transfer the risks and \nat defining a risk treatment plan. The process receives as \ninput the output of the risk assessment process and pro-\nduces as output the risk treatment plan and the residual \nrisks subject to the acceptance decision by the manage-\nment of the organization. \n The options available to treat risk are to reduce it, to \naccept it, to avoid it, or to transfer it. Combinations of these \noptions are also possible. The factors that might influence \nthe decision are the cost each time the incident related to \nthe risk happens; how frequently it is expected to happen; \nthe organization’s attitude toward risk; the ease of imple-\nmentation of the security measures required to treat the risk; \nthe resources available; the current business/technology pri-\norities; and organizational and management politics. 37 \n For all those risks where the option to reduce the risk \nhas been chosen, appropriate security measures should be \n35 British Standards Institute, “ISMSs—Part 3: Guidelines for infor-\nmation security risk management,” BS 7799-3:2006.\n37 ISO/IEC, “Information technology—security techniques—informa-\ntion security risk management,” ISO/IEC FDIS 27005:2008 (E).\n TABLE 35.2 Example Risk Calculation Matrix 36 \n Asset \nValue \n Level of Threat \n \n Low \n Medium \n High \n \n Level of Vulnerability \n \n L \n M \n H \n L \n M \n H \n L \n M \n H \n 0 \n 0 \n 1 \n 2 \n 1 \n 2 \n 3 \n 2 \n 3 \n 4 \n 1 \n 1 \n 2 \n 3 \n 2 \n 3 \n 4 \n 3 \n 4 \n 5 \n 2 \n 2 \n 3 \n 4 \n 3 \n 4 \n 5 \n 4 \n 5 \n 6 \n 3 \n 3 \n 4 \n 5 \n 4 \n 5 \n 6 \n 5 \n 6 \n 7 \n 4 \n 4 \n 5 \n 6 \n 5 \n 6 \n 7 \n 6 \n 7 \n 8 \n36 British Standards Institute, “ISMSs—Part 3: Guidelines for informa-\ntion security risk management,” BS 7799-3:2006.\n" }, { "page_number": 646, "text": "Chapter | 35 Risk Management\n613\nimplemented to reduce the risks to the level that has been \nidentified as acceptable, or at least as much as is feasible \ntoward that level. These questions then arise: How much \ncan we reduce the risk? Is it possible to achieve zero risk? \n Zero risk is possible when either the cost of an inci-\ndent is zero or when the likelihood of the incident occur-\nring is zero. The cost of an incident is zero when the \nvalue of the implicated asset is zero or when the impact \nto the organization is zero. Therefore, if one or more of \nthese conditions are found to hold during the risk assess-\nment process, it is meaningless to take security meas-\nures. 
On the other hand, the likelihood of an incident \noccurring being zero is not possible, because the threats \nfaced by an open system operating in a dynamic, hence \nhighly variable, environment, as contemporary informa-\ntion systems do, and the causes that generate them are \nextremely complex; human behavior, which is extremely \ndifficult to predict and model, plays a very important role \nin securing information systems; and the resources that a \nbusiness or organization has at its disposal are finite. \n When faced with a nonzero risk, our interest focuses \non reducing the risk to acceptable levels. Because risk is \na nondecreasing function in all its constituents, security \nmeasures can reduce it by reducing these constituents. \nSince the asset value cannot be directly reduced, 38 it is pos-\nsible to reduce risk by reducing the likelihood of the threat \noccurring or the likelihood of the vulnerability being suc-\ncessfully exploited or the impact should the threat succeed. \nWhich of these ways (or a combination of them) an organ-\nization chooses to adopt to protect its assets is a business \ndecision and depends on the business requirements, the \nenvironment, and the circumstances in which the organi-\nzation needs to operate. There is no universal or common \napproach to the selection of security measures. A possibil-\nity is to assign numerical values to the efficiency of each \nsecurity measure, on a scale that matches that in which \nrisks are expressed, and select all security measures that \nare relevant to the particular risk and have an efficiency \nscore of at least equal to the value of the risk. Several \nsources provide lists of potential security measures. 39 , 40 \nDocumenting the selected security measures is impor-\ntant in supporting certification and enables the organiza-\ntion to track the implementation of the selected security \nmeasures. \n When considering reducing a risk, several constraints \nmay appear. These may be related to the timeframe; to \nfinancial or technical issues; to the way the organiza-\ntion operates or to its culture; to the environment within \nwhich the organization operates; to the applicable legal \nframework or to ethics; to the ease of use of the appro-\npriate security measures; to the availability and suitability \nof personnel; or to the difficulties of integrating new and \nexisting security measures. Due to the existence of these \nconstraints, it is likely that some risks will exist for which \neither the organization cannot install appropriate security \nmeasures or for which the cost of implementing appro-\npriate measures outweighs the potential loss through \nthe incident related to the risk occurring. In these cases, \na decision may be made to accept the risk and live with \nthe consequences if the incident related to the risk occurs. \nThese decisions must be documented so that management \nis aware of its risk position and can knowingly accept \nthe risk. The importance of this documentation has led \nrisk acceptance to be identified as a separate process. 41 \nA special case where particular attention must be paid is \nwhen an incident related to a risk is deemed to be highly \nunlikely to occur but, if it occurred, the organization \nwould not survive. If such a risk is deemed to be unac-\nceptable but too costly to reduce, the organization could \ndecide to transfer it. 
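Returning briefly to the reduction option, the selection rule described above (choose the measures that are relevant to the risk and whose efficiency score is at least the risk value) can be sketched as follows. The measure catalogue, scores, and risk identifiers are invented for illustration; the text does not prescribe a specific catalogue.

```python
# Sketch of measure selection by efficiency score: a measure is chosen for a
# risk when it is relevant to that risk and its efficiency score, expressed on
# the same scale as the risk values, is at least the assessed risk value.
measures = [
    {"name": "Encrypt backup tapes", "efficiency": 6, "risks": {"media-theft"}},
    {"name": "Off-site key escrow",  "efficiency": 4, "risks": {"media-theft"}},
    {"name": "Switch port lockdown", "efficiency": 5, "risks": {"fabric-access"}},
]

def select_measures(risk_id: str, risk_value: int) -> list[str]:
    return [m["name"] for m in measures
            if risk_id in m["risks"] and m["efficiency"] >= risk_value]

print(select_measures("media-theft", 5))   # -> ['Encrypt backup tapes']
```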
\n Risk transfer is an option whereby it is difficult for \nthe organization to reduce the risk to an acceptable level \nor the risk can be more economically transferred to a \nthird party. Risks can be transferred using insurance. In \nthis case, the question of what is a fair premium arises. 42 \nAnother possibility is to use third parties or outsourcing \npartners to handle critical business assets or processes if \nthey are suitably equipped for doing so. Combining both \noptions is also possible; in this case, the fair premium \nmay be determined. 43 \n38 As we will see later, it is possible to indirectly reduce the value of \nan asset. For example, if sensitive personal data are stored and the cost of \nprotecting them is high, it is possible to decide that such data are too costly \nto continue storing. This constitutes a form of risk avoidance. As another \nexample, we may decide that the cost for protecting our equipment is too \nhigh and to resort to outsourcing. This is a form of risk transfer.\n39 ISO/IEC, “Information security management—specifi cation with \nguidance for use,” ISO 27001.\n40 British Standards Institute, “Information technology—Security \ntechniques—Information security incident management,” BS ISO/IEC \n17799:2005.\n41 ISO/IEC, “Information technology—security techniques—informa-\ntion security risk management,” ISO/IEC FDIS 27005:2008 (E).\n42 C. Lambrinoudakis, S. Gritzalis, P. Hatzopoulos, A. N. Yannacopoulos, \nand S. K. Katsikas, “A formal model for pricing information systems \ninsurance contracts,” Computer Standards and Interfaces, Vol. 27, \npp. 521–532, 2005.\n43 S. Gritzalis, A. N. Yannacopoulos, C. Lambrinoudakis, P. Hatzopoulos, \nS. K. Katsikas, “A probabilistic model for optimal insurance contracts \nagainst security risks and privacy violation in IT outsourcing envi-\nronments,” International Journal of Information Security, Vol. 6, pp. \n197–211, 2007.\n" }, { "page_number": 647, "text": "PART | V Storage Security\n614\n Risk avoidance describes any action where the busi-\nness activities or ways to conduct business are changed \nto avoid any risk occurring. For example, risk avoid-\nance can be achieved by not conducting certain business \nactivities, by moving assets away from an area of risk, \nor by deciding not to process particularly sensitive infor-\nmation. Risk avoidance entails that the organization con-\nsciously accepts the impact likely to occur if an incident \noccurs. However, the organization chooses not to install \nthe required security measures to reduce the risk. There \nare several cases where this option is exercised, particu-\nlarly when the required measures contradict the culture \nand/or the policy of the organization. \n After the risk treatment decision(s) have been taken, \nthere will always be risks remaining. These are called \n residual risks . Residual risks can be difficult to assess, \nbut at least an estimate should be made to ensure that suf-\nficient protection is achieved. If the residual risk is unac-\nceptable, the risk treatment process may be repeated. \n Once the risk treatment decisions have been taken, \nthe activities to implement these decisions need to be \nidentified and planned. The risk treatment plan needs \nto identify limiting factors and dependencies, priorities, \ndeadlines and milestones, resources, including any nec-\nessary approvals for their allocation, and the critical path \nof the implementation. 
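A minimal sketch of what a risk treatment plan record might capture is given below, including the estimated residual risk so that anything above the acceptance threshold can be routed back into another treatment iteration. The field choices, the sample entry, and the threshold value are assumptions made for illustration.

```python
# Sketch of a risk treatment plan entry: the option chosen, the selected
# measures, ownership, deadline, priority, dependencies, and the estimated
# residual risk on the same scale as the assessed risk.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TreatmentPlanEntry:
    risk_id: str
    option: str                 # "reduce" | "retain" | "avoid" | "transfer"
    measures: list[str]
    owner: str
    deadline: date
    priority: int
    residual_risk: int
    dependencies: list[str] = field(default_factory=list)

ACCEPTANCE_THRESHOLD = 3        # illustrative risk acceptance criterion

entry = TreatmentPlanEntry("media-theft", "reduce", ["Encrypt backup tapes"],
                           "storage team", date(2009, 12, 31), 1, residual_risk=2)
if entry.residual_risk > ACCEPTANCE_THRESHOLD:
    print(f"{entry.risk_id}: residual risk unacceptable, repeat risk treatment")
else:
    print(f"{entry.risk_id}: residual risk accepted and documented")
```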
\n Risk Communication \n Risk communication is a horizontal process that interacts \nbidirectionally with all other processes of risk manage-\nment. Its purpose is to establish a common understanding \nof all aspects of risk among all the organization’s stake-\nholders. Common understanding does not come auto-\nmatically, since it is likely that perceptions of risk vary \nwidely due to differences in assumptions, needs, con-\ncepts, and concerns. Establishing a common understand-\ning is important, since it influences decisions to be taken \nand the ways in which such decisions are implemented. \nRisk communication must be made according to a well-\ndefined plan that should include provisions for risk com-\nmunication under both normal and emergency conditions. \n Risk Monitoring and Review \n Risk management is an ongoing, never-ending proc-\ness that is assigned to an individual, a team, or an out-\nsourced third party, depending on the organization’s \nsize and operational characteristics. Within this process, \nimplemented security measures are regularly monitored \nand reviewed to ensure that they function correctly and \neffectively and that changes in the environment have not \nrendered them ineffective. Because over time there is a \ntendency for the performance of any service or mecha-\nnism to deteriorate, monitoring is intended to detect this \ndeterioration and initiate corrective action. Maintenance \nof security measures should be planned and performed \non a regular, scheduled basis. \n The results from an original security risk assessment \nexercise need to be regularly reviewed for change, since \nthere are several factors that could change the originally \nassessed risks. Such factors may be the introduction \nof new business functions, a change in business objec-\ntives and/or processes, a review of the correctness and \neffectiveness of the implemented security measures, the \nappearance of new or changed threats and/or vulnerabil-\nities, or changes external to the organization. After all \nthese different changes have been taken into account, the \nrisk should be recalculated and necessary changes to the \nrisk treatment decisions and security measures identified \nand documented. \n Regular internal audits should be scheduled and should \nbe conducted by an independent party that does not need \nto be from outside the organization. Internal auditors \nshould not be under the supervision or control of those \nresponsible for the implementation or daily management \nof the ISMS. Additionally, audits by an external body are \nnot only useful, they are essential for certification. \n Finally, complete, accessible, and correct documen-\ntation and a controlled process to manage documents \nare necessary to support the ISMS, although the scope \nand detail will vary from organization to organization. \nAligning these documentation details with the documen-\ntation requirements of other management systems, such \nas ISO 9001, is certainly possible and constitutes good \npractice. Figure 35.2 pictorially summarizes the differ-\nent processes within the risk management methodology, \nas we discussed earlier. \n Integrating Risk Management into the \nSystem Development Life Cycle \n Risk management must be totally integrated into the sys-\ntem development life cycle. This cycle consists of five \nphases: initiation; development or acquisition; implemen-\ntation; operation or maintenance; and disposal. 
Within the initiation phase, identified risks are used to support the development of the system requirements, including security requirements and a security concept of operations. In the development or acquisition phase, the identified risks can be used to support the security analyses of the system, which may lead to architecture and design tradeoffs during system development. In the implementation phase, the risk management process supports the assessment of the system implementation against its requirements and within its modeled operational environment. In the operation or maintenance phase, risk management activities are performed whenever major changes are made to a system in its operational environment. Finally, in the disposal phase, risk management activities are performed for system components that will be disposed of or replaced, to ensure that the hardware and software are properly disposed of, that residual data is appropriately handled, and that system migration is conducted in a secure and systematic manner. 44

Critique of Risk Management as a Methodology

Risk management as a scientific methodology has been criticized as being shallow. The main reason for this rather strong and probably unfair criticism is that risk management does not provide for feedback on the results of the selected security measures or of the risk treatment decisions. In most cases, even though the trend has already changed, information systems security is a low-priority project for management until some security incident happens. Then, and only then, does management seriously engage in an effort to improve security measures. However, after a while the problem stops being the center of interest, and what remains is a number of security measures, some specialized hardware and software, and an operationally more complex system. Unless an incident happens again, there is no way for management to know whether their efforts were really worthwhile. After all, in many cases the information system had operated for years without problems, without the security improvements that the security professionals recommended.

The risk management methodology, as has already been stated, is based on the scientific foundations of statistical decision making. Bayes' Theorem, on which the theory is based, pertains to the statistical revision of a priori probabilities into a posteriori probabilities, and is applied when a decision is sought on the basis of imperfect information. In risk management, the decision for quantifying an event may be a function of additional factors other than the probability of the event itself occurring. For example, the probability of a riot occurring may be related to the stability of the political system; thus, the calculation of this probability should involve quantified information relevant to the stability of the political system. The overall model accepts the possibility that any such information (e.g., "the political system is stable") may be inaccurate. In comparison to this formal framework, risk management as applied by security professionals is simplistic.
Indeed, by avoiding the complexity that accompanies the formal probabilistic modeling of risks and uncertainty, risk management looks more like a process that attempts to guess, rather than formally predict, the future on the basis of statistical evidence.

Finally, the risk management methodology is highly subjective in assessing the value of assets, the likelihood of threats occurring, the likelihood of vulnerabilities being successfully exploited by threats, and the significance of the impact. This subjectivity is frequently obscured by the formality of the underlying mathematical-probabilistic models, the systematic way in which most risk analysis methods work, and the apparent objectivity of the tools that support these methods.

If the skepticism about the scientific soundness of the risk management methodology is indeed justified, the question of why risk management as a practice has nevertheless survived so long becomes crucial. There are several answers to this question.

Risk management is a very important instrument in designing, implementing, and operating secure information systems, because it systematically classifies and drives the process of deciding how to treat risks. In doing so, it facilitates a better understanding of the nature and the operation of the information system, thus constituting a means for documenting and analyzing the system. It is therefore necessary for supporting the efforts of the organization's management to design, implement, and operate secure information systems.

[FIGURE 35.2 The risk management methodology: context establishment, risk assessment, and risk treatment, with risk communication and risk monitoring and review shown as parallel, ongoing processes.]

44 G. Stoneburner, A. Goguen, and A. Feringa, Risk Management Guide for Information Technology Systems, National Institute of Standards and Technology, Special Publication SP 800-30, 2002.

Traditionally, risk management has been seen by security professionals as a means to justify to management the cost of security measures. Nowadays it does not only that; it also helps fulfill legislative and/or regulatory provisions that exist in several countries and that demand that information systems be protected in a manner commensurate with the threats they face.

Risk management constitutes an efficient means of communication between technical and administrative personnel, as well as management, because it allows us to express the security problem in a language comprehensible to management, by viewing security as an investment that can be assessed in terms of cost/benefit analysis. 45 Additionally, it is quite flexible, so it can fit into several scientific frameworks and be applied either by itself or in combination with other methodologies. It is the most widely used methodology for designing and managing information systems security and has been successfully applied in many cases.

Finally, an answer frequently offered by security professionals is that there simply is no other efficient way to carry out the tasks that risk management does. Indeed, it has been shown in practice that by simply using methods of management science, law, and accounting, it is not possible to reach conclusions that can adequately justify risk treatment decisions.
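The Bayesian revision referred to above can be made concrete with a small worked example. Using the riot scenario, an a priori probability of the event is revised into an a posteriori probability once the (possibly inaccurate) report "the political system is stable" is received; all figures below are invented purely for illustration.

# Worked Bayesian revision for the riot example: an a priori probability is
# revised into an a posteriori probability given the imperfect report
# "the political system is stable". All numbers are invented.

def posterior(prior_event, p_report_given_event, p_report_given_no_event):
    """Bayes' theorem: P(event | report) = P(report | event) * P(event) / P(report)."""
    p_report = (p_report_given_event * prior_event
                + p_report_given_no_event * (1.0 - prior_event))
    return p_report_given_event * prior_event / p_report


if __name__ == "__main__":
    prior_riot = 0.10            # a priori probability of a riot
    p_stable_if_riot = 0.20      # the reassuring report may be issued even if a riot is coming
    p_stable_if_no_riot = 0.90
    post = posterior(prior_riot, p_stable_if_riot, p_stable_if_no_riot)
    print(f"P(riot) revised from {prior_riot:.2f} to {post:.3f} given the report")
    # prints: P(riot) revised from 0.10 to 0.024 given the report

Even with a reassuring report, the posterior probability does not drop to zero, because the model explicitly allows the report to be wrong; this treatment of imperfect information is precisely what the informal practice criticized above tends to gloss over.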
\n Risk Management Methods \n Many methods for risk management are available today. \nMost of them are supported by software tools. Selecting \nthe most suitable method for a specific business envi-\nronment and the needs of a specific organization is very \nimportant, albeit quite difficult, for a number of reasons 46 : \n ● There is a lack of a complete inventory of all availa-\nble methods, with all their individual characteristics. \n ● There exists no commonly accepted set of evaluation \ncriteria for risk management methods. \n ● Some methods only cover parts of the whole risk \nmanagement process. For example, some methods \nonly calculate the risk, without covering the risk \ntreatment process. Some others focus on a small \npart of the whole process (e.g., disaster recovery \nplanning). Some focus on auditing the security \nmeasures, and so on. \n ● Risk management methods differ widely in the \nanalysis level that they use. Some use high-level \ndescriptions of the information system under study; \nothers call for detailed descriptions. \n ● Some methods are not freely available to the market, \na fact that makes their evaluation very difficult, if at \nall possible. \n The National Institute of Standards and Technology \n(NIST) compiled, in 1991, a comprehensive report on \nrisk management methods and tools. 47 The European \nCommission, recognizing the need for a homogenized and \nsoftware-supported risk management methodology for \nuse by European businesses and organizations, assigned, \nin 1993, a similar project to a group of companies. Part of \nthis project’s results were the creation of an inventory and \nthe evaluation of all available risk management methods at \nthe time. 48 In 2006 the European Network and Information \nSecurity Agency (ENISA) recently repeated the endeavor. \nEven though preliminary results toward an inventory of \nrisk management/risk assessment methods have been made \navailable, 49 the process is still ongoing. 50 Some of the \nmost widely used risk management methods 51 are briefly \ndescribed in the sequel. \n CRAMM (CCTA Risk Analysis and Management \nMethodology) 52 is a method developed by the British gov-\nernment organization CCTA (Central Communication and \nTelecommunication Agency), now renamed the Office \nof Government Commerce (OGC). CRAMM was first \n45 R. Baskerville, “Information systems security design methods: \nimplications for information systems development,” ACM Computing \nSurveys, Vol. 25, No. 4, pp. 375–414, 1993.\n46 R. Moses, “A European standard for risk analysis”, in Proceedings, \n10th World Conference on Computer Security, Audit and Control, \nElsevier Advanced Technology, pp. 
527–541, 1993.\n47 NIST, Description of automated risk management packages that \nNIST/NCSC risk management research laboratory have examined, \nMarch 1991, available at http://w2.eff.org/Privacy/Newin/New_nist/\nrisktool.txt, accessed April 28, 2008.\n48 INFOSEC 1992, Project S2014—Risk Analysis, Risk Analysis \nMethods Database, January 1993.\n49 ENISA Technical department (Section Risk Management), Risk \nManagement: Implementation principles and Inventories for Risk \nManagement/Risk Assessment methods and tools, available at www.enisa.\neuropa.eu/rmra/files/D1_Inventory_of_Methods_Risk_Management_\nFinal.pdf, June 2006, accessed April 28, 2008.\n50 www.enisa.europa.eu/rmra/rm_home_01.html, accessed April 28, \n2008.\n51 ENISA Technical department (Section Risk Management), Risk \nManagement: Implementation principles and Inventories for Risk \nManagement/Risk Assessment methods and tools, available at www.enisa.\neuropa.eu/rmra/files/D1_Inventory_of_Methods_Risk_Management_\nFinal.pdf, June 2006, accessed April 28, 2008.\n52 www.cramm.com, accessed April 28, 2008.\n" }, { "page_number": 650, "text": "Chapter | 35 Risk Management\n617\nreleased in 1985. At present CRAMM is the U.K. gov-\nernment’s preferred risk analysis method, but CRAMM \nis also used in many countries outside the U.K. CRAMM \nis especially appropriate for large organizations, such as \ngovernment bodies and industry. CRAMM provides a \nstaged and disciplined approach embracing both techni-\ncal and nontechnical aspects of security. To assess these \ncomponents, CRAMM is divided into three stages: asset \nidentification and valuation; threat and vulnerability \nassessment; and countermeasure selection and recom-\nmendation. CRAMM enables the reviewer to identify the \nphysical, software, data, and location assets that make up \nthe information system. Each of these assets can be val-\nued. Physical assets are valued in terms of their replace-\nment cost. Data and software assets are valued in terms of \nthe impact that would result if the information were to be \nunavailable, destroyed, disclosed, or modified. CRAMM \ncovers the full range of deliberate and accidental threats \nthat may affect information systems. This stage concludes \nby calculating the level of risk. CRAMM contains a very \nlarge countermeasure library consisting of over 3000 \ndetailed countermeasures organized into over 70 logical \ngroupings. The CRAMM software, developed by Insight \nConsulting, 53 uses the measures of risks determined dur-\ning the previous stage and compares them against the \nsecurity level (a threshold level associated with each \ncountermeasure) to identify whether the risks are suffi-\nciently great to justify the installation of a particular coun-\ntermeasure. CRAMM provides a series of help facilities, \nincluding backtracking, what-if scenarios, prioritization \nfunctions, and reporting tools, to assist with the imple-\nmentation of countermeasures and the active manage-\nment of the identified risks. CRAMM is ISO/IEC 17799, \nGramm-Leach-Bliley Act (GLBA), and Health Insurance \nPortability and Accountability Act (HIPAA) compliant. \n The methodological approach offered by EBIOS \n(Expression des Besoins et Identification des Objectifs de \nS é curit é ) 54 provides a global and consistent view of infor-\nmation systems security. It was first released in 1995. The \nmethod takes into account all technical entities and non-\ntechnical entities. 
It allows all personnel using the infor-\nmation system to be involved in security issues and offers \na dynamic approach that encourages interaction among \nthe organization’s various jobs and functions by examin-\ning the complete life cycle of the system. Promoted by the \nDCSSI (Direction Centrale de la S é curit é des Syst è mes d ’ \nInformation) of the French government and recognized by \nthe French administrations, EBIOS is also a reference in \nthe private sector and abroad. It is compliant with major \nIT security standards. The EBIOS approach consists of \nfive phases. Phase 1 deals with context analysis in terms \nof global business process dependency on the informa-\ntion system. Security needs analysis and threat analysis \nare conducted in Phases 2 and 3. Phases 4 and 5 yield an \nobjective diagnostic on risks. The necessary and sufficient \nsecurity objectives (and further security requirements) are \nthen stated, proof of coverage is furnished, and residual \nrisks made explicit. Local standard bases (e.g., German \nIT Grundschutz) are easily added on to its internal knowl-\nedge bases and catalogues of best practices. EBIOS \nis supported by a software tool developed by Central \nInformation Systems Security Division (France). The tool \nhelps the user to produce all risk analysis and manage-\nment steps according to the EBIOS method and allows all \nthe study results to be recorded and the required summary \ndocuments to be produced. EBIOS is compliant with ISO/\nIEC 27001, ISO/IEC 13335 (GMITS), ISO/IEC 15408 \n(Common Criteria), ISO/IEC 17799, and ISO/IEC 21827. \n The Information Security Forum’s (ISF) Standard of \nGood Practice 55 provides a set of high-level principles \nand objectives for information security together with \nassociated statements of good practice. The Standard of \nGood Practice is split into five distinct aspects, each of \nwhich covers a particular type of environment. These \nare security management; critical business applica-\ntions; computer installations; networks; and systems \ndevelopment. FIRM (Fundamental Information Risk \nManagement) is a detailed method for monitoring and \ncontrolling information risk at the enterprise level. It \nhas been developed as a practical approach to monitor-\ning the effectiveness of information security. As such, it \nenables information risk to be managed systematically \nacross enterprises of all sizes. It includes comprehensive \nimplementation guidelines, which explain how to gain \nsupport for the approach and get it up and running. The \nInformation Risk Scorecard is an integral part of FIRM. \nThe Scorecard is a form used to collect a range of import-\nant details about a particular information resource such as \nthe name of the owner, criticality, level of threat, business \nimpact, and vulnerability. The ISF’s Information Security \nStatus Survey is a comprehensive risk management tool \nthat evaluates a wide range of security measures used \nby organizations to control the business risks associated \nwith their IT-based information systems. 
SARA (Simple \n53 www.insight.co.uk/, accessed April 28, 2008.\n54 www.ssi.gouv.fr/en/confi dence/ebiospresentation.html, \naccessed \nApril 28, 2008.\n55 Information Security Forum, The standard of good practice for \ninformation security, 2007 (available at https://www.isfsecuritystan-\ndard.com/SOGP07/index.htm, accessed April 28, 2008).\n" }, { "page_number": 651, "text": "PART | V Storage Security\n618\nto Apply Risk Analysis) is a detailed method for ana-\nlyzing information risk in critical information systems. \nSPRINT (Simplified Process for Risk Identification) is \na relatively quick and easy-to-use method for assessing \nbusiness impact and for analyzing information risk in \nimportant but not critical information systems. The full \nSPRINT method is intended for application to impor-\ntant, but not critical, systems. It complements the SARA \nmethod, which is better suited to analyzing the risks asso-\nciated with critical business systems. SPRINT first helps \ndecide the level of risk associated with a system. After \nthe risks are fully understood, SPRINT helps determine \nhow to proceed and, if the SPRINT process continues, \nculminates in the production of an agreed plan of action \nfor keeping risks within acceptable limits. SPRINT can \nhelp identify the vulnerabilities of existing systems and \nthe safeguards needed to protect against them; and define \nthe security requirements for systems under development \nand the security measures needed to satisfy them. The \nmethod is compliant to ISO/IEC 17799. \n IT-Grundschutz 56 provides a method for an organiza-\ntion to establish an ISMS. It was first released in 1994. It \ncomprises both generic IT security recommendations for \nestablishing an applicable IT security process and detailed \ntechnical recommendations to achieve the necessary IT \nsecurity level for a specific domain. The IT security proc-\ness suggested by IT-Grundschutz consists of the following \nsteps: initialization of the process; definition of IT secu-\nrity goals and business environment; establishment of an \norganizational structure for IT security; provision of nec-\nessary resources; creation of the IT security concept; IT \nstructure analysis; assessment of protection requirements; \nmodeling; IT security check; supplementary security \nanalysis; implementation planning and fulfillment; main-\ntenance, monitoring, and improvement of the process; and \nIT-Grundschutz Certification (optional). The key approach \nin IT-Grundschutz is to provide a framework for IT secu-\nrity management, offering information for commonly \nused IT components (modules). IT-Grundschutz modules \ninclude lists of relevant threats and required countermeas-\nures in a relatively technical level. These elements can be \nexpanded, complemented, or adapted to the needs of an \norganization. IT-Grundschutz is supported by a software \ntool named Gstool that has been developed by the Federal \nOffice for Information Security (BSI). The method is \ncompliant with ISO/IEC 17799 and ISO/IEC 27001. \n MEHARI (M é thode Harmonis é e d’Analyse de Ris-\nques Informatiques) 57 is a method designed by security \nexperts of the CLUSIF (Club de la S é curit é Informatique \nFran ç ais) that replaced the earlier CLUSIF-sponsored \nMARION and MELISA methods. It was first released \nin 1996. It proposes an approach for defining risk reduc-\ntion measures suited to the organization objectives. \nMEHARI provides a risk assessment model and modu-\nlar components and processes. 
It enhances the ability \nto discover vulnerabilities through audit and to analyze \nrisk situations. MEHARI includes formulas facilitat-\ning threat identification and threat characterization and \noptimal selection of corrective actions. MEHARI allows \nfor providing accurate indications for building security \nplans, based on a complete list of vulnerability control \npoints and an accurate monitoring process in a continual \nimprovement cycle. It is compliant with ISO/IEC 17799 \nand ISO/IEC 13335. \n The OCTAVE (Operationally Critical Threat, Asset, \nand Vulnerability Evaluation) 58 method, developed by \nthe Software Engineering Institute of Carnegie-Mellon \nUniversity, defines a risk-based strategic assessment and \nplanning technique for security. It was first released in \n1999. OCTAVE is self-directed in the sense that a small \nteam of people from the operational (or business) units \nand the IT department work together to address the \nsecurity needs of the organization. The team draws on \nthe knowledge of many employees to define the cur-\nrent state of security, identify risks to critical assets, and \nset a security strategy. OCTAVE is different from typi-\ncal technology-focused assessments in that it focuses on \norganizational risk and strategic, practice-related issues, \nbalancing operational risk, security practices, and tech-\nnology. The OCTAVE method is driven by operational \nrisk and security practices. Technology is examined only \nin relation to security practices. OCTAVE-S is a varia-\ntion of the method tailored to the limited means and \nunique constraints typically found in small organizations \n(less than 100 people). OCTAVE Allegro is tailored \nfor organizations focused on information assets and a \nstreamlined approach. The Octave Automated Tool has \nbeen implemented by Advanced Technology Institute \n(ATI) to help users with the implementation of the \nOCTAVE method. \n Callio Secura 17799 59 is a product from Callio \nTechnologies. It was first released in 2001. It is a multi-\nuser Web application with database support that lets the \nuser implement and certify an ISMS and guides the user \nthrough each of the steps leading to ISO 27001 / 17799 \ncompliance and BS 7799-2 certification. Moreover, it \n56 www.bsi.de/english/gshb/index.htm, accessed April 28, 2008.\n57 https://www.clusif.asso.fr/en/production/mehari/, accessed April 28, \n2008.\n58 www.cert.org/octave/, accessed April 28, 2008.\n59 www.callio.com/secura.php, accessed April 28, 2008.\n" }, { "page_number": 652, "text": "Chapter | 35 Risk Management\n619\nprovides document management functionality as well \nas customization of the tool’s databases. It also allows \ncarrying out audits for other standards, such as COBIT, \nHIPAA, and Sarbanes-Oxley, by importing the user’s \nown questionnaires. Callio Secura is compliant with \nISO/IEC 17799 and ISO/IEC 27001. \n COBRA 60 is a standalone application for risk manage-\nment from C & A Systems Security. It is a questionnaire-\nbased Windows PC tool, using expert system principles and \na set of extensive knowledge bases. It has also embraced \nthe functionality to optionally deliver other security serv-\nices, such as checking compliance with the ISO 17799 \nsecurity standard or with an organization’s own security \npolicies. It can be used for identification of threats and \nvulnerabilities; it measures the degree of actual risk for \neach area or aspect of a system and directly links this to \nthe potential business impact. 
It offers detailed solutions \nand recommendations to reduce the risks and provides \nbusiness as well as technical reports. It is compliant with \nISO/IEC 17799. \n Allion’s product CounterMeasures 61 performs risk \nmanagement based on the US-NIST 800 series and OMB \nCircular A-130 USA standards. The user standardizes the \nevaluation criteria and, using a “ tailor-made ” assessment \nchecklist, the software provides objective evaluation cri-\nteria for determining security posture and/or compliance. \nCounterMeasures is available in both networked and \ndesktop configurations. It is compliant with the NIST \n800 series and OMB Circular A-130 USA standards. \n Proteus 62 is a product suite from InfoGov. It was first \nreleased in 1999. Through its components the user can \nperform gap analysis against standards such as ISO 17799 \nor create and manage an ISMS according to ISO 27001 \n(BS 7799-2). Proteus Enterprise is a fully integrated \nWeb-based Information Risk Management, Compliance \nand Security solution that is fully scalable. Using Proteus \nEnterprise, companies can perform any number of \nonline compliance audits against any standard and com-\npare between them. They can then assess how deficient \ncompliance security measures affect the company both \nfinancially and operationally by mapping them onto its \ncritical business processes. Proteus then identifies risks \nand mitigates those risks by formulating a work plan, \nmaintains a current and demonstrable compliance status \nto the regulators and senior management alike. The sys-\ntem works with the company’s existing infrastructure and \nuses RiskView to bridge the gap between the technical/\nregulatory community and senior management. Proteus is \na comprehensive system that includes online compliance \nand gap analysis, business impact, risk assessment, busi-\nness continuity, incident management, asset management, \norganization roles, policy repository, and action plans. Its \ncompliance engine supports any standard (international, \nindustry, and corporate specific) and is supplied with a \nchoice of comprehensive template questionnaires. The \nsystem is fully scalable and can size from a single user up \nto the largest of multinational organizations. The product \nmaintains a full audit trail. It can perform online audits \nfor both internal departments and external suppliers. It is \ncompliant with ISO/IEC 17799 and ISO/IEC 27001. \n RA2 art of risk 63 is the new risk assessment tool \nfrom AEXIS, the originators of the RA Software Tool. \nIt was first released in 2000. It is designed to help busi-\nnesses to develop an ISMS in compliance with ISO/IEC \n27001:2005 (previously BS 7799 Part 2:2002), and the \ncode of practice ISO/IEC 27002. It covers a number of \nsecurity processes that direct businesses toward design-\ning and implementing an ISMS. RA2 art of risk can be \ncustomized to meet the requirements of the organization. \nThis includes the assessment of assets, threats, and vul-\nnerabilities applicable to the organization, and the pos-\nsibilities to include security measures additional to the \nones in ISO/IEC 27002 in the assessment. It also includes \na set of editable questions that can be used to assess the \ncompliance with ISO/IEC 27002. RA2 Information \nCollection Device, a component that is distributed along \nwith the tool, can be installed anywhere in the organiza-\ntion as needed to collect and feed back information into \nthe risk assessment process. 
It is compliant with ISO/IEC \n17799 and ISO/IEC 27001. \n RiskWatch for Information Systems & ISO 17799 64 \nis the RiskWatch company’s solution for information sys-\ntem risk management. Other relevant products in the same \nsuite are RiskWatch for Financial Institutions, RiskWatch \nfor HIPAA Security, RiskWatch for Physical & Homeland \nSecurity, RiskWatch for University and School Security, \nand RiskWatch for NERC (North American Electric \nReliability Corporation) and C-TPAT-Supply Chain. The \nRiskWatch for Information Systems & ISO 17799 tool \nconducts automated risk analysis and vulnerability assess-\nments of information systems. All RiskWatch software is \nfully customizable by the user. It can be tailored to reflect \nany corporate or government policy, including incorpora-\ntion of unique standards, incident report data, penetration \n60 www.riskworld.net/method.htm, accessed April 28, 2008.\n61 www.countermeasures.com, accessed April 28, 2008.\n62 www.infogov.co.uk/proteus/, accessed April 28, 2008.\n63 www.aexis.de/RA2ToolPage.htm, accessed April 28, 2008.\n64 www.riskwatch.com/index.php?option\u0003com_content&task\n\u0003view&id\u000322&Itemid\u000334, accessed April 28, 2008.\n" }, { "page_number": 653, "text": "PART | V Storage Security\n620\ntest data, observation, and country-specific threat data. \nEvery product includes both information security as well \nas physical security. Project plans and a simple workflow \nmake it easy to create accurate and supportable risk assess-\nments. The tool includes security measures from the ISO \n17799 and USNIST 800-26 standards, with which it is \ncompliant. \n The SBA (Security by Analysis) method 65 is a con-\ncept that’s existed since the beginning of the 1980s. It is \nmore of a way of looking at analysis and security work in \ncomputerized businesses than a fully developed method. \nIt could be called the “ human model ” concerning risk \nand vulnerability analyses. The human model implies \na very strong confidence in knowledge among staff and \nindividuals within the analyzed business or organizations. \nIt is based on the fact that it is those who are working \nwith the everyday problems, regardless of position, who \nhave got the greatest possibilities to pinpoint the most \nimportant problems and to suggest the solutions. SBA is \nsupported by three software tools. Every tool has its own \nspecial method, but they are based on the same concept: \ngathering a group of people who represent the necessary \nbreadth of knowledge. SBA Check is primarily a tool \nfor anyone working with or responsible for information \nsecurity issues. The role of analysis leader is central to \nthe use of SBA Check. The analysis leader is in charge \nof ensuring that the analysis participants ’ knowledge of \nthe operation is brought to bear during the analysis proc-\ness in a way that is relevant, so that the description of \nthe current situation and opportunities for improvement \nretain their consistent quality. SBA Scenario is a tool that \nhelps evaluate business risks methodically through quan-\ntitative risk analysis. The tool also helps evaluate which \nactions are correct and financially motivated through \nrisk management. SBA Project is an IT support tool and \na method that helps identify conceivable problems in a \nproject as well as providing suggestions for conceivable \nmeasures to deal with those problems. 
The analysis par-\nticipants ’ views and knowledge are used as a basis for \nproviding a good picture of the risk in the project. \n 4. RISK MANAGEMENT LAWS \nAND REGULATIONS \n Many nations have adopted laws and regulations contain-\ning clauses that, directly or indirectly, pertain to aspects \nof information systems risk management. Similarly, a \nlarge number of international laws and regulations exist. \nIn the following, a brief description of such documents \nwith an international scope, directly relevant to informa-\ntion systems risk management, 66 is given. \n The “ Regulation (EC) No 45/2001 of the European \nParliament and of the Council of 18 December 2000 on \nthe protection of individuals with regard to the process-\ning of personal data by the Community institutions and \nbodies and on the free movement of such data ” 67 requires \nthat any personal data processing activity by Community \ninstitutions undergoes a prior risk analysis to determine \nthe privacy implications of the activity and to deter-\nmine the appropriate legal, technical, and organizational \nmeasures to protect such activities. It also stipulates that \nsuch activity is effectively protected by measures, which \nmust be state of the art, keeping into account the sensi-\ntivity and privacy implications of the activity. When a \nthird party is charged with the processing task, its activi-\nties are governed by suitable and enforced agreements. \nFurthermore, the regulation requires the European \nUnion’s (EU) institutions and bodies to take similar pre-\ncautions with regard to their telecommunications infra-\nstructure, and to properly inform the users of any specific \nrisks of security breaches. 68 \n The European Commission’s Directive on Data Pro-\ntection went into effect in October 1998 and prohibits \nthe transfer of personal data to non-EU nations that do \nnot meet the European “ adequacy ” standard for privacy \nprotection. The United States takes a different approach \nto privacy from that taken by the EU; it uses a sectoral \napproach that relies on a mix of legislation, regulation, \nand self-regulation. The EU, however, relies on compre-\nhensive legislation that, for example, requires creation of \ngovernment data protection agencies, registration of data \nbases with those agencies, and in some instances prior \napproval before personal data processing may begin. The \nSafe Harbor Privacy Principles 69 aim at bridging this gap \nby providing that an EU-based entity self-certifies its \ncompliance with them. \n The “ Commission Decision of 15 June 2001 on \nstandard contractual clauses for the transfer of personal \n65 www.thesbamethod.com, accessed April 28, 2008.\n66 J. Dumortier and H. Graux, “Risk management/risk assessment in \nEuropean regulation, international guidelines and codes of practice,” \nENISA, June 2007 (available at www.enisa.europa.eu/rmra/fi les/rmra_\nregulation.pdf).\n67 http://europa.eu.int/smartapi/cgi/sga_doc?smartapi!celexapi!prod!\nCELEXnumdoc&lg\u0003EN&numdoc\u000332001R0045&model\u0003guichett, \naccessed April 29, 2008.\n68 J. Dumortier and H. 
Graux, “Risk management/risk assessment in \nEuropean regulation, international guidelines and codes of practice,” \nENISA, June 2007 (available at www.enisa.europa.eu/rmra/fi les/rmra_\nregulation.pdf).\n69 www.export.gov/safeharbor/SH_Documents.asp, accessed April 29, \n2008.\n" }, { "page_number": 654, "text": "Chapter | 35 Risk Management\n621\ndata to third countries, under Directive 95/46/EC ” and \nthe “ Commission Decision of 27 December 2004 amend-\ning Decision 2001/497/EC as regards the introduction of \nan alternative set of standard contractual clauses for the \ntransfer of personal data to third countries ” 70 provide a \nset of voluntary model clauses that can be used to export \npersonal data from a data controller who is subject to EU \ndata protection rules to a data processor outside the EU \nwho is not subject to these rules or to a similar set of ade-\nquate rules. Upon acceptance of the model clauses, the \ndata controller must warrant that the appropriate legal, \ntechnical, and organizational measures to ensure the \nprotection of the personal data are taken. Furthermore, \nthe data processor must agree to permit auditing of its \nsecurity practices to ensure compliance with applicable \nEuropean data protection rules. 71 \n The Health Insurance Portability and Accountability \nAct of 1996 72 is a U.S. law with regard to health insur-\nance coverage, electronic health, and requirements for \nthe security and privacy of health data. Title II of HIPAA, \nknown as the Administrative Simplification (AS) provi-\nsions, requires the establishment of national standards for \nelectronic health care transactions and national identifiers \nfor providers, health-insurance plans, and employers. Per \nthe requirements of Title II, the Department of Health \nand Human Services has promulgated five rules regard-\ning Administrative Simplification: the Privacy Rule, the \nTransactions and Code Sets Rule, the Security Rule, \nthe Unique Identifiers Rule, and the Enforcement Rule. \nThe standards are meant to improve the efficiency and \neffectiveness of the U.S. health care system by encourag-\ning the widespread use of electronic data interchange. \n The “ Directive 2002/58/EC of the European Parlia-\nment and of the Council of 12 July 2002 concerning the \nprocessing of personal data and the protection of privacy \nin the electronic communications sector (Directive on pri-\nvacy and electronic communications) ” 73 requires that any \nprovider of publicly available electronic communications \nservices takes the appropriate legal, technical and organi-\nzational measures to ensure the security of its services; \ninforms his subscribers of any particular risks of security \nbreaches; and takes the necessary measures to prevent \nsuch breaches, and indicates the likely costs of security \nbreaches to the subscribers. 
74 \n The “ Directive 2006/24/EC of the European Parlia-\nment and of the Council of 15 March 2006 on the reten-\ntion of data generated or processed in connection with \nthe provision of publicly available electronic communi-\ncations services or of public communications networks \nand amending Directive 2002/58/EC ” 75 requires the \naffected providers of publicly accessible electronic tele-\ncommunications networks to retain certain communica-\ntions data to be specified in their national regulations, for \na specific amount of time, under secured circumstances \nin compliance with applicable privacy regulations; to \nprovide access to this data to competent national authori-\nties; to ensure data quality and security through appro-\npriate technical and organizational measures, shielding \nit from access by unauthorized individuals; to ensure its \ndestruction when it is no longer required; and to ensure \nthat stored data can be promptly delivered on request \nfrom the competent authorities. 76 \n The “ Regulation (EC) No 1907/2006 of the European \nParliament and of the Council of 18 December 2006 \nconcerning the Registration, Evaluation, Authorisation \nand Restriction of Chemicals (REACH), establish-\ning a European Chemicals Agency, amending Directive \n1999/45/EC and repealing Council Regulation (EEC) No \n793/93 and Commission Regulation (EC) No 1488/94 as \nwell as Council Directive 76/769/EEC and Commission \nDirectives 91/155/EEC, 93/67/EEC, 93/105/EC and \n2000/21/EC ” 77 implants risk management obligations by \nimposing a reporting obligation on producers and import-\ners of articles covered by the regulation, with regard \nto the qualities of certain chemical substances, which \nincludes a risk assessment and obligation to examine \nhow such risks can be managed. This information is to \nbe registered in a central database. It also stipulates that \na Committee for Risk Assessment within the European \nChemicals Agency established by the Regulation is estab-\nlished and requires that the information provided is kept \nup to date with regard to potential risks to human health \n70 http://ec.europa.eu/justice_home/fsj/privacy/modelcontracts/index_\nen.htm, accessed April 29, 2008.\n71 J. Dumortier and H. Graux, “Risk management/risk assessment in \nEuropean regulation, international guidelines and codes of practice,” \nENISA, June 2007 (available at www.enisa.europa.eu/rmra/fi les/rmra_\nregulation.pdf).\n72 www.legalarchiver.org/hipaa.htm, accessed April 29, 2008.\n73 http://europa.eu.int/smartapi/cgi/sga_doc?smartapi!celexapi!prod\n!CELEXnumdoc&lg\u0003en&numdoc\u000332002L0058&model\u0003guichett, \naccessed April 29, 2008.\n74 J. Dumortier and H. Graux, “Risk management/risk assessment in \nEuropean regulation, international guidelines and codes of practice,” \nENISA, June 2007 (available at www.enisa.europa.eu/rmra/fi les/rmra_\nregulation.pdf).\n75 http://eurlex.europa.eu/LexUriServ/LexUriServ.do?uri\u0003CEL\nEX:32006L0024:EN:NOT, accessed April 29, 2008.\n76 J. Dumortier and H. Graux, “Risk management/risk assessment in \nEuropean regulation, international guidelines and codes of practice,” \nENISA, June 2007 (available at www.enisa.europa.eu/rmra/fi les/rmra_\nregulation.pdf).\n77 http://eurlex.europa.eu/LexUriServ/LexUriServ.do?uri\u0003OJ:\nL:2006:396:0001:0849:EN:PDF, accessed April 29, 2008.\n" }, { "page_number": 655, "text": "PART | V Storage Security\n622\nor the environment, and that such risks are adequately \nmanaged. 
78 \n The “ Council Framework Decision 2005/222/JHA \nof 24 February 2005 on attacks against information sys-\ntems ” 79 contains the conditions under which legal liability \ncan be imposed on legal entities for conduct of certain nat-\nural persons of authority within the legal entity. Thus, the \nFramework decision requires that the conduct of such fig-\nures within an organization is adequately monitored, also \nbecause the decision states that a legal entity can be held \nliable for acts of omission in this regard. Additionally, the \ndecision defines a series of criteria under which jurisdic-\ntional competence can be established. These include the \ncompetence of a jurisdiction when a criminal act is con-\nducted against an information system within its borders. 80 \n The “ OECD Guidelines for the Security of \nInformation Systems and Networks: Towards a Culture \nof Security ” (25 July 2002) 81 aim to promote a culture of \nsecurity; to raise awareness about the risk to information \nsystems and networks (including the policies, practices, \nmeasures, and procedures available to address those risks \nand the need for their adoption and implementation); \nto foster greater confidence in information systems and \nnetworks and the way in which they are provided and \nused; to create a general frame of reference; to promote \ncooperation and information sharing; and to promote the \nconsideration of security as an important objective. The \nguidelines state nine basic principles underpinning risk \nmanagement and information security practices. No part \nof the text is legally binding, but noncompliance with any \nof the principles is indicative of a breach of risk manage-\nment good practices that can potentially incur liability. 82 \n The “ Basel Committee on Banking Supervision —\n Risk Management Principles for Electronic Banking ” 83 \nidentifies 14 Risk Management Principles for Electronic \nBanking to help banking institutions expand their existing \nrisk oversight policies and processes to cover their eban-\nking activities. The Risk Management Principles fall into \nthree broad, and often overlapping, categories of issues \nthat are grouped to provide clarity: board and management \noversight; security controls; and legal and reputational risk \nmanagement. The Risk Management Principles are not \nput forth as absolute requirements or even “ best practice, ” \nnor do they attempt to set specific technical solutions or \nstandards relating to ebanking. Consequently, the Risk \nManagement Principles and sound practices are expected \nto be used as tools by national supervisors and to be \nimplemented with adaptations to reflect specific national \nrequirements and individual risk profiles where necessary. \n The “ Commission Recommendation 87/598/EEC of \n8 December 1987, concerning a European code of con-\nduct relating to electronic payments ” 84 provides a \nnumber of general nonbinding recommendations, includ-\ning an obligation to ensure that privacy is respected and \nthat the system is transparent with regard to potential \nsecurity or confidentiality risks, which must obviously \nbe mitigated by all reasonable means. 85 \n The “ Public Company Accounting Reform and \nInvestor Protection Act of 30 July 2002 ” (commonly \nreferred to as Sarbanes-Oxley and often abbreviated to \n SOX or Sarbox ), 86 even though indirectly relevant to risk \nmanagement, is discussed here due to its importance. The \nAct is a U.S. 
federal law passed in response to a number \nof major corporate and accounting scandals including \nthose affecting Enron, Tyco International, and WorldCom \n(now MCI). These scandals resulted in a decline of pub-\nlic trust in accounting and reporting practices. The legis-\nlation is wide ranging and establishes new or enhanced \nstandards for all U.S. public company boards, manage-\nment, and public accounting firms. Its provisions range \nfrom additional Corporate Board responsibilities to crimi-\nnal penalties, and require the Securities and Exchange \nCommission (SEC) to implement rulings on requirements \nto comply with the new law. The first and most important \npart of the Act establishes a new quasi-public agency, \nthe Public Company Accounting Oversight Board \n( www.pcaobus.org ), which is charged with overseeing, \nregulating, inspecting, and disciplining accounting firms \nin their roles as auditors of public companies. The Act \n78 J. Dumortier and H. Graux, “Risk management/risk assessment in \nEuropean regulation, international guidelines and codes of practice,” \nENISA, June 2007 (available at www.enisa.europa.eu/rmra/fi les/rmra_\nregulation.pdf).\n79 http://eurlex.europa.eu/LexUriServ/LexUriServ.do?uri \u0003 CELEX:\n32005F0222:EN:NOT, accessed April 29, 2008.\n80 J. Dumortier and H. Graux, “Risk management/risk assessment in \nEuropean regulation, international guidelines and codes of practice,” \nENISA, June 2007 (available at www.enisa.europa.eu/rmra/fi les/rmra_\nregulation.pdf).\n81 www.oecd.org/dataoecd/16/22/15582260.pdf, accessed April 29, \n2008.\n82 J. Dumortier and H. Graux, “Risk management/risk assessment in \nEuropean regulation, international guidelines and codes of practice,” \nENISA, June 2007 (available at www.enisa.europa.eu/rmra/fi les/rmra_\nregulation.pdf).\n83 www.bis.org/publ/bcbs98.pdf, accessed April 29, 2008.\n84 http://eurlex.europa.eu/LexUriServ/LexUriServ.do?uri\u0003CEL\nEX:31987H0598:EN:HTML, accessed April 29, 2008.\n85 J. Dumortier and H. Graux, “Risk management/risk assessment in \nEuropean regulation, international guidelines and codes of practice,” \nENISA, June 2007 (available at www.enisa.europa.eu/rmra/fi les/rmra_\nregulation.pdf).\n86 http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname\u0003107_\ncong_bills&docid\u0003f:h3763enr.tst.pdf, accessed April 29, 2008.\n" }, { "page_number": 656, "text": "Chapter | 35 Risk Management\n623\nalso covers issues such as auditor independence, corpo-\nrate governance and enhanced financial disclosure. \n The “ Office of the Comptroller of the Currency \n(OCC) – Electronic Banking Guidance ” 87 is fairly high \nlevel and should be indicative of the subject matter to be \nanalyzed and assessed by banking institutions, rather than \nserving as a yardstick to identify actual problems. 88 \n The “ Payment Card Industry (PCI) Security Standards \nCouncil — Data Security Standard (DSS) ” 89 provides cen-\ntral guidance allowing financial service providers relying \non payment cards to implement the necessary policies, \nprocedures, and infrastructure to adequately safeguard \ntheir customer account data. PCI DSS has no formal \nbinding legal power. Nevertheless, considering its ori-\ngins and the key participants, it holds significant moral \nauthority, and noncompliance with the PCI DSS by a \npayment card service provider may be indicative of inad-\nequate risk management practices. 
90 \n The “ Directive 2002/65/EC of the European Parlia-\nment and of the Council of 23 September 2002 con-\ncerning the distance marketing of consumer financial \nservices and amending Council Directive 90/619/EEC and \nDirectives 97/7/EC and 98/27/EC (the ‘ Financial Distance \nMarketing Directive ’ ) ” 91 requires that, as a part of the \nminimum information to be provided to a consumer prior \nto concluding a distance financial services contract, the \nconsumer must be clearly and comprehensibly informed \nof any specific risks related to the service concerned. 92 \n 5. RISK MANAGEMENT STANDARDS \n Various national and international, de jure, and de facto \nstandards exist that are related, directly or indirectly, to \ninformation systems risk management. In the following, \nwe briefly describe the most important international \nstandards that are directly related to risk management. \n The “ ISO/IEC 13335-1:2004 — Information technol-\nogy — Security techniques — Management of information \nand communications technology security — Part 1: Con-\ncepts and models for information and communications \ntechnology security management ” standard 93 defines and \ndescribes the concepts associated with the management \nof IT security. It also identifies the relationships between \nthe management of IT security and management of IT in \ngeneral. Further, it presents several models, which can be \nused to explain IT security, and provides general guidance \non the management of IT security. It is a commonly used \ncode of practice and serves as a resource for the imple-\nmentation of security management practices and as a \nyardstick for auditing such practices. 94 \n The \n “ BS \n25999-1:2006 — Business \ncontinuity \nmanagement — Part 1: Code of Practice ” standard 95 is a \ncode of practice that takes the form of guidance and rec-\nommendations. It establishes the process, principles, and \nterminology of business continuity management (BCM), \nproviding a basis for understanding, developing, and \nimplementing business continuity within an organization \nand to provide confidence in business-to-business and \nbusiness-to-customer dealings. In addition, it provides a \ncomprehensive set of security measures based on BCM \nbest practice and covers the whole BCM life cycle. 96 \n The \n “ ISO/IEC \nTR \n15443-1:2005 — Information \ntechnology — Security techniques — A framework for IT \nsecurity assurance ” 97 technical report introduces, relates, \nand categorizes security assurance methods to a generic \nlife-cycle model in a manner enabling an increased level \nof confidence to be obtained in the security functional-\nity of a deliverable. It allows security professionals to \ndetermine a suitable methodology for assessing a security \nservice, product, or environmental factor (a deliverable). \nFollowing this technical report, it can be determined \nwhich level of security assurance a deliverable is intended \n87 www.occ.treas.gov/netbank/ebguide.htm, accessed April 29, 2008.\n88 J. Dumortier and H. Graux, “Risk management/risk assessment in \nEuropean regulation, international guidelines and codes of practice,” \nENISA, June 2007 (available at www.enisa.europa.eu/rmra/fi les/rmra_\nregulation.pdf).\n89 https://www.pcisecuritystandards.org/tech/download_the_pci_dss.\nhtm, accessed April 29, 2008.\n90 J. Dumortier and H. 
Graux, “Risk management/risk assessment in \nEuropean regulation, international guidelines and codes of practice,” \nENISA, June 2007 (available at www.enisa.europa.eu/rmra/fi les/rmra_\nregulation.pdf).\n91 http://eurlex.europa.eu/LexUriServ/LexUriServ.do?uri\u0003CELEX:\n32002L0065:EN:NOT, accessed April 29, 2008.\n92 J. Dumortier and H. Graux, “Risk management/risk assessment in \nEuropean regulation, international guidelines and codes of practice,” \nENISA, June 2007 (available at www.enisa.europa.eu/rmra/fi les/rmra_\nregulation.pdf).\n93 British Standards Institute, “Information technology—Security \ntechniques—Management of information and communications tech-\nnology security—Part 1: Concepts and models for information and \ncommunications technology security management,” BS ISO/IEC \n13335-1:2004.\n94 J. Dumortier and H. Graux, “Risk management/risk assessment in \nEuropean regulation, international guidelines and codes of practice,” \nENISA, June 2007 (available at www.enisa.europa.eu/rmra/fi les/rmra_\nregulation.pdf).\n95 British Standards Institute, “Business continuity management—\nCode of practice,” BS 25999-1:2006.\n96 J. Dumortier and H. Graux, “Risk management/risk assessment in \nEuropean regulation, international guidelines and codes of practice,” \nENISA, June 2007 (available at www.enisa.europa.eu/rmra/fi les/rmra_\nregulation.pdf).\n97 ISO/IEC, “Information technology—security techniques—a frame-\nwork for IT security assurance—overview and framework,” ISO/IEC \nTR 15443-1:2005.\n" }, { "page_number": 657, "text": "PART | V Storage Security\n624\nto meet and whether this threshold is actually met by the \ndeliverable. \n The “ ISO/IEC 17799:2005 — Information technology —\n Security techniques — Code of practice for information \nsecurity management ” standard 98 establishes guidelines \nand general principles for initiating, implementing, main-\ntaining, and improving information security management \nin an organization. It provides general guidance on the \ncommonly accepted goals of information security man-\nagement. The standard contains best practices of security \nmeasures in many areas of information security manage-\nment. These are intended to be implemented to meet the \nrequirements identified by a risk assessment. The standard \nis intended as a common basis and practical guideline for \ndeveloping organizational security standards and effective \nsecurity management practices, and to help build confi-\ndence in interorganizational activities. \n The “ ISO/IEC 18028:2006 — Information techno-\nlogy — Security \ntechniques — IT \nnetwork \nsecurity ” \nstandard 99 extends the security management guidelines \nprovided in ISO/IEC TR 13335 and ISO/IEC 17799 by \ndetailing the specific operations and mechanisms needed \nto implement network security measures in a wider range \nof network environments, providing a bridge between \ngeneral IT security management issues and network \nsecurity technical implementations. The standard pro-\nvides detailed guidance on the security aspects of the \nmanagement, operation and use of IT networks, and their \ninterconnections. It defines and describes the concepts \nassociated with, and provides management guidance on, \nnetwork security, including on how to identify and ana-\nlyze the communications-related factors to be taken into \naccount to establish network security requirements, with \nan introduction to the possible areas where security mea-\nsures can be applied and the specific technical areas. 
\n The “ ISO/IEC 27001:2005 — Information technol-\nogy — Security techniques — Information security man-\nagement systems — Requirements ” standard 100 is designed \nto ensure the selection of adequate and proportionate \nsecurity measures that protect information assets and give \nconfidence to interested parties. The standard covers all \ntypes of organizations (e.g., commercial enterprises, gov-\nernment agencies, not-for-profit organizations) and speci-\nfies the requirements for establishing, implementing, \noperating, monitoring, reviewing, maintaining, and \nimproving a documented ISMS within the context of the \norganization’s overall business risks. Further, it specifies \nrequirements for the implementation of security measures \ncustomized to the needs of individual organizations or \nparts thereof. Its application in practice is often combined \nwith related standards, such as BS 7799-3:2006 which \nprovides additional guidance to support the requirements \ngiven in ISO/IEC 27001:2005. \n The “ BS 7799-3:2006 — Information security manage-\nment systems — Guidelines for information security risk \nmanagement ” standard 101 gives guidance to support the \nrequirements given in BS ISO/IEC 27001:2005 regard-\ning all aspects of an ISMS risk management cycle and is \ntherefore typically applied in conjunction with this stand-\nard in risk assessment practices. This includes assessing \nand evaluating the risks, implementing security measures \nto treat the risks, monitoring and reviewing the risks, and \nmaintaining and improving the system of risk treatment. \nThe focus of this standard is effective information secu-\nrity through an ongoing program of risk management \nactivities. This focus is targeted at information security in \nthe context of an organization’s business risks. \n The “ ISO/IEC TR 18044:2004 — Information technol-\nogy — Security techniques — Information security incident \nmanagement ” standard 102 provides advice and guidance \non information security incident management for infor-\nmation security managers, and information system, serv-\nice, and network managers. It is a high-level resource \nintroducing basic concepts and considerations in the field \nof incident response. As such, it is mostly useful as a cat-\nalyst to awareness raising initiatives in this regard. \n The “ ISF Standard of Good Practice ” standard 103 is a \ncommonly quoted source of good practices and serves as \na resource for the implementation of information security \npolicies and as a yardstick for auditing such systems and/\nor the surrounding practices. 104 The standard covers six \ndistinct aspects of information security, each of which \nrelates to a particular type of environment. The standard \nfocuses on how information security supports an organi-\nzation’s key business processes. 
\n98 British Standards Institute, “Information technology—Security \ntechniques—Code of practice for information security management,” \nBS ISO/IEC 17799:2005.\n99 ISO/IEC, “Information technology—security techniques—IT net-\nwork security—part 1: network security management,” ISO/IEC TR \n18028:2006.\n100 ISO/IEC, “Information security management—specifi cation with \nguidance for use,” ISO 27001.\n101 British Standards Institute, “ISMSs—Part 3: Guidelines for infor-\nmation security risk management,” BS 7799-3:2006.\n102 British Standards Institute, “Information technology—Security \ntechniques—Information security incident management,” BS ISO/IEC \nTR 18044:2004.\n103 Information Security Forum, The standard of good practice for \ninformation security, 2007 (available at https://www.isfsecuritystan-\ndard.com/SOGP07/index.htm, accessed April 28, 2008).\n104 J. Dumortier and H. Graux, “Risk management/risk assessment in \nEuropean regulation, international guidelines and codes of practice,” \nENISA, June 2007 (available at www.enisa.europa.eu/rmra/fi les/rmra_\nregulation.pdf).\n" }, { "page_number": 658, "text": "Chapter | 35 Risk Management\n625\n The “ ISO/TR 13569:2005 — Financial services —\n Information security guidelines ” standard 105 provides \nguidelines on the development of an information security \nprogram for institutions in the financial services industry. \nIt includes discussion of the policies, organization and \nthe structural, legal and regulatory components of such a \nprogram. Considerations for the selection and implemen-\ntation of security measures, and the elements required to \nmanage information security risk within a modern finan-\ncial services institution are discussed. Recommendations \nare given that are based on consideration of the institu-\ntion’s business environment, practices, and procedures. \nIncluded in this guidance is a discussion of legal and \nregulatory compliance issues, which should be consid-\nered in the design and implementation of the program. \n The U.S. General Accounting Office “ Information \nsecurity risk assessment: practices of leading organiza-\ntions ” guide 106 is intended to help federal managers imple-\nment an ongoing information security risk assessment \nprocess by providing examples, or case studies, of practi-\ncal risk assessment procedures that have been successfully \nadopted by four organizations known for their efforts to \nimplement good risk assessment practices. More impor-\ntant, it identifies, based on the case studies, factors that are \nimportant to the success of any risk assessment program, \nregardless of the specific methodology employed. \n The U.S. NIST SP 800-30 “ Risk management guide \nfor information technology systems ” 107 developed by \nNIST in 2002, provides a common foundation for expe-\nrienced and inexperienced, technical, and nontechnical \npersonnel who support or use the risk management proc-\ness for their IT systems. Its guidelines are for use by fed-\neral organizations which process sensitive information \nand are consistent with the requirements of OMB Circular \nA-130, Appendix III. The guidelines may also be used by \nnongovernmental organizations on a voluntary basis, even \nthough they are not mandatory and binding standards. \n Finally, the U.S. 
NIST SP 800-39 “ Managing Risk \nfrom Information Systems: An Organizational Perspective ” \nstandard 108 provides guidelines for managing risk to organi-\nzational operations, organizational assets, individuals, other \norganizations, and the nation resulting from the operation \nand use of information systems. It provides a structured yet \nflexible approach for managing that portion of risk result-\ning from the incorporation of information systems into the \nmission and business processes of organizations. \n 6. SUMMARY \n The information systems risk management methodology \nwas developed with an eye to guiding the design and the \nmanagement of security of an information system within \nthe framework of an organization. It aims at analyzing and \nassessing the factors that affect risk, to subsequently treat \nthe risk, and to continuously monitor and review the secu-\nrity plan. Central concepts of the methodology are those of \nthe threat, the vulnerability, the asset, the impact, and the \nrisk. The operational relationship of these concepts mate-\nrializes when a threat exploits one or more vulnerabilities \nto harm assets, an event that will impact the organization. \nOnce the risks are identified and assessed, they must be \ntreated, that is, transferred, avoided, or accepted. Treating \nthe risks is done on the basis of a carefully designed secu-\nrity plan, which must be continuously monitored, reviewed, \nand amended as necessary. Many methods implementing \nthe whole or parts of the risk management methodology \nhave been developed. Even though most of them follow \nclosely the methodology as described in pertinent inter-\nnational standards, they differ considerably in both their \nunderlying philosophy and in their specific steps. The risk \nmanagement methodology has been and is being applied \ninternationally with considerable success and enjoys uni-\nversal acceptance. However, it does suffer several disad-\nvantages that should be seriously considered in the process \nof applying it. Particular attention must be paid to the sub-\njectivity of its estimates, which is often obscured by the \nformality of the underlying probabilistic models and by the \nsystematic nature of most of the risk management meth-\nods. Subjectivity in applying the methodology is unavoid-\nable and should be accepted and consciously managed. A \nnumber of international laws and regulations contain provi-\nsions for information system risk management, in addition \nto national provisions. The risk management methodology \nis standardized by international organizations. \n105 ISO, Financial services-Information security guidelines, ISO/TR \n13569:2005.\n106 U.S. General Accounting Offi ce, Information security risk assess-\nment: practices of leading organizations, 1999.\n107 G. Stoneburner, A. Goguen and A. Feringa, Risk Management \nguide for information technology systems, National Institute of \nStandards and Technology, Special Publication SP 800-30, 2002.\n108 R. Ross, S. Katzke, A. Johnson, M. Swanson and G. 
Stoneburner, \nManaging Risk from Information Systems: An Organizational \nPerspective, US NIST SP 800-39, 2008 (available at http://csrc.nist.gov/\npublications/PubsDrafts.html#SP-800-39, accessed April 29, 2008).\n" }, { "page_number": 659, "text": "This page intentionally left blank\n" }, { "page_number": 660, "text": " Physical Security \n Part VI \n CHAPTER 36 Physical Security Essentials \n William Stallings \n CHAPTER 37 Biometrics \n Luther Martin \n CHAPTER 38 Homeland Security \n Rahul Bhaskar and Bhushan Kapoor \n CHAPTER 39 Information Warfare \n Jan Eloff and Anna Granova \n" }, { "page_number": 661, "text": "This page intentionally left blank\n" }, { "page_number": 662, "text": "629\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Physical Security Essentials \n William Stallings \n Independent consultant \n Chapter 36 \n Platt 1 distinguishes three elements of information system \n(IS) security: \n ● Logical security. Protects computer-based data from \nsoftware-based and communication-based threats. \n ● Physical security. Also called infrastructure security . \nProtects the information systems that house data \nand the people who use, operate, and maintain the \nsystems. Physical security must also prevent any type \nof physical access or intrusion that can compromise \nlogical security. \n ● Premises security. Also known as corporate or facilities \nsecurity . Protects the people and property within \nan entire area, facility, or building(s) and is usually \nrequired by laws, regulations, and fiduciary obligations. \nPremises security provides perimeter security, access \ncontrol, smoke and fire detection, fire suppression, \nsome environmental protection, and usually \nsurveillance systems, alarms, and guards. \n This chapter is concerned with physical security and \nwith some overlapping areas of premises security. We \nbegin by looking at physical security threats and then \nconsider physical security prevention measures. \n 1. OVERVIEW \n For information systems, the role of physical security is \nto protect the physical assets that support the storage and \nprocessing of information. Physical security involves \ntwo complementary requirements. First, physical secu-\nrity must prevent damage to the physical infrastructure \nthat sustains the information system. In broad terms, that \ninfrastructure includes the following: \n ● Information system hardware . Including \ndata processing and storage equipment, \ntransmission and networking facilities, and offline \nstorage media. We can include in this category \nsupporting documentation. \n ● Physical facility . The buildings and other structures \nhousing the system and network components. \n ● Supporting facilities. These facilities underpin the \noperation of the information system. This category \nincludes electrical power, communication services, \nand environmental controls (heat, humidity, etc.). \n ● Personnel. Humans involved in the control, mainte-\nnance, and use of the information systems. \n Second, physical security must prevent misuse of the \nphysical infrastructure that leads to the misuse or damage \nof the protected information. The misuse of the physical \ninfrastructure can be accidental or malicious. It includes \nvandalism, theft of equipment, theft by copying, theft of \nservices, and unauthorized entry. 
\n Figure 36.1 , based on Bosworth and Kabay 2 , sug-\ngests the overall context in which physical security con-\ncerns arise. The central concern is the information assets \nof an organization. These information assets provide \nvalue to the organization that possesses them, as indi-\ncated by the upper four items in the figure. In turn, the \nphysical infrastructure is essential to providing for the \nstorage and processing of these assets. The lower four \nitems in the figure are the concern of physical security. \nNot shown is the role of logical security, which consists \nof software- and protocol-based measures for ensuring \ndata integrity, confidentiality, and so forth. \n The role of physical security is affected by the oper-\nating location of the information system, which can be \ncharacterized as static, mobile, or portable. Our concern \nin this chapter is primarily with static systems, which are \ninstalled at fixed locations. A mobile system is installed \nin a vehicle, which serves the function of a structure for \n 1 F. Platt, “ Physical threats to the information infrastructure, ” in \nS. Bosworth, and M. Kabay, (eds.), Computer Security Handbook, \nWiley, 2002. \n 2 S. Bosworth and M. Kabay (eds.), Computer Security Handbook, \nWiley, 2002. \n" }, { "page_number": 663, "text": "PART | VI Physical Security\n630\nthe system. Portable systems have no single installation \npoint but may operate in a variety of locations, including \nbuildings, vehicles, or in the open. The nature of the sys-\ntem’s installation determines the nature and severity of the \nthreats of various types, including fire, roof leaks, unau-\nthorized access, and so forth. \n 2. PHYSICAL SECURITY THREATS \n In this section, we first look at the types of physical situa-\ntions and occurrences that can constitute a threat to infor-\nmation systems. There are a number of ways in which \nsuch threats can be categorized. It is important to under-\nstand the spectrum of threats to information systems so \nthat responsible administrators can ensure that prevention \nmeasures are comprehensive. We organize the threats into \nthe following categories: \n ● Environmental threats \n ● Technical threats \n ● Human-caused threats \n We begin with a discussion of natural disasters, which \nare a prime, but not the only, source of environmental \nthreats. Then we look specifically at environmental \nthreats, followed by technical and human-caused threats. \n Natural Disasters \n Natural disasters are the source of a wide range of envi-\nronmental threats to datacenters, other information \nprocessing facilities, and their personnel. It is possible to \nassess the risk of various types of natural disasters and \ntake suitable precautions so that catastrophic loss from \nnatural disaster is prevented. \n Table 36.1 lists six categories of natural disasters, the \ntypical warning time for each event, whether or not per-\nsonnel evacuation is indicated or possible, and the typical \nduration of each event. We comment briefly on the poten-\ntial consequences of each type of disaster. \n A tornado can generate winds that exceed hurri-\ncane strength in a narrow band along the tornado’s path. \nThere is substantial potential for structural damage, roof \ndamage, and loss of outside equipment. There may be \ndamage from wind and flying debris. Off site, a tornado \nmay cause a temporary loss of local utility and commu-\nnications. Offsite damage is typically followed by quick \nrestoration of services. 
\n A hurricane, depending on its strength, may also \ncause significant structural damage and damage to out-\nside equipment. Off site, there is the potential for severe \nregionwide damage to public infrastructure, utilities, and \ncommunications. If onsite operation must continue, then \nemergency supplies for personnel as well as a backup \ngenerator are needed. Further, the responsible site man-\nager may need to mobilize private post-storm security \nmeasures, such as armed guards. \n A major earthquake has the potential for the great-\nest damage and occurs without warning. A facility near \nthe epicenter may suffer catastrophic, even complete, \ndestruction, with significant and long-lasting damage to \ndatacenters and other IS facilities. Examples of inside \ndamage include the toppling of unbraced computer hard-\nware and site infrastructure equipment, including the col-\nlapse of raised floors. Personnel are at risk from broken \nglass and other flying debris. Off site, near the epicenter \nof a major earthquake, the damage equals and often \nexceeds that of a major hurricane. Structures that can \nwithstand a hurricane, such as roads and bridges, may be \ndamaged or destroyed, preventing the movement of fuel \nand other supplies. \n An ice storm or blizzard can cause some disruption \nof or damage to IS facilities if outside equipment and \nthe building are not designed to survive severe ice and \nsnow accumulation. Off site, there may be widespread \nBuildings with controlled access and environmental conditions\nElectricity, software, communications service, humans\nMachines to read, write, send, receive data\nStorage media and transmission media\nInformation Assets (1s and 0s)\nActivities dependent on information\nSupported by\ninformation\nassets\nSupport\ninformation\nassets\nObligations to provide services and products\nReputation for trustworthy services and quality products\n“The Bottom Line” and ultimate survival of an organization\n FIGURE 36.1 A context for information assets. \n" }, { "page_number": 664, "text": "Chapter | 36 Physical Security Essentials\n631\ndisruption of utilities and communications and roads \nmay be dangerous or impassable. \n The consequences of lightning strikes can range \nfrom no impact to disaster. The effects depend on the \nproximity of the strike and the efficacy of grounding and \nsurge protector measures in place. Off site, there can be \ndisruption of electrical power and there is the potential \nfor fires. \n Flooding is a concern in areas that are subject to \nflooding and for facilities that are in severe flood areas at \nlow elevation. Damage can be severe, with long-lasting \neffects and the need for a major cleanup operation. \n Environmental Threats \n This category encompasses conditions in the environ-\nment that can damage or interrupt the service of informa-\ntion systems and the data they house. Off site, there may \nbe severe regionwide damage to the public infrastructure \nand, in the case of severe hurricanes, it may take days, \nweeks, or even years to recover from the event. \n Inappropriate Temperature and Humidity \n Computers and related equipment are designed to operate \nwithin a certain temperature range. Most computer sys-\ntems should be kept between 10 and 32 degrees Celsius \n(50 and 90 degrees Fahrenheit). Outside this range, \nresources might continue to operate but produce undesira-\nble results. 
If the ambient temperature around a computer gets too high, the computer cannot adequately cool itself, and internal components can be damaged. If the temperature gets too cold, the system can undergo thermal shock when it is turned on, causing circuit boards or integrated circuits to crack. Table 36.2 indicates the point at which permanent damage from excessive heat begins.

Another temperature-related concern is the internal temperature of equipment, which can be significantly higher than room temperature. Computer-related equipment comes with its own temperature dissipation and cooling mechanisms, but these may rely on, or be affected by, external conditions. Such conditions include excessive ambient temperature, interruption of supply of power or heating, ventilation, and air-conditioning (HVAC) services, and vent blockage.

TABLE 36.1 Characteristics of Natural Disasters
(Disaster | Warning | Evacuation | Duration)
Tornado | Advance warning of potential; not site specific | Remain at site | Brief but intense
Hurricane | Significant advance warning | May require evacuation | Hours to a few days
Earthquake | No warning | May be unable to evacuate | Brief duration; threat of continued aftershocks
Ice storm/blizzard | Several days warning generally expected | May be unable to evacuate | May last several days
Lightning | Sensors may provide minutes of warning | May require evacuation | Brief but may recur
Flood | Several days warning generally expected | May be unable to evacuate | Site may be isolated for extended period
Source: ComputerSite Engineering, Inc.

TABLE 36.2 Temperature Thresholds for Damage to Computing Resources
(Component or Medium | Sustained Ambient Temperature at which Damage May Begin)
Flexible disks, magnetic tapes, etc. | 38°C (100°F)
Optical media | 49°C (120°F)
Hard disk media | 66°C (150°F)
Computer equipment | 79°C (175°F)
Thermoplastic insulation on wires carrying hazardous voltage | 125°C (257°F)
Paper products | 177°C (350°F)
Source: Data taken from National Fire Protection Association.

High humidity also poses a threat to electrical and electronic equipment. Long-term exposure to high humidity can result in corrosion. Condensation can threaten " }, { "page_number": 665, "text": "magnetic and optical storage media. Condensation can also cause a short circuit, which in turn can damage circuit boards. High humidity can also cause a galvanic effect that results in electroplating, in which metal from one connector slowly migrates to the mating connector, bonding the two together.

Very low humidity can also be a concern. Under prolonged conditions of low humidity, some materials may change shape and performance may be affected. Static electricity also becomes a concern. A person or object that becomes statically charged can damage electronic equipment by an electric discharge. Static electricity discharges as low as 10 volts can damage particularly sensitive electronic circuits, and discharges in the hundreds of volts can create significant damage to a variety of electronic circuits. Discharges from humans can reach into the thousands of volts, so this is a nontrivial threat.

In general, relative humidity should be maintained between 40% and 60% to avoid the threats from both low and high humidity.
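These thresholds lend themselves to straightforward automated monitoring. The following is a minimal sketch, not part of the original chapter, of how ambient readings might be checked against the damage thresholds of Table 36.2 and the recommended 40 to 60% relative humidity band; the threshold values come from the table, while the function names and message wording are illustrative assumptions.

```python
# Minimal sketch: check environmental readings against the thresholds
# discussed above. How readings are obtained and how alerts are delivered
# is left to whatever monitoring infrastructure is actually in place.

# Sustained ambient temperatures (degrees C) at which damage may begin,
# taken from Table 36.2.
DAMAGE_THRESHOLDS_C = {
    "flexible disks / magnetic tapes": 38,
    "optical media": 49,
    "hard disk media": 66,
    "computer equipment": 79,
    "thermoplastic wire insulation": 125,
    "paper products": 177,
}

HUMIDITY_RANGE = (40.0, 60.0)   # recommended relative humidity band (%)


def check_environment(temp_c: float, rel_humidity: float) -> list[str]:
    """Return a list of warning messages for the given readings."""
    warnings = []
    for item, threshold in DAMAGE_THRESHOLDS_C.items():
        if temp_c >= threshold:
            warnings.append(
                f"Ambient {temp_c:.1f} C meets or exceeds the damage "
                f"threshold ({threshold} C) for {item}"
            )
    low, high = HUMIDITY_RANGE
    if rel_humidity < low:
        warnings.append(
            f"Relative humidity {rel_humidity:.0f}% is below {low:.0f}% "
            "(static discharge risk)"
        )
    elif rel_humidity > high:
        warnings.append(
            f"Relative humidity {rel_humidity:.0f}% is above {high:.0f}% "
            "(corrosion and condensation risk)"
        )
    return warnings


if __name__ == "__main__":
    # Example reading: hot room, dry air.
    for w in check_environment(temp_c=41.0, rel_humidity=22.0):
        print("WARNING:", w)
```

In practice such checks would feed the environmental-control alarms discussed under prevention and mitigation later in the chapter.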
\n Fire and Smoke \n Perhaps the most frightening physical threat is fire. It is \na threat to human life and property. The threat is not only \nfrom the direct flame but also from heat, release of toxic \nfumes, water damage from fire suppression, and smoke \ndamage. Further, fire can disrupt utilities, especially \nelectricity. \n The temperature due to fire increases with time, and \nin a typical building, fire effects follow the curve shown \nin Figure 36.2 . The scale on the right side of the figure \nshows the temperature at which various items melt or are \ndamaged and therefore indicates how long after the fire \nis started such damage occurs. \n Smoke damage related to fires can also be extensive. \nSmoke is an abrasive. It collects on the heads of unsealed \nmagnetic disks, optical disks, and tape drives. Electrical \nfires can produce an acrid smoke that may damage other \nequipment and may be poisonous or carcinogenic. \n The most common fire threat is from fires that origi-\nnate within a facility, and, as discussed subsequently, \nthere are a number of preventive and mitigating measures \nthat can be taken. A more uncontrollable threat is faced \nfrom wildfires, which are a plausible concern in the west-\nern United States, portions of Australia (where the term \n bushfire is used), and a number of other countries. \n Water Damage \n Water and other stored liquids in proximity to computer \nequipment pose an obvious threat. The primary danger \nis an electrical short, which can happen if water bridges \nbetween a circuit board trace carrying voltage and a trace \ncarrying ground. Moving water, such as in plumbing, \nand weather-created water from rain, snow, and ice also \npose threats. A pipe may burst from a fault in the line \nor from freezing. Sprinkler systems, despite their secu-\nrity function, are a major threat to computer equipment \nand paper and electronic storage media. The system may \nbe set off by a faulty temperature sensor, or a burst pipe \nmay cause water to enter the computer room. For a large \ncomputer installation, an effort should be made to avoid \nany sources of water from one or two floors above. An \nexample of a hazard from this direction is an overflow-\ning toilet. \n Less common but more catastrophic is floodwater. \nMuch of the damage comes from the suspended material \nin the water. Floodwater leaves a muddy residue that is \nextraordinarily difficult to clean up. \n Chemical, Radiological, and Biological \nHazards \n Chemical, radiological, and biological hazards pose a \ngrowing threat, both from intentional attack and from \naccidental discharge. None of these hazardous agents \nshould be present in an information system environment, \nbut either accidental or intentional intrusion is possible. \nNearby discharges (e.g., from an overturned truck carry-\ning hazardous materials) can be introduced through the \nventilation system or open windows and, in the case of \nradiation, through perimeter walls. In addition, discharges \nin the vicinity can disrupt work by causing evacuations \nto be ordered. Flooding can also introduce biological or \nchemical contaminants. \n In general, the primary risk of these hazards is to \npersonnel. Radiation and chemical agents can also cause \ndamage to electronic equipment. \n Dust \n Dust is a prevalent concern that is often overlooked. Even \nfibers from fabric and paper are abrasive and mildly con-\nductive, although generally equipment is resistant to such \ncontaminants. 
Larger influxes of dust can result from a number of incidents, such as a controlled explosion of a nearby building and a windstorm carrying debris from a wildfire. A more likely source of influx comes from dust surges that originate within the building due to construction or maintenance work.

Equipment with moving parts, such as rotating storage media and computer fans, is the most vulnerable to " }, { "page_number": 666, "text": "damage from dust. Dust can also block ventilation and reduce radiational cooling.

Infestation

One of the less pleasant physical threats is infestation, which covers a broad range of living organisms, including mold, insects, and rodents. High-humidity conditions can lead to the growth of mold and mildew, which can be harmful to both personnel and equipment. Insects, particularly those that attack wood and paper, are also a common threat.

Technical Threats

This category encompasses threats related to electrical power and electromagnetic emission.

Electrical Power

Electrical power is essential to the operation of an information system. All the electrical and electronic devices in the system require power, and most require uninterrupted utility power.

Power utility problems can be broadly grouped into three categories: undervoltage, overvoltage, and noise.

[FIGURE 36.2 Fire effects: ambient temperature (°C) plotted against duration of fire (hrs.), annotated with the temperatures at which various media and materials are damaged (flexible disks and magnetic tapes, optical media, hard disk media, computer equipment, thermoplastic wire insulation, paper products) and at which common materials ignite or melt.]

" }, { "page_number": 667, "text": "An undervoltage occurs when the IS equipment receives less voltage than is required for normal operation. Undervoltage events range from temporary dips in the voltage supply to brownouts (prolonged undervoltage) and power outages. Most computers are designed to withstand prolonged voltage reductions of about 20% without shutting down and without operational error. Deeper dips or blackouts lasting more than a few milliseconds trigger a system shutdown. Generally, no damage is done, but service is interrupted.

Far more serious is an overvoltage. A surge of voltage can be caused by a utility company supply anomaly, by some internal (to the building) wiring fault, or by lightning. Damage is a function of intensity and duration and the effectiveness of any surge protectors between your equipment and the source of the surge. A sufficient surge can destroy silicon-based components, including processors and memories.

Power lines can also be a conduit for noise. In many cases, these spurious signals can endure through the filtering circuitry of the power supply and interfere with signals inside electronic devices, causing logical errors.

Electromagnetic Interference

Noise along a power supply line is only one source of electromagnetic interference (EMI). 
Motors, fans, heavy \nequipment, and even other computers generate electri-\ncal noise that can cause intermittent problems with the \ncomputer you are using. This noise can be transmitted \nthrough space as well as nearby power lines. \n Another source of EMI is high-intensity emissions \nfrom nearby commercial radio stations and microwave \nrelay antennas. Even low-intensity devices, such as cel-\nlular telephones, can interfere with sensitive electronic \nequipment. \n Human-Caused Physical Threats \n Human-caused threats are more difficult to deal with \nthan the environmental and technical threats discussed so \nfar. Human-caused threats are less predictable than other \ntypes of physical threats. Worse, human-caused threats \nare specifically designed to overcome prevention meas-\nures and/or seek the most vulnerable point of attack. We \ncan group such threats into the following categories: \n ● Unauthorized physical access . Those who are not \nemployees should not be in the building or building \ncomplex at all unless accompanied by an authori zed \nindividual. Not counting PCs and workstations, \ninformation system assets, such as servers, \nmainframe computers, network equipment, and \nstorage networks, are generally housed in restricted \nareas. Access to such areas is usually restricted to \nonly a certain number of employees. Unauthorized \nphysical access can lead to other threats, such as \ntheft, vandalism, or misuse. \n ● Theft. This threat includes theft of equipment \nand theft of data by copying. Eavesdropping and \nwiretapping also fall into this category. Theft can \nbe at the hands of an outsider who has gained \nunauthorized access or by an insider. \n ● Vandalism . This threat includes destruction of \nequipment and destruction of data. \n ● Misuse . This category includes improper use of \nresources by those who are authorized to use them, \nas well as use of resources by individuals not \nauthorized to use the resources at all. \n 3. PHYSICAL SECURITY PREVENTION \nAND MITIGATION MEASURES \n In this section, we look at a range of techniques for pre-\nventing, or in some cases simply deterring, physical \nattacks. We begin with a survey of some of the techniques \nfor dealing with environmental and technical threats and \nthen move on to human-caused threats. \n Environmental Threats \n We discuss these threats in the same order. \n Inappropriate Temperature and Humidity \n Dealing with this problem is primarily a matter of having \nenvironmental-control equipment of appropriate capac-\nity and appropriate sensors to warn of thresholds being \nexceeded. Beyond that, the principal requirement is the \nmaintenance of a power supply, discussed subsequently. \n Fire and Smoke \n Dealing with fire involves a combination of alarms, pre-\nventive measures, and fire mitigation. Martin provides \nthe following list of necessary measures 3 : \n ● Choice of site to minimize likelihood of disaster. \nFew disastrous fires originate in a well-protected \ncomputer room or IS facility. The IS area should be \n 3 J. Martin, Security, Accuracy, and Privacy in Computer Systems, \nPren tice Hall, 1973. \n" }, { "page_number": 668, "text": "Chapter | 36 Physical Security Essentials\n635\nchosen to minimize fire, water, and smoke hazards \nfrom adjoining areas. Common walls with other \nactivities should have at least a one-hour fire-\nprotection rating. \n ● Air conditioning and other ducts designed so as not \nto spread fire. 
There are standard guidelines and \nspecifications for such designs. \n ● Positioning of equipment to minimize damage. \n ● Good housekeeping. Records and flammables must \nnot be stored in the IS area. Tidy installation of IS \nequipment is crucial. \n ● Hand-operated fire extinguishers readily available, \nclearly marked, and regularly tested. \n ● Automatic fire extinguishers installed. Installation \nshould be such that the extinguishers are unlikely to \ncause damage to equipment or danger to personnel. \n ● Fire detectors. The detectors sound alarms inside \nthe IS room and with external authorities, and start \nautomatic fire extinguishers after a delay to permit \nhuman intervention. \n ● Equipment power-off switch. This switch must be \nclearly marked and unobstructed. All personnel \nmust be familiar with power-off procedures. \n ● Emergency procedures posted. \n ● Personnel safety. Safety must be considered in \ndesigning the building layout and emergency \nprocedures. \n ● Important records stored in fireproof cabinets or \nvaults. \n ● Records needed for file reconstruction stored off \nthe premises. \n ● Up-to-date duplicate of all programs stored off \nthe premises. \n ● Contingency plan for use of equipment elsewhere \nshould the computers be destroyed. \n ● Insurance company and local fire department should \ninspect the facility. \n To deal with the threat of smoke, the responsible man-\nager should install smoke detectors in every room that \ncontains computer equipment as well as under raised \nfloors and over suspended ceilings. Smoking should not \nbe permitted in computer rooms. \n For wildfires, the available countermeasures are limi-\nted. Fire-resistant building techniques are costly and dif-\nficult to justify. \n Water Damage \n Prevention and mitigation measures for water threats \nmust encompass the range of such threats. For plumbing \nleaks, the cost of relocating threatening lines is generally \ndifficult to justify. With knowledge of the exact layout \nof water supply lines, measures can be taken to locate \nequipment sensibly. The location of all shutoff valves \nshould be clearly visible or at least clearly documented, \nand responsible personnel should know the procedures \nto follow in case of emergency. \n To deal with both plumbing leaks and other sources of \nwater, sensors are vital. Water sensors should be located \non the floor of computer rooms as well as under raised \nfloors and should cut off power automatically in the event \nof a flood. \n Other Environmental Threats \n For chemical, biological, and radiological threats, specific \ntechnical approaches are available, including infrastruc-\nture design, sensor design and placement, mitigation pro-\ncedures, personnel training, and so forth. Standards and \ntechniques in these areas continue to evolve. \n As for dust hazards, the obvious prevention method \nis to limit dust through the use and proper filter mainte-\nnance and regular IS room maintenance. \n For infestations, regular pest control procedures may \nbe needed, starting with maintaining a clean environment. \n Technical Threats \n To deal with brief power interruptions, an uninterruptible \npower supply (UPS) should be employed for each piece \nof critical equipment. The UPS is a battery backup unit \nthat can maintain power to processors, monitors, and \nother equipment for a period of minutes. UPS units can \nalso function as surge protectors, power noise filters, and \nautomatic shutdown devices when the battery runs low. 
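As a rough illustration of the automatic-shutdown role just mentioned, the sketch below polls a UPS status source and initiates a graceful shutdown once the unit is on battery and its charge drops below a floor. This is an assumption-laden example rather than vendor code: get_ups_status() is a hypothetical stand-in for whatever query interface the actual UPS or monitoring package provides, and the shutdown command shown applies to a typical Linux host.

```python
# Minimal sketch of "automatic shutdown when the battery runs low."
# get_ups_status() is a hypothetical placeholder to be replaced with the
# real UPS query mechanism.
import subprocess
import time

LOW_BATTERY_PCT = 20       # initiate shutdown below this charge level
POLL_SECONDS = 30


def get_ups_status() -> dict:
    # Placeholder; the shape assumed here is
    # {"on_battery": bool, "charge_pct": float}.
    return {"on_battery": False, "charge_pct": 100.0}


def should_shut_down(status: dict) -> bool:
    return status["on_battery"] and status["charge_pct"] < LOW_BATTERY_PCT


def monitor() -> None:
    while True:
        if should_shut_down(get_ups_status()):
            # Platform specific; shown for a typical Linux host.
            subprocess.run(
                ["shutdown", "-h", "+1", "UPS battery low"], check=False
            )
            return
        time.sleep(POLL_SECONDS)
```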
\n For longer blackouts or brownouts, critical equipment \nshould be connected to an emergency power source, such \nas a generator. For reliable service, a range of issues need \nto be addressed by management, including product selec-\ntion, generator placement, personnel training, testing and \nmaintenance schedules, and so forth. \n To deal with electromagnetic interference, a combi-\nnation of filters and shielding can be used. The specific \ntechnical details will depend on the infrastructure design \nand the anticipated sources and nature of the interference. \n Human-Caused Physical Threats \n The general approach to human-caused physical threats \nis physical access control. Based on Michael, 4 we can \n 4 M. Michael, “ Physical security measures, ” In H. Bidgoli, (ed.), \n Handbook of Information Security , Wiley, 2006. \n" }, { "page_number": 669, "text": "PART | VI Physical Security\n636\nsuggest a spectrum of approaches that can be used to \nrestrict access to equipment. These methods can be used \nin combination. \n ● Physical contact with a resource is restricted by \nrestricting access to the building in which the \nresource is housed. This approach is intended to \ndeny access to outsiders but does not address the \nissue of unauthorized insiders or employees. \n ● Physical contact with a resource is restricted by \nputting the resource in a locked cabinet, safe, or \nroom. \n ● A machine may be accessed, but it is secured \n(perhaps permanently bolted) to an object that \nis difficult to move. This will deter theft but not \nvandalism, unauthorized access, or misuse. \n ● A security device controls the power switch. \n ● A movable resource is equipped with a tracking \ndevice so that a sensing portal can alert security \npersonnel or trigger an automated barrier to prevent \nthe object from being moved out of its proper \nsecurity area. \n ● A portable object is equipped with a tracking \ndevice so that its current position can be monitored \ncontinually. \n The first two of the preceding approaches isolate the \nequipment. Techniques that can be used for this type of \naccess control include controlled areas patrolled or guar-\nded by personnel, barriers that isolate each area, entry \npoints in the barrier (doors), and locks or screening mea-\nsures at each entry point. \n Physical access control should address not just com-\nputers and other IS equipment but also locations of wiring \nused to connect systems, the electrical power service, the \nHVAC equipment and distribution system, telephone and \ncommunications lines, backup media, and documents. \n In addition to physical and procedural barriers, an \neffective physical access control regime includes a variety \nof sensors and alarms to detect intruders and unauthorized \naccess or movement of equipment. Surveillance systems \nare frequently an integral part of building security, and \nspecial-purpose surveillance systems for the IS area are \ngenerally also warranted. Such systems should provide \nreal-time remote viewing as well as recording. \n 4. RECOVERY FROM PHYSICAL \nSECURITY BREACHES \n The most essential element of recovery from physical \nsecurity breaches is redundancy. Redundancy does not \nundo any breaches of confidentiality, such as the theft of \ndata or documents, but it does provide for recovery from \nloss of data. Ideally, all the important data in the system \nshould be available off site and updated as near to real \ntime as is warranted based on a cost/benefit tradeoff. 
With \nbroadband connections now almost universally available, \nbatch encrypted backups over private networks or the \nInternet are warranted and can be carried out on what-\never schedule is deemed appropriate by management. At \nthe extreme, a hotsite can be created off site that is ready \nto take over operation instantly and has available to it a \nnear-real-time copy of operational data. \n Recovery from physical damage to the equipment or \nthe site depends on the nature of the damage and, import-\nantly, the nature of the residue. Water, smoke, and fire \ndamage may leave behind hazardous materials that must \nbe meticulously removed from the site before normal \noperations and the normal equipment suite can be recon-\nstituted. In many cases, this requires bringing in disaster \nrecovery specialists from outside the organization to do \nthe cleanup. \n 5. THREAT ASSESSMENT, PLANNING, \nAND PLAN IMPLEMENTATION \n We have surveyed a number of threats to physical secu-\nrity and a number of approaches to prevention, mitiga-\ntion, and recovery. To implement a physical security \nprogram, an organization must conduct a threat assess-\nment to determine the amount of resources to devote to \nphysical security and the allocation of those resources \nagainst the various threats. This process also applies to \nlogical security. \n Threat Assessment \n In this subsection, we follow Platt 5 in outlining a typical \nsequence of steps that an organization should take: \n 1. Set up a steering committee . The threat assessment \nshould not be left only to a security officer or to IS \nmanagement. All those who have a stake in the \nsecurity of the IS assets, including all of the user \ncommunities, should be brought into the process. \n 2. Obtain information and assistance . Historical \ninformation concerning external threats, such as \nflood and fire is the best starting point. This \ninformation can often be obtained from \n 5 F. Platt, “ Physical threats to the information infrastructure, ” in \nS. Bosworth, and M. Kabay, (eds.), Computer Security Handbook, \nWiley, 2002. \n" }, { "page_number": 670, "text": "Chapter | 36 Physical Security Essentials\n637\ngovernment agencies and weather bureaus. In the \nUnited States, the Federal Emergency Management \nAgency (FEMA) can provide much useful informa-\ntion. FEMA has a number of publications available \nonline that provide specific guidance in a wide \nvariety of physical security areas (www.fema.gov/\nbusiness/index.shtm). The committee should also \nseek expert advice from vendors, suppliers, \nneighboring businesses, service and maintenance \npersonnel, consultants, and academics. \n 3. Identify all possible threats . List all possible threats, \nincluding those that are specific to IS operations as \nwell as those that are more general, covering the \nbuilding and the geographic area. \n 4. Determine the likelihood of each threat . This is \nclearly a difficult task. One approach is to use a \nscale of 1 (least likely) to 5 (most likely) so that \nthreats can be grouped to suggest where attention \nshould be directed. All the information from Step 2 \ncan be applied to this task. \n 5. Approximate the direct costs . For each threat, the \ncommittee must estimate not only the threat’s likeli-\nhood but also its severity in terms of consequences. \nAgain a relative scale of 1 (low) to 5 (high) in terms \nof costs and losses is a reasonable approach. 
For both Steps 4 and 5, an attempt to use a finer-grained scale, or to assign specific probabilities and specific costs, is likely to produce the impression of greater precision and knowledge about future threats than is possible.

6. Consider cascading costs. Some threats can trigger consequential threats that add still more impact costs. For example, a fire can cause direct flame, heat, and smoke damage as well as disrupt utilities and result in water damage.

7. Prioritize the threats. The goal here is to determine the relative importance of the threats as a guide to focusing resources on prevention. A simple formula yields a prioritized list:

Importance = Likelihood × [Direct Cost + Secondary Cost]

where the scale values (1 through 5) are used in the formula. For example, a threat rated 4 for likelihood, with a direct cost of 3 and a secondary cost of 2, receives an importance score of 4 × (3 + 2) = 20.

8. Complete the threat assessment report. The committee can now prepare a report that includes the prioritized list, with commentary on how the results were achieved. This report serves as the reference source for the planning process that follows.

Planning and Implementation

Once a threat assessment has been done, the steering committee, or another committee, can develop a plan for threat prevention, mitigation, and recovery. The following is a typical sequence of steps an organization could take:

1. Assess internal and external resources. These include resources for prevention as well as response. A reasonable approach is again to use a relative scale from 1 (strong ability to prevent and respond) to 5 (weak ability to prevent and respond). This scale can be combined with the threat priority score to focus resource planning.

2. Identify challenges and prioritize activities. Determine specific goals and milestones. Make a list of tasks to be performed, by whom and when. Determine how you will address the problem areas and resource shortfalls that were identified in the vulnerability analysis.

3. Develop a plan. The plan should include prevention measures and equipment needed and emergency response procedures. The plan should include support documents, such as emergency call lists, building and site maps, and resource lists.

4. Implement the plan. Implementation includes acquiring new equipment, assigning responsibilities, conducting training, monitoring plan implementation, and updating the plan regularly.

6. EXAMPLE: A CORPORATE PHYSICAL SECURITY POLICY

To give the reader a feel for how organizations deal with physical security, we provide a real-world example of a physical security policy. The company is a European Union (EU)-based engineering consulting firm that specializes in the provision of planning, design, and management services for infrastructure development worldwide. With interests in transportation, water, maritime, and property, the company is undertaking commissions in over 70 countries from a network of more than 70 offices.

Figure 36.3 is extracted from the company's security standards document. For our purposes, we have changed the name of the company to Company wherever it appears in the document. The company's physical security policy relies heavily on ISO 17799 (Code of Practice for Information Security Management).
" }, { "page_number": 671, "text": "5. Physical and Environmental Security
5.1. Secure Areas

5.1.1. 
Physical Security Perimeter – Company shall use security perimeters to protect all non-public areas, \n \n \n \ncommensurate with the value of the assets therein. Business critical information processing facilities located in \n \n \nunattended buildings shall also be alarmed to a permanently manned remote alarm monitoring station.\n \n5.1.2. Physical Entry Controls – Secure areas shall be segregated and protected by appropriate entry controls to \n \n \nensure that only authorised personnel are allowed access. Similar controls are also required where the \n \n \n \nbuilding is shared with, or accessed by, non-Company staff and organisations not acting on behalf of \n \n \n \nCompany.\n \n5.1.3. Securing Offices, Rooms and Facilities – Secure areas shall be created in order to protect office, rooms and \n \n \nfacilities with special security requirements.\n \n5.1.4. Working in Secure Areas – Additional controls and guidelines for working in secure areas shall be used to \n \n \n \nenhance the security provided by the physical control protecting the secure areas.\n \n \n \nEmployees of Company should be aware that additional controls and guidelines for working in secure \n \n \n \n \nareas to enhance the security provided by the physical control protecting the secure areas might be in \n \n \n \n \nforce. For further clarification they should contact their Line Manager. \n \n5.1.5. Isolated Access Points – Isolated access points, additional to building main entrances (e.g. Delivery and \n \n \n \nLoading areas) shall be controlled and, if possible, isolated from secure areas to avoid unauthorised access.\n \n5.1.6. Sign Posting Of Computer Installations – Business critical computer installations sited within a building must \n \n \nnot be identified by the use of descriptive sign posts or other displays. Where such sign posts or other displays \n \n are used they must be worded in such a way so as not to highlight the business critical nature of the activity \n \n \n \ntaking place within the building.\n5.2. Equipment Security\n \n5.2.1. Equipment Sitting and Protection – Equipment shall be sited or protected to reduce the risk from \n \n \n \nenvironmental threats and hazards, and opportunity for unauthorised access.\n \n5.2.2. Power Supply – The equipment shall be protected from power failure and other electrical anomalies.\n \n5.2.3. Cabling Security – Power and telecommunication cabling carrying data or supporting information services \n \n \n \nshall be protected from interception or damage commensurate with the business criticality of the operations \n \n \n \nthey serve.\n \n5.2.4. Equipment Maintenance – Equipment shall be maintained in accordance with manufacturer’s instruction \n \n \n \nand/or documented procedures to ensure its continued availability and integrity.\n \n5.2.5. Security of Equipment off-premises – Security procedures and controls shall be used to secure equipment \n \n \nused outside any Company’s premises.\n \n \n \nEmployees are to note that there should be security procedures and controls to secure equipment used \n \n \n \noutside any Company premises. Advice on these procedures can be sought from the Group Security \n \n \n \n \nManager.\n \n5.2.6. Secure Disposal or Re-use of Equipment – Information shall be erased from equipment prior to disposal or \n \n \nreuse.\n \n \n \nFor further guidance contact the Group Security Manager.\n \n5.2.7. 
Security of the Access Network – Company shall implement access control measures, determined by a risk \n \n \nassessment, to ensure that only authorised people have access to the Access Network (including: cabinets, \n \n \n \ncabling, nodes etc.).\n \n5.2.8. Security of PCs – Every Company owned PC must have an owner who is responsible for its general \n \n \n \nmanagement and control. Users of PCs are personally responsible for the physical and logical security of any \n \n \nPC they use. Users of Company PCs are personally responsible for the physical and logical security of any PC \n \n they use, as defined within the Staff Handbook.\n \n5.2.9. Removal of “Captured Data” – Where any device (software or hardware based) has been introduced to the \n \n \nnetwork that captures data for analytical purposes, all data must be wiped off of this device prior to removal \n \n \n \nfrom the Company Site. The removal of this data from site for analysis can only be approved by the MIS \n \n \n \nTechnology Manager.\n FIGURE 36.3 The Company’s physical security policy. \n" }, { "page_number": 672, "text": "Chapter | 36 Physical Security Essentials\n639\n 7. INTEGRATION OF PHYSICAL AND \nLOGICAL SECURITY \n Physical security involves numerous detection devices, \nsuch as sensors and alarms, and numerous prevention \ndevices and measures, such as locks and physical barri-\ners. It should be clear that there is much scope for auto-\nmation and for the integration of various computerized \nand electronic devices. Clearly, physical security can be \nmade more effective if there is a central destination for \nall alerts and alarms and if there is central control of all \nautomated access control mechanisms, such as smart card \nentry sites. \n From the point of view of both effectiveness and cost, \nthere is increasing interest not only in integrating auto-\nmated physical security functions but in integrating, to \nthe extent possible, automated physical security and logi-\ncal security functions. The most promising area is that of \naccess control. Examples of ways to integrate physical \nand logical access control include the following: \n ● Use of a single ID card for physical and logical \naccess. This can be a simple magnetic-strip card or \na smart card. \n ● Single-step user/card enrollment and termination \nacross all identity and access control databases. \n5.3. General Controls\n \n5.3.1. Security Controls – Security Settings are to be utilised and configurations must be controlled\n \n \n \nNo security settings or software on Company systems are to be changed without authorisation from MIS \n \n \n \nSupport.\n \n5.3.2. 
Clear Screen Policy – Company shall have and implement clear-screen policy in order to reduce the risks of \n \n \nunauthorised access, loss of, and damage to information.\n \n \n \nThis will be implemented when all Users of the Company system have Windows XP operating system.\n \n \n \nWhen the User has the Windows XP system they are to carry out the following:\n \n \n \n \n• Select the Settings tab within the START area on the desktop screen.\n \n \n \n \n• Select Control Panel.\n \n \n \n \n• Select the icon called DISPLAY.\n \n \n \n \n• Select the Screensaver Tab.\n \n \n \n \n• Set a Screen saver.\n \n \n \n \n• Set the time for 15 Mins.\n \n \n \n \n• Tick the Password Protect box; remember this is the same password that\n \n \n \n \n you utilise to log on to the system.\n \n \n \nStaff are to lock their screens using the Ctrl-Alt-Del when they leave their desk.\n \n5.3.3. Clear Desk Policy – Staff shall ensure that they operate a Clear Desk Policy.\n \n \n \n \n \nEach member of staff is asked to take personal and active responsibility for maintaining a \"clear desk\" \n \n \n \n \npolicy whereby files and papers are filed or otherwise cleared away before leaving the office at the end of \n \n \n \neach day.\n \n5.3.4. Removal of Property – Equipment, information or software belonging to the organisation shall not be removed \n \n \nwithout authorisation.\n \n \n \nEquipment, information or software belonging to Company shall not be removed without authorisation \n \n \n \n \nfrom the Project Manager or Line Manager and the MIS Support.\n \n5.3.5. People Identification – All Company staff must have visible the appropriate identification whenever they are in \n \n \nCompany premises.\n \n5.3.6. Visitors – All Company premises will have a process for dealing with visitors. All Visitors must be sponsored \n \n \nand wear the appropriate identification whenever they are in Company premises.\n \n5.3.7. Legal Right of Entry – Entry must be permitted to official bodies when entry is demanded on production of a \n \n \ncourt order or when the person has other legal rights. Advice must be sought from management or the Group \n \n \nSecurity Manager as a matter of urgency.\nFIGURE 36.3 Continued.\n" }, { "page_number": 673, "text": "PART | VI Physical Security\n640\n ● A central ID-management system instead of multiple \ndisparate user directories and databases. \n ● Unified event monitoring and correlation. \n As an example of the utility of this integration, sup-\npose that an alert indicates that Bob has logged on to the \ncompany’s wireless network (an event generated by the \nlogical access control system) but did not enter the build-\ning (an event generated from the physical access control \nsystem). Combined, these two events suggest that some-\none is hijacking Bob’s wireless account. \n For the integration of physical and logical access con-\ntrol to be practical, a wide range of vendors must conform \nto standards that cover smart card protocols, authentica-\ntion and access control formats and protocols, database \nentries, message formats, and so on. An important step in \nthis direction is FIPS 201-1 ( Personal Identity Verification \n(PIV) of Federal Employees and Contractors ), issued in \n2006. The standard defines a reliable, governmentwide \nPIV system for use in applications such as access to fed-\nerally controlled facilities and information systems. 
The \nstandard specifies a PIV system within which common \nidentification credentials can be created and later used to \nverify a claimed identity. The standard also identifies fed-\neral governmentwide requirements for security levels that \nare dependent on risks to the facility or information being \nprotected. \n Figure 36.4 illustrates the major components of FIPS \n201-1 compliant systems. The PIV front end defines the \nphysical interface to a user who is requesting access to a \nfacility, which could be either physical access to a pro-\ntected physical area or logical access to an information \nsystem. The PIV front-end subsystem supports up to \nthree-factor authentication; the number of factors used \ndepends on the level of security required. The front end \nmakes use of a smart card, known as a PIV card , which \nis a dual-interface contact and contactless card. The card \nholds a cardholder photograph, X.509 certificates, cryp-\ntographic keys, biometric data, and the cardholder unique \nidentifier (CHUID). Certain cardholder information may \nbe read-protected and require a personal identification \nnumber (PIN) for read access by the card reader. The \nbiometric reader, in the current version of the standard, is \na fingerprint reader. \n The standard defines three assurance levels for veri-\nfication of the card and the encoded data stored on the \ncard, which in turn leads to verifying the authenticity of \nthe person holding the credential. A level of some con-\nfidence corresponds to use of the card reader and PIN. \nA level of high confidence adds a biometric comparison \nof a fingerprint captured and encoded on the card during \nthe card-issuing process and a fingerprint scanned at \nthe physical access point. A very high confidence level \nrequires that the process just described is completed at a \ncontrol point attended by an official observer. \n The other major component of the PIV system is the \n PIV card issuance and management subsystem . This sub-\nsystem includes the components responsible for identity \nproofing and registration, card and key issuance and man-\nagement, and the various repositories and services (e.g., \npublic key infrastructure [PKI] directory, certificate status \nservers) required as part of the verification infrastructure. \n The PIV system interacts with an access control sub-\nsystem , which includes components responsible for deter-\nmining a particular PIV cardholder’s access to a physical \nor logical resource. FIPS 201-1 standardizes data formats \nand protocols for interaction between the PIV system and \nthe access control system. \n Unlike the typical card number/facility code encoded \non most access control cards, the FIPS 201 CHUID \ntakes authentication to a new level, through the use of \nan expiration date (a required CHUID data field) and an \noptional CHUID digital signature. A digital signature \ncan be checked to ensure that the CHUID recorded on \nthe card was digitally signed by a trusted source and that \nthe CHUID data have not been altered since the card was \nsigned. The CHUID expiration date can be checked to \nverify that the card has not expired. This is independent \nof whatever expiration date is associated with cardholder \nprivileges. Reading and verifying the CHUID alone \nprovides only some assurance of identity because it \nauthenticates the card data, not the cardholder. The PIN \nand biometric factors provide identity verification of the \nindividual. 
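The CHUID checks just described come down to two tests: verify the digital signature over the CHUID against a trusted signer, then verify that the card has not expired. The following minimal Python sketch shows only that control flow; the Chuid container, its field names, and the verify_sig callback are hypothetical stand-ins (a real CHUID is a TLV-encoded object carrying a CMS signature as specified by FIPS 201, and signature checking would be done with a PKI library against the issuer's certificate).

from dataclasses import dataclass
from datetime import date
from typing import Callable

@dataclass
class Chuid:
    # Hypothetical, simplified container; a real CHUID is a TLV structure
    # defined by FIPS 201 and carries a CMS signature from the issuer.
    payload: bytes        # the signed portion of the CHUID data
    expiration: date      # required CHUID expiration date field
    signature: bytes      # optional CHUID digital signature

def verify_chuid(chuid: Chuid,
                 verify_sig: Callable[[bytes, bytes], bool],
                 today: date) -> bool:
    """Accept the card data only if it is authentic and unexpired."""
    # 1. The signature check shows that the CHUID was signed by a trusted
    #    source and has not been altered since the card was signed.
    if not verify_sig(chuid.payload, chuid.signature):
        return False
    # 2. The expiration check shows that the card itself is still valid;
    #    this is independent of any expiration on cardholder privileges.
    if chuid.expiration < today:
        return False
    # A passing check authenticates the card data, not the cardholder.
    return True

The PIN and biometric factors described above are layered on top of a check like this one to verify the cardholder rather than just the card.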
Figure 36.5, adapted from Forristal,6 illustrates the convergence of physical and logical access control using FIPS 201-1. The core of the system includes the PIV and access control system as well as a certificate authority for signing CHUIDs. The other elements of the figure provide examples of the use of the system core for integrating physical and logical access control.

If the integration of physical and logical access control extends beyond a unified front end to an integration of system elements, a number of benefits accrue, including the following:7

● Employees gain a single, unified access control authentication device; this cuts down on misplaced

6 J. Forristal, “Physical/logical convergence,” Network Computing, November 23, 2006.
7 J. Forristal, “Physical/logical convergence,” Network Computing, November 23, 2006.
" }, { "page_number": 674, "text": "Chapter | 36 Physical Security Essentials 641
FIGURE 36.4 FIPS 201 PIV system model. (I&A = Identification and Authentication.)
" }, { "page_number": 675, "text": "PART | VI Physical Security 642
FIGURE 36.5 Convergence example (based on [FORR06]).
" }, { "page_number": 676, "text": "Chapter | 36 Physical Security Essentials 643
tokens, reduces training and overhead, and allows seamless access.
● A single logical location for employee ID management reduces duplicate data entry operations and allows for immediate, real-time authorization revocation across all enterprise resources.
● Auditing and forensic groups have a central repository for access control investigations.
● Hardware unification can reduce the number of vendor purchase-and-support contracts.
● Certificate-based access control systems can leverage user ID certificates for other security applications, such as document e-signing and data encryption.
" }, { "page_number": 678, "text": "645
Computer and Information Security Handbook
Copyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.

Chapter 37
Biometrics
Luther Martin
Voltage Security

Biometrics is the analysis of biological observations and phenomena.
People routinely use biometrics to recognize \nother people, commonly using the shape of a face or the \nsound of a voice to do so. Biometrics can also be used \nto create automated ways of recognizing a person based \non her physiological or behavioral characteristics. Using \nbiometrics as the basis of technologies that can be used \nto recognize people is not a new idea; there is evidence \nthat fingerprints were used to sign official contracts in \nChina as early as AD 700 and may have been used by \nancient Babylonian scribes to sign cuneiform tablets \nas early as 2000 BC. 1 In both of these cases, a finger-\nprint was pressed in clay to form a distinctive mark that \ncould characterize a particular person. It is likely that the \nsophistication of the techniques used to analyze biomet-\nric data has increased over the past 4000 years, but the \nprinciples have remained essentially the same. \n Using biometrics in security applications is certainly \nappealing. Determining a person’s identity through the \npresence of a physical object such as a key or access \ncard has the problem that the physical token can be lost \nor stolen. Shared secrets such as passwords can be for-\ngotten. Determining a person’s identity using biometrics \nseems an attractive alternative. It allows an identity to be \ndetermined directly from characteristics of the person. \nIt is generally impossible for people to lose or forget \ntheir biometric data, so many of the problems that other \nmeans of verifying an identity are essentially eliminated \nif biometrics can be used in this role. \n Not all biometric data is suitable for use in security \napplications, however. To be useful, such biometric data \nshould be as unique as possible (uniqueness), should \noccur in as many people as possible (universality), \nshould stay relatively constant over time (permanence), \nand should be able to be measured easily (measurabil-\nity) and without causing undue inconvenience or distress \nto a user (acceptability). Examples of technologies that \nseem to meet these criteria to varying degrees are those \nthat recognize a person based on his DNA, geometry of \nhis face, fingerprints, hand geometry, iris pattern, retina \npattern, handwriting, or voice. Many others are also pos-\nsible. Not all biometrics are equally suited for use in \nsecurity applications. Table 37.1 compares the properties \nof selected biometric technologies, rating each property \nas high, medium, or low. This table shows that there is \nno “ best ” biometric for security applications. Though \nthis is true, each biometric has a set of uses for which \nits particular properties make it more attractive than the \nalternatives. \n Biometrics systems can be used as a means of authen-\nticating a user. When they are used in this way, a user \npresents his biometric data along with his identity, and \nthe biometric system decides whether or not the biomet-\nric data presented is correct for that identity. Biometrics \nused as a method of authentication can be very useful, \nbut authentication systems based on biometrics also have \nvery different properties from other authentication tech-\nnologies, and these differences should be understood \nbefore biometrics are used as part of an information secu-\nrity system. \n Systems based on biometrics can also be used as a \nmeans of identification. 
When they are used in this way, \ncaptured biometric data is compared to entries in a data-\nbase, and the biometric system determines whether or not \nthe biometric data presented matches any of these exist-\ning entries. When biometrics are used for identification, \nthey have a property that many other identification sys-\ntems do not have. In particular, biometrics do not always \nrequire the active participation of a subject. While a user \nalways needs to enter her password when the password is \nused to authenticate her, it is possible to capture biometric \ndata without the user’s active involvement, perhaps even \n 1 R. Heindl, System und Praxis der Daktyloskopie und der sonstigen \ntechnischen Methoden der Kriminalopolizei , De Gruyter, 1922. \n" }, { "page_number": 679, "text": "PART | VI Physical Security\n646\nwithout her knowledge. This lets data be used in ways \nthat other systems cannot. It is possible to automatically \ncapture images of customers in a bank, for example, and \nto use the images to help identify people who are known \nto commit check fraud. Or it is possible to automatically \ncapture images of airline passengers in an airport and use \nthe images to help identify suspicious travelers. \n The use of biometrics for identification also has the \npotential to pose serious privacy issues. The interests of \ngovernments and individual citizens are often at odds. \nLaw enforcement agencies might want to be able to track \nthe movements of certain people, and the automated use \nof biometrics for identification can certainly support this \ngoal. On the other hand, it is unlikely that most people \nwould approve of law enforcement having a database that \ncontains detailed information about their travels. Similarly, \ntax authorities might want to track all business dealings to \nensure that they collect all the revenue they are due, but it \nseems unlikely that most people would approve of govern-\nment agencies having a database that tracks all merchants \nthey have had dealings with, even if no purchases were \nmade. Using some biometrics may also inherently provide \naccess to much more information that is needed to just \nidentify a person. DNA, for example, can used to identify \npeople, but it can also be used to determine information \nabout genetic conditions that are irrelevant to the identifi-\ncation process. But if a user needs to provide a DNA sam-\nple as a means of identification, the same DNA sample \ncould be used to determine genetic information that the \nuser might rather have kept private. \n Designing biometric systems is actually a very dif-\nficult problem. This problem has been made to look eas-\nier than it actually is by the way that the technology has \nbeen portrayed in movies and on television. Biometric \nsystems are typically depicted as being easy to use and \nsecure, whereas encryption that would actually take bil-\nlions of years of supercomputer time to defeat is often \ndepicted as being easily bypassed with minimal effort. \nThis portrayal of biometric systems may have increased \nexpectations well past what current technologies can \nactually deliver, and it is important to understand the \nlimitations of existing biometric technologies and to \nhave realistic expectations of the security that such sys-\ntems can provide in the real world. 
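One way to make the "no single best biometric" point concrete is to score each technology against the properties a particular application actually cares about. The short Python sketch below does this using the ratings from Table 37.1; the numeric mapping and the example weights are invented purely for illustration and are not drawn from any standard.

# Ratings transcribed from Table 37.1, with High/Medium/Low mapped to 3/2/1.
RATING = {"High": 3, "Medium": 2, "Low": 1}

PROPERTIES = ("uniqueness", "universality", "permanence",
              "measurability", "acceptability")

BIOMETRICS = {
    "DNA":                 ("High", "High", "High", "Low", "Low"),
    "Face geometry":       ("Low", "High", "Medium", "High", "High"),
    "Fingerprint":         ("High", "Medium", "High", "Medium", "Medium"),
    "Hand geometry":       ("Medium", "Medium", "Medium", "High", "Medium"),
    "Iris":                ("High", "High", "High", "Medium", "Low"),
    "Retina":              ("High", "High", "Medium", "Low", "Low"),
    "Signature dynamics":  ("Low", "Medium", "Low", "High", "High"),
    "Voice":               ("Low", "Medium", "Low", "Medium", "High"),
}

def score(name: str, weights: dict) -> int:
    # Weighted sum of the property ratings; higher means a better fit.
    ratings = dict(zip(PROPERTIES, BIOMETRICS[name]))
    return sum(weights.get(p, 0) * RATING[ratings[p]] for p in PROPERTIES)

# Hypothetical weighting for a self-service consumer application, where
# acceptability and ease of measurement matter more than raw uniqueness.
weights = {"uniqueness": 1, "universality": 2, "permanence": 1,
           "measurability": 3, "acceptability": 3}
for name in sorted(BIOMETRICS, key=lambda n: score(n, weights), reverse=True):
    print(f"{name:20s} {score(name, weights)}")

Changing the weights to favor uniqueness and permanence, as a high-security facility might, produces a very different ranking, which is exactly the point of the comparison table.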
\n Designing biometric systems has been dubbed a “ grand \nchallenge ” by researchers, 2 indicating that a significant \nlevel of research will be required before it will be possi-\nble for real systems to approach the performance that is \nexpected of the technology, but one that also has the pos-\nsibility for broad scientific and economic impact when \ntechnology finally reaches that level. So, although biomet-\nric systems are useful today, we should expect to see them \nbecome even more useful in the future and for the tech-\nnology to eventually become fairly commonly used. \n 1. RELEVANT STANDARDS \n The American National Standard (ANS) X9.84, “ Biometric \nInformation Management and Security for the Financial \nServices Industry, ” is one of the leading standards that \nprovide an overview of biometrics and their use in infor-\nmation security systems. It is a good high-level discus-\nsion of biometric systems, and the description of the \ntechnology in this chapter roughly follows the framework \ndefined by this standard. This standard is particularly \n 2 A. Jain, et al., “ Biometrics: A grand challenge, ” Proceedings of the \n17th International Conference on Pattern Recognition , Cambridge, \nUK, August 2004, pp. 935 – 942. \n TABLE 37.1 Overview of Selected Biometric Technologies \n Biometric \n Uniqueness \n Universality \n Permanence \n Measurability \n Acceptability \n DNA \n High \n High \n High \n Low \n Low \n Face geometry \n Low \n High \n Medium \n High \n High \n Fingerprint \n High \n Medium \n High \n Medium \n Medium \n Hand geometry \n Medium \n Medium \n Medium \n High \n Medium \n Iris \n High \n High \n High \n Medium \n Low \n Retina \n High \n High \n Medium \n Low \n Low \n Signature dynamics \n Low \n Medium \n Low \n High \n High \n Voice \n Low \n Medium \n Low \n Medium \n High \n" }, { "page_number": 680, "text": "Chapter | 37 Biometrics\n647\nuseful to system architects and others concerned with a \nhigh-level view of security systems. On the other hand, \nthis standard does not provide many details of how to \nimplement such systems. \n There are also several international (ISO/IEC) stand-\nards that cover the details of biometric systems with \nmore detail than ANS X9.84 does. These are listed in \n Table 37.2 . These standards provide a good basis for \nimplementing biometric systems and may be useful to \nboth engineers and others who need to build a biometric \nsystem, and others who need the additional level of \ndetail that ANS X9.84 does not provide. Many other \nISO/IEC standards for biometric systems are currently \nunder development that address other aspects of such \nsystems, and in the next few years it is likely that the \nnumber of these standards that have been finalized will \nat least double from the number that are listed here. The \nJTC 1/SC 37 technical committee of the ISO is responsi-\nble for the development of these standards. \n 2. BIOMETRIC SYSTEM ARCHITECTURE \n All biometric systems have a number of common sub-\nsystems. These are the following: \n ● A data capture subsystem \n ● A signal processing subsystem \n ● A matching subsystem \n ● A data storage subsystem \n ● A decision subsystem \n An additional subsystem, the adaptation subsys-\ntem, may be present in some biometric systems but not \nothers. 
\n TABLE 37.2 Current ISO/IEC Standards for Biometric Systems \n Standard \n Title \n ISO/IEC 19784-1: 2006 \n Information technology – Biometric Application Programming Interface – Part 1: BioAPI Specification \n ISO/IEC 19784-2: 2007 \n Information technology – Biometric Application Programming Interface – Part 2: Biometric Archive \nFunction Provider Interface \n ISO/IEC 19785-1: 2006 \n Information technology – Common Biometric Exchange Formats Framework (CBEFF) – Part 1: Data \nElement Specification \n ISO/IEC 19785-2: 2006 \n Information technology – Common Biometric Exchange Formats Framework (CBEFF) – Part 2: \nProcedures for the Operation of the Biometric Registration Authority \n ISO/IEC 19794-1: 2006 \n Information technology – Biometric data interchange format – Part 1: Framework \n ISO/IEC 19794-2: 2005 \n Information technology – Biometric data interchange format – Part 2: Finger minutiae data \n ISO/IEC 19794-3: 2006 \n Information technology – Biometric data interchange format – Part 3: Finger pattern spectral data \n ISO/IEC 19794-4: 2005 \n Information technology – Biometric data interchange format – Part 4: Finger image data \n ISO/IEC 19794-5: 2005 \n Information technology – Biometric data interchange format – Part 5: Face image data \n ISO/IEC 19794-6: 2005 \n Information technology – Biometric data interchange format – Part 6: Iris image data \n ISO/IEC 19794-7: 2006 \n Information technology – Biometric data interchange format – Part 7: Signature/sign time series data \n ISO/IEC 19794-8: 2006 \n Information technology – Biometric data interchange format – Part 8: Finger pattern skeletal data \n ISO/IEC 19794-9: 2007 \n Information technology – Biometric data interchange format – Part 9: Vascular image data \n ISO/IEC 19795-1: 2006 \n Information technology – Biometric performance testing and reporting – Part 1: Principles and \nframework \n ISO/IEC 19795-2: 2007 \n Information technology – Biometric performance testing and reporting – Part 2: testing methodologies \nfor technology and scenario evaluation \n ISO/IEC 24709.1: 2007 \n BioAPI Conformance Testing – Part 1: Methods and Procedures \n ISO/IEC 24709.2: 2007 \n BioAPI Conformance Testing – Part 2: Test Assertions for Biometric Service Providers \n" }, { "page_number": 681, "text": "PART | VI Physical Security\n648\n Data Capture \n A data capture subsystem collects captured biometric \ndata from a user. To do this, it performs a measurement \nof some sort and creates machine-readable data from it. \nThis could be an image of a fingerprint, a signal from a \nmicrophone, or readings from a special pen that takes \nmeasurements while it is being used. In each case, the \ncaptured biometric data usually needs to be processed in \nsome way before it can be used in a decision algorithm. It \nis extremely rare for a biometric system to make a deci-\nsion using an image of a fingerprint, for example. Instead, \nfeatures that make fingerprints different from other fin-\ngerprints are extracted from such an image in the signal \nprocessing subsystem, and these features are then used in \nthe matching subsystem. The symbol that is used to indi-\ncate a data capture subsystem is shown in Figure 37.1 . \n The performance of a data capture subsystem is \ngreatly affected by the characteristics of the sensor that it \nuses. A signal processing subsystem may work very well \nwith one type of sensor, but much less well with another \ntype. 
Even if identical sensors are used in each data cap-\nture subsystem, the calibration of the sensors may need to \nbe consistent to ensure the collection of data that works \nwell in other subsystems. \n Environmental conditions can also significantly \naffect the operation of a data capture subsystem. Dirty \nsensors can result in images of fingerprints that are dis-\ntorted or incomplete. Background noise can result in the \ncollection of a data that makes it difficult for the signal \nprocessing subsystem to identify the features of a voice \nsignal. Lighting can also affect any biometric data that is \ncollected as an image so that an image collected against \na gray background might not work as well as an image \ncollected against a white background. \n Because environmental conditions affect the qual-\nity and usefulness of captured biometric data, they also \naffect the performance of all the subsystems that rely on \nit. This means that it is essential to carry out all testing \nof biometric systems under conditions that duplicate the \nconditions under which the system will normally operate. \nJust because a biometric system performs well in a testing \nlaboratory when operated by well-trained users does not \nmean that it will perform well in real-world conditions. \n Because the data capture subsystem is typically the \nonly one with which users directly interact, it is also the \none that may require training of users to ensure that it \nprovides useful data to the other subsystems. \n Signal Processing \n A signal processing subsystem takes the captured bio-\nmetric data from a data capture subsystem and trans-\nforms the data into a form suitable for use in the \nmatching subsystem. This transformed data is called a \n reference , or a template if it is stored in a data storage \nsubsystem. A template is a type of reference, and it rep-\nresents the average value that we expect to see for a par-\nticular user. \n A signal processing subsystem may also analyze \nthe quality of captured biometric data and reject data \nthat is not of high enough quality. An image of a fin-\ngerprint that is not oriented correctly might be rejected, \nor a sample of speech that was collected with too much \nbackground noise might be rejected. The symbol that is \nused to indicate a signal processing subsystem is shown \nin Figure 37.2 . \n If the captured biometric data is not rejected, the \nsignal processing subsystem then transforms the cap-\ntured biometric data into a reference. In the case of fin-\ngerprints, for example, the signal processing subsystem \nmay extract features such as the locations of branches \nand endpoints of the ridges that comprise a fingerprint. \nA biometric system that uses the speech of users to \ncharacterize them might convert the speech signal into \nfrequency components using a Fourier transform and \nthen look for patterns in the frequency components that \nuniquely characterize a particular speaker. A biometric \nthat uses an image of a person’s face might first look \nfor large features such as the eyes, nose, and mouth and \nthen look for distinctive features such as eyebrows or \nparts of the nose relative to the large ones, to uniquely \nidentify a particular user. In any case, the output of the \nsignal processing subsystem is the transformed data \nthat comprises a reference. 
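As a concrete illustration of the kind of transformation a signal processing subsystem performs, the sketch below turns a captured audio signal into a small, fixed-length reference using a Fourier transform, as described above for voice. It assumes NumPy is available; the specific features (normalized band energies) are chosen only to keep the example short, and a real voice system would use much richer features and quality checks.

import numpy as np

def extract_reference(samples: np.ndarray, bands: int = 16) -> np.ndarray:
    """Convert captured audio samples into a normalized feature vector."""
    spectrum = np.abs(np.fft.rfft(samples))       # frequency components via FFT
    chunks = np.array_split(spectrum, bands)      # coarse frequency bands
    energy = np.array([float(c.sum()) for c in chunks])
    total = energy.sum()
    if total == 0.0:
        # Quality check: an empty capture is rejected rather than matched.
        raise ValueError("capture rejected: no signal present")
    return energy / total                         # normalized so captures are comparable

# Example with a synthetic "voice": repeated captures of the same signal
# produce nearly identical references, which is what matching relies on.
rate = 8000
t = np.arange(rate) / rate
capture = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)
reference = extract_reference(capture)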
Although a reference con-\ntains information that has been extracted from cap-\ntured biometric data, it may be possible to recover the \nSignal\nProcessing\n FIGURE 37.2 Symbol used to indicate a signal processing subsystem. \nData\nCapture\n FIGURE 37.1 Symbol used to indicate a data capture subsystem. \n" }, { "page_number": 682, "text": "Chapter | 37 Biometrics\n649\ncaptured biometric data, or a good approximation to it, \nfrom a template. 3 \n Note that though several standards exist that define \nthe format of biometric references for many technolo-\ngies, these standards do not describe how references are \nobtained from captured biometric data. This means that \nthere is still room for vendor innovation while remaining \nin compliance with existing standards. \n Matching \n A matching subsystem receives a reference from a signal \nprocessing subsystem and then compares the reference \nwith a template from a data storage subsystem. The out-\nput of the matching subsystem is a numeric value called \na comparison score that indicates how closely the two \nmatch. \n Random variations occur in a data capture subsystem \nwhen it is used. This means that the reference created \nfrom the captured data is different each time, even for the \nsame user. This makes the comparison score created for \na particular user different each time they use the system, \nwith random variations occurring around some average \nvalue. This concept is shown in Figure 37.3 , in which the \ndistribution of comparison scores that are calculated from \nrepeated captures of biometric data from a single user are \nrandom. Such random data tend to be close to an average \nvalue every time that they are calculated from captured \nbiometric data but not exactly the average value. \n This is much like the case we get in other situations \nwhere observed data has a random component. Suppose \nthat we flip a fair coin 100 times and count how many \ntimes the result “ heads ” appears. We expect to see this \nresult an average of 50 times, but this average value actu-\nally occurs fairly rarely; exactly 50 out of 100 flips com-\ning up heads happen less than 8% of the time. On the \nother hand, the number of heads will usually be not too \nfar from the average value of 50, with the number being \nbetween 40 and 60 more than 95% of the time. Similarly, \nwith biometrics, captured data will probably be close, \nbut not identical, to an average value, and it will also not \nbe too different from the average value. \n The comparison score calculated by a matching sub-\nsystem is passed to a decision subsystem, where it is \nused to make a decision about the identity of the person \nwho was the source of the biometric data. The symbol \nthat is used to indicate a matching subsystem is shown \nin Figure 37.4 . \n Data Storage \n A data storage subsystem stores templates that are used \nby the matching subsystem. The symbol that is used to \nindicate a data storage subsystem is shown in Figure 37.5 . \n A database is one obvious candidate for a place to \nstore templates, but it is possible to store a template on \na portable data storage device such as a chip card or a \nsmart card. The relative strengths and weaknesses of dif-\nferent ways of doing this are discussed in the section on \nsecurity considerations. \n Decision \n A decision subsystem takes a comparison score that is \nthe output of a matching subsystem and returns a binary \n yes or no decision from it. 
This decision indicates \nwhether or not the matching subsystem made a compari-\nson which resulted in a match or not. The value yes is \nreturned if the comparison was probably a match; the \nvalue no is returned is the comparison was probably not \nScore\nProbability\nof Score\n FIGURE 37.3 Distribution in comparison scores for a typical user. \nMatching\n FIGURE 37.4 Symbol used to indicate a matching subsystem. \nData\nStorage\n FIGURE 37.5 Symbol used to indicate a data storage subsystem. \n 3 M. Martinez-Diaz, et al., “ Hill-climbing and brute-force attacks \non biometric systems: A case study in match-on-card fi ngerprint \nverifi cation, ” Proceedings of the 40th IEEE International Carahan \nConference on Security Technology , Lexington, October 2006, \npp. 151 – 159. \n" }, { "page_number": 683, "text": "PART | VI Physical Security\n650\na match. The symbol that is used to indicate a decision \nsubsystem is shown in Figure 37.6 . \n To make a yes or no decision, a decision subsystem \ncompares a comparison score with a parameter called a \n threshold . The threshold value represents a measure of \nhow good a comparison needs to be to be considered a \nmatch. If the comparison score is less than or equal to \nthe threshold value then the decision subsystem returns \nthe value yes . If the comparison score is greater than the \nthreshold, it returns the value no . Comparison scores \nthat will result in a yes or no response from a decision \nsubsystem are shown in Figure 37.7 . Comparison scores \nin the gray area of this illustration are close to the aver-\nage value and result in a yes, whereas comparison scores \nthat are outside the gray area are too far from the aver-\nage value and result in a no . In Figure 37.7 , the thresh-\nold value defines how far the gray area extends from \nthe central average value. If the threshold is decreased, \nthe size of the gray area will get narrower and decrease \nin size so that fewer comparison scores result in a yes \nanswer. If the threshold is increased, the gray area will \nget wider and increase in size so that more comparison \nscores result in a yes answer. \n Errors may occur in any decision subsystem. There \nare two general types of errors that can occur. In one \ncase, a decision subsystem makes the incorrect decision \nof no instead of yes . In this case, a user is indeed who \nshe claims to be, but large random errors occur in the \ndata capture subsystem and cause her to be incorrectly \nrejected. This type of error might result in the legitimate \nuser Alice inaccurately failing to authenticate as herself. \n This class of error is known as a type-1 error by \nstatisticians, 4 a term that would almost certainly be a \ncontender for an award for the least meaningful termi-\nnology ever invented if such an award existed. It was \nonce called false rejection by biometrics researchers and \nvendors, a term that has more recently been replaced by \nthe term false nonmatch . One way in which the accuracy \nof biometric systems is now typically quantified is by \ntheir false nonmatch rate (FNMR), a value that estimates \nthe probability of the biometric system making a type-1 \nerror in its decision subsystem. \n In the second case, a decision subsystem incorrectly \nreturns a yes instead of a no . In this case, random errors \noccur that let a user be erroneously recognized as a dif-\nferent user. This might happen if the user Alice tries to \nauthenticate as the user Bob, for example. 
This class of \nerror is known as a type-2 error by statisticians. 5 It was \nonce called false acceptance by biometrics researchers \nand vendors, a term that has been more recently been \nreplaced by the term false match . This leads to quantify-\ning the accuracy of biometrics by their false match rate \n(FMR), a value that estimates the probability of the bio-\nmetric system making a type-2 error. \n For a particular biometric technology, it is impossible \nto simultaneously reduce both the FNMR and the FMR, \nalthough improving the technology does make it possible \nto do this. If the parameters used in a matching subsys-\ntem are changed so that the FNMR decreases, the FMR \nrate must increase; if the parameters used in a matching \nsubsystem are changed so that the FMR decreases, the \nFNMR must increase. This relationship follows from \nthe nature of the statistical tests that are performed by the \ndecision subsystem and is not limited to just biometric \nsystems. Any system that makes a decision based on sta-\ntistical data will have the same property. The reason for \nthis is shown in Figures 37.8 and 37.9 . \n Suppose that we have two users of a biometric sys-\ntem: Alice and Bob, whose comparison scores are \ndistributed as shown in Figure 37.8 . Note that the distri-\nbutions of these values overlap so that in the area where \nthey overlap, the comparison score could have come \n 4 J. Neyman and E. Pearson, “ On the use and interpretation of certain \ntest criteria for purposes of statistical inference: Part I, ” Biometrika , \nVol. 20A, No. 1 – 2, pp. 175 – 240, July 1928. \n 5 S. King, H. Harrelson and G. Tran, “ Testing iris and face recognition \nin a personnel identifi cation application, ” 2002 Biometric Consortium \nConference, February 2002. \nDecision\n FIGURE 37.6 Symbol used to indicate a decision subsystem. \n FIGURE 37.7 Comparison scores close to the average that result in a \n yes decision. \n" }, { "page_number": 684, "text": "Chapter | 37 Biometrics\n651\nfrom either Alice or Bob, but we cannot tell which. If \nthe average values that we expect for Alice and Bob are \nfar enough apart, the chances of this happening may get \nextremely low, but even in such cases it is possible to \nhave large enough errors creep into the data capture step \nto make even the rarest of errors possible. \n Figure 37.9 shows how a false match can occur. Suppose \nthat Bob uses our hypothetical biometric system but claims \nto be Alice when he does this, and the output of the match-\ning subsystem is the point B that is shown in Figure 37.9 . \nBecause this point is close enough to the average that we \nexpect from biometric data from Alice, the decision sub-\nsystem will erroneously decide that the biometric data \nthat Bob presented is good enough to authenticate him as \nAlice. This is a false match, and it contributes to the FMR \nof the system. \n Figure 37.10 shows how a false nonmatch can occur. \nSuppose that Alice uses our hypothetical biometric sys-\ntem and the output of the matching subsystem is the \npoint A that is shown in Figure 37.10 . Because this point \nis too far from the average that we expect when Alice \nuses the system, it is more likely to have come from \nsomeone else other than from Alice, and the decision \nsubsystem will erroneously decide that the biometric \ndata that Alice presented is probably not hers. This is a \nfalse nonmatch, and it contributes to the FNMR of the \nsystem. 
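The trade-off just illustrated with Alice and Bob can be reproduced with a short simulation: draw comparison scores for genuine and impostor attempts from two overlapping distributions, apply the decision rule (a score at or below the threshold is a yes), and count the two kinds of errors. Everything in the sketch below, including the distributions, their parameters, and the thresholds, is invented purely for illustration; moving the threshold shows one error rate falling as the other rises.

import random

random.seed(1)

# Invented, overlapping score distributions; lower scores mean closer matches.
genuine = [abs(random.gauss(0.0, 1.0)) for _ in range(20000)]    # user vs. own template
impostor = [abs(random.gauss(4.0, 1.5)) for _ in range(20000)]   # user vs. someone else's

def decide(score: float, threshold: float) -> bool:
    # The decision subsystem's rule: "yes" when the comparison score
    # is less than or equal to the threshold.
    return score <= threshold

def error_rates(threshold: float) -> tuple[float, float]:
    fnmr = sum(not decide(s, threshold) for s in genuine) / len(genuine)   # type-1: false nonmatch
    fmr = sum(decide(s, threshold) for s in impostor) / len(impostor)      # type-2: false match
    return fnmr, fmr

for threshold in (0.5, 1.0, 2.0, 3.0):
    fnmr, fmr = error_rates(threshold)
    print(f"threshold={threshold:.1f}  FNMR={fnmr:.4f}  FMR={fmr:.4f}")
# Raising the threshold lowers the FNMR but raises the FMR, and vice versa;
# sweeping the threshold traces out the ROC curve discussed next.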
Because the FNMR and FMR are related, the most meaningful way to represent the accuracy of a biometric system is probably by showing the relationship between the two error rates. The relationship between the two is known by the term receiver operating characteristic, or ROC, a term that originated in the study of the sensitivity of radio receivers as their operating parameters change. Figure 37.11 shows an ROC curve for a hypothetical biometric. Such an ROC curve assumes that the only way in which the error rates are changed is by changing the threshold value that is used in the decision subsystem. Note that this ROC curve indicates that when the FMR increases the FNMR decreases, and vice versa.

By adjusting the threshold that a decision subsystem uses, it is possible to make the FMR very low while allowing the FNMR to get very high, or to allow the FMR to get very high while making the FNMR very low. Between these two extreme cases lies the case where the FMR and the FNMR are the same. This point is sometimes called the equal error rate (EER) or crossover error rate (CER) and is often used to simplify discussions of error rates for biometric systems.

FIGURE 37.8 Overlap in possible comparison scores for Alice and Bob.
FIGURE 37.9 Type-2 error that causes a false match.
FIGURE 37.10 Type-1 error that causes a false nonmatch.
FIGURE 37.11 ROC for a hypothetical biometric system.
" }, { "page_number": 685, "text": "PART | VI Physical Security 652

Though using a single value does indeed make it easier to compare the performance of different biometric systems, it can also be somewhat misleading. In high-security applications like those used by government or military organizations, keeping unauthorized users out may be much more important than the inconvenience caused by a high FNMR. In consumer applications, like ATMs, it may be more important to keep the FNMR low. This can help avoid the anger and accompanying support costs of dealing with customers who are incorrectly denied access to their accounts. In such situations, a low FNMR may be more important than the higher security that a lower FMR would provide. The error rates that are acceptable are strongly dependent on how the technology is being used, so be wary of trying to understand the performance of a biometric system by considering only the CER.

There is no theoretical way to accurately estimate the FMR and FNMR of biometric systems, so all estimates of these error rates need to be made from empirical data. Because testing can be expensive, the sample sizes used in such testing are often relatively small, so the results may not be representative of larger and more general populations. This is further complicated by the fact that some of the error rates that such testing attempts to estimate are fairly low. This means that human error from mislabeling data or other mistakes that occur during testing may make a bigger contribution to the measured error rates than the errors caused by a decision subsystem. It may be possible to create a biometric system that makes an error roughly only one time in 1 million operations, for example, but it is unrealistic to expect such high accuracy from the people who handle the data in an experiment that tries to estimate such an error rate.
And because there are no standard-\nized sample sizes and test conditions for estimating these \nerror rates, there can be a wide range of reliability of error \nrate estimates. In one study, 5 a biometric system that per-\nformed well in a laboratory setting when used by trained \nusers ended up correctly identifying enrolled users only \n51% of the time when it was tested in a pilot project under \nreal-world conditions, perhaps inviting an unenviable \ncomparison with a system that recognizes a person by his \nability to flip a coin and have it come up heads. Because \nof these effects, estimates of error rates should be viewed \nwith a healthy amount of skepticism, particularly when \nextremely low rates are claimed. \n Adaptation \n Some biometric data changes over time. This may result in \nmatches with a template becoming worse and worse over \ntime, which will increase the FNMR of a biometric s ystem. \nOne way to avoid the potential difficulties associated with \nhaving users eventually becoming unrecognizable is to \nupdate their template after a successful authentication. This \nprocess is called adaptation , and it is done by an optional \npart of a biometric system called an adaptation subsystem. \nIf an adaptation subsystem is present, the symbol shown in \n Figure 37.12 is used to indicate it. \n 3. USING BIOMETRIC SYSTEMS \n There are three main operations that a biometric system \ncan perform. These are the following. \n ● Enrollment. During this operation, a biometric sys-\ntem creates a template that is used in later authenti-\ncation and identification operations. This template, \nalong with an associated identity, is stored in a data \nstorage subsystem. \n ● Authentication. During this operation, a biometric \nsystem collects captured biometric data and a claimed \nidentity and determines whether or not the captured \nbiometric data matches the template stored for that \nidentity. Although the term authentication is almost \nuniversally used in the information security industry \nfor this operation, the term verification is often used \nby biometrics vendors and researchers to describe this. \n ● Identification. During this operation, a biometric sys-\ntem collects captured biometric data and attempts to \nfind a match against any of the templates stored in a \ndata storage subsystem. \n Enrollment \n Before a user can use a biometric system for either \nauthentication or identification, a data storage subsystem \nneeds to contain a template for the user. The process of \ninitializing a biometric system with such a template is \ncalled enrollment , and it is the source of another error \nrate that can limit the usefulness of biometric systems. \nThe interaction of the subsystems of a biometric system \nwhen enrolling a user is shown in Figure 37.13 . \n In the first step of enrollment, a user presents his bio-\nmetric data to a data capture subsystem. The captured \nbiometric data is then converted into a reference by a sig-\nnal processing subsystem. This reference is then stored \nAdaptation\n FIGURE 37.12 Symbol used to indicate an adaptation subsystem. \n" }, { "page_number": 686, "text": "Chapter | 37 Biometrics\n653\nin a data storage subsystem, at which point it becomes \na template. Such a template is typically calculated from \nseveral captures of biometric data to ensure that it reflects \nan accurate average value. An optional step includes \nusing a matching subsystem to ensure that the user is not \nalready enrolled. 
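The enrollment steps above can be sketched in a few lines: several captures are reduced to references, averaged into a template, and stored, with an optional check that the new template does not already match an enrolled user. This is a minimal illustration, not a vendor API; the distance-based matching rule, the thresholds, and the consistency check used here are assumptions made only to keep the example self-contained.

import numpy as np

templates: dict[str, np.ndarray] = {}      # stands in for the data storage subsystem

def is_match(reference: np.ndarray, template: np.ndarray, threshold: float = 0.1) -> bool:
    # Matching plus decision: Euclidean distance is the comparison score,
    # and a score at or below the threshold is a "yes".
    return float(np.linalg.norm(reference - template)) <= threshold

def enroll(identity: str, references: list, threshold: float = 0.1) -> bool:
    """Create and store a template from several captures of one user.

    Returns False on a failure to enroll: either the captures disagree
    too much with one another, or the user already appears to be enrolled.
    """
    refs = np.stack(references)
    template = refs.mean(axis=0)           # the template is the average of the captured references
    if max(float(np.linalg.norm(r - template)) for r in refs) > threshold:
        return False                       # inconsistent captures; contributes to the FER
    for other_identity, other in templates.items():
        if other_identity != identity and is_match(template, other, threshold):
            return False                   # optional duplicate-enrollment check
    templates[identity] = template
    return True

# Example with made-up three-dimensional "references" for one user.
ok = enroll("alice", [np.array([0.20, 0.50, 0.30]),
                      np.array([0.22, 0.48, 0.30]),
                      np.array([0.21, 0.50, 0.29])])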
\n The inherent nature of some captured biometric data \nas well as the randomness of captured biometric data \ncan cause the enrollment process to fail. Some people \nhave biometrics that are far enough outside the normal \nrange of such data that they cause a signal processing \nsubsystem to fail when it attempts to convert their cap-\ntured data into a reference. The same types of random \nerrors that contribute to the FMR and FNMR are also \npresent in the enrollment process, and can be sometimes \nbe enough to turn captured biometric data that would \nnormally be within the range that the signal processing \nsubsystem can handle into data that is outside this range. \nIn some cases, it may even be impossible to collect some \ntypes of biometric data from some users, like the case \nwhere missing hands make it impossible to collect data \non the geometry of the missing hands. \n The probability of a user failing in the enrollment \nprocess is used to calculate the failure to enroll rate \n(FER). Almost any biometric can fail sometimes, either \ntemporarily or permanently. Dry air or sticky fingers can \ncause fingerprints to temporarily change. A cold can \ncause a voice to temporarily become hoarse. A broken \narm can temporarily change the way a person writes his \nsignature. Cataracts can permanently make retina pat-\nterns impossible to capture. Some skin diseases can even \npermanently change fingerprints. \n A useful biometric system should have a low FER, \nbut because all such systems have a nonzero value for this \nrate, it is likely that there will always be some users that \ncannot be enrolled in any particular biometric system, and \na typical FER for a biometric system may be in the range \nof 1% to 5%. For this reason, biometric systems are often \nmore useful as an additional means of authentication in \nmultifactor authentication system instead of the single \nmethod used. \n Authentication \n After a user is enrolled in a biometric system, the sys-\ntem can be used to authenticate this user. The interaction \nof the subsystems of a biometric system when used to \nauthenticate a user is shown in Figure 37.14 . \nData\nCapture\nSignal\nProcessing\nMatching\nData\nStorage\n FIGURE 37.13 Enrollment in a biometric system. \nData\nCapture\nSignal\nProcessing\nMatching\nData\nStorage\nDecision\nYes/No\nAdaptation\n FIGURE 37.14 Authentication with a biometric system. \n" }, { "page_number": 687, "text": "PART | VI Physical Security\n654\n To use a biometric system for authentication, a user \nfirst presents both a claimed identity and his biometric \ndata to a data capture subsystem. The captured biomet-\nric data is then passed to a signal processing subsystem \nwhere features of the captured data are extracted and \nconverted into a reference. A matching subsystem then \ncompares this reference to a template from a data stor-\nage subsystem for the claimed identity and produces a \ncomparison score. This comparison score is then passed \nto a decision subsystem, which produces a yes or no \ndecision that reflects whether or not the biometric data \nagrees with the template stored for the claimed identity. \nThe result of the authentication operation is the value \nreturned by the decision subsystem. \n A false match that occurs during authentication will \nallow one user to successfully authenticate as another user. \nSo if Bob claims to be Alice and a false match occurs, he \nwill be authenticated as Alice. 
A false nonmatch during authentication will incorrectly deny a user access. So if Bob attempts to authenticate as himself, he will be incorrectly denied access if a false nonmatch occurs.

Because biometric data may change over time, an adaptation subsystem may update the stored template for a user after they have authenticated to the biometric system. If this is done, it will reduce the number of times that users will need to go through the enrollment process again when their biometric data changes enough to increase their FNMR to an unacceptable level.

Identification

A biometric system can be used to identify a user who has already enrolled in the system. The interaction of the subsystems of a biometric system when used for identification is shown in Figure 37.15.

To use a biometric system for identification, a user presents his biometric data to a data capture subsystem. The captured biometric data is then passed to a signal processing subsystem where features of the captured data are extracted and converted into a reference. A matching subsystem then compares this reference to each of the templates stored in a data storage subsystem and produces a comparison score for each one. Each of these comparison scores is passed to a decision subsystem, which produces a yes or no decision that reflects whether or not the reference is a good match for that template. If a yes decision is reached, then the identity associated with that template is returned for the identification operation. It is possible for this process to return more than one identity. This may or may not be useful, depending on the application. If a yes decision is not reached for any of the templates in a data storage subsystem, then a response that indicates that no match was found is returned for the identification operation.

A false match that occurs during identification will incorrectly identify a user as another enrolled user. So if Bob uses a biometric system for identification, he might be incorrectly identified as Alice if a false match occurs. Because there are typically many comparisons done when a biometric system is used for identification, the FMR can increase dramatically because there is an opportunity for a false match with every comparison. Suppose that for a single comparison we have an FMR of ε₁ and that εₙ represents the FMR for n comparisons. These two error rates are related by εₙ = 1 − (1 − ε₁)ⁿ. If n·ε₁ ≪ 1, then εₙ ≈ n·ε₁. This means that for a small FMR, the FMR is increased by a factor equal to the number of enrolled users when a system is used for identification instead of authentication. So an FMR of 10⁻⁶ when a system is used for authentication will be increased to approximately 10⁻³ if the identification is done by comparing against 1000 templates. A false nonmatch during identification will fail to identify an enrolled user as one who is enrolled in the system. So Bob might be incorrectly rejected, even though he is actually an enrolled user.

FIGURE 37.15 Identification with a biometric system.
" }, { "page_number": 688, "text": "Chapter | 37 Biometrics 655

4.
SECURITY CONSIDERATIONS \n Biometric systems differ from most other authentication \nor identification technologies in several ways, and these \ndifferences should be understood by anyone consider-\ning using such systems as part of an information security \narchitecture. \n Biometric data is not secret, or at least it is not very \nsecret. Fingerprints, for example, are not very secret \nbecause so-called latent fingerprints are left almost eve-\nrywhere. On the other hand, reconstructing enough of \na fingerprint from latent fingerprints to fool a biomet-\nric system is actually very difficult because latent fin-\ngerprints are typically of poor quality and incomplete. \nBecause biometric data is not very secret, it may be use-\nful to verify that captured biometric data is fresh instead \nof being replayed. There are technologies available that \nmake it more difficult for an adversary to present fake \nbiometric data to a biometric system for this very pur-\npose. The technology exists to distinguish between a liv-\ning finger and a manufactured copy, for example. Such \ntechnologies are not foolproof and can themselves be \ncircumvented by clever attackers. This means that they \njust make it more difficult for an adversary to defeat a \nbiometric system, but not impossible. \n It is relatively easy to require users to frequently change \ntheir passwords and to enforce the expiration of crypto-\ngraphic keys after their lifetime has passed, but many types \nof biometric data last for a long time, and it is essentially \nimpossible to force users to change their biometric data. So \nwhen biometric data is compromised in some way, it is not \npossible to reissue new biometric data to the affected users. \nFor that reason, it may be useful to both plan for alternate \nforms of authentication or identification in addition to a \nbiometric system and to not rely on a single biometric sys-\ntem being useful for long periods of time. \n Biometrics used for authentication may have much \nlower levels of security than other authentication technol-\nogies. This, plus the fact there is usually a non-zero FER \nfor any biometric system, means that biometric systems \nmay be more useful as an additional means of authenti-\ncation than as a technology that can work alone. \n Types of authentication technology can be divided \ninto three general categories or “ factors ” : \n ● Something that a user knows , such as a password \nor PIN \n ● Something that a user has , such as a key or access \ncard \n ● Something that a user is or does , which is exactly the \ndefinition of a biometric \n To be considered a multifactor authentication sys-\ntem, a system must use means from more than one of \nthese categories to authenticate a user so that a system \nthat uses two independent password-based systems for \nauthentication does not qualify as a multifactor authen-\ntication system, whereas one that uses a password plus a \nbiometric does. There is a commonly held perception that \nmultifactor authentication is inherently more secure than \nauthentication based on only a single factor, but this is \nnot true. The concepts of strong authentication, in which \nan attacker has a small chance of bypassing the means of \nauthentication, and multifactor authentication are totally \nindependent. It is possible to have strong authentication \nbased on only one factor. It is also possible to have weak \nauthentication based on multiple authentication factors. 
\nSo, including a biometric system as part of a multifactor \nauthentication system should be done for reasons other \nthan to simply use more than a single authentication fac-\ntor. It may be more secure to use a password plus a PIN \nfor authentication than to use a password plus a biomet-\nric, for example, even though both the password and PIN \nare the same type of authentication factor. \n Error Rates \n The usual understanding of biometric systems assumes \nthat the FMR and FNMR of a biometric system are due \nto random errors that occur in a data capture subsystem. \nIn particular, this assumes that the biometrics that are \nused in these systems are actually essentially unique. \nThe existing market for celebrity lookalikes demonstrates \nthat enough similarities exist in some physical features to \njustify the concern that similarities also exist in the more \nsubtle characteristics that biometric systems use. \n We assume, for example, that fingerprints are unique \nenough to identify a person without any ambiguity. This \nmay indeed be true, 6 but there has been little careful \nresearch that demonstrates that this is actually the case. \nThe largest empirical study of the uniqueness of finger-\nprints used only 50,000 fingerprints, a sample that could \nhave come from as few as 5000 people, and has been \ncriticized by experts for its careless use of statistics. 7 This \nis an area that deserves a closer look by researchers, but \nthe expense of large-scale investigations probably means \nthat they will probably never be carried out, leaving the \nuniqueness of biometrics an assumption that underlies the \n 6 S. Pankanti, S. Prabhakar and A. Jain, “ On the individuality of \nfi ngerprints, ” IEEE Transactions on Pattern Analysis and Machine \nIntelligence , Vol. 24, No. 8, pp. 1010 – 1025, August 2002. \n 7 S. Cole, Suspect Identities: A History of Fingerprinting and \nCriminal Identifi cation , Harvard University Press, 2002. \n" }, { "page_number": 689, "text": "PART | VI Physical Security\n656\nuse of the technology for security applications. Note that \nother parts of information security also rely on assump-\ntions that may never be proved. The security provided \nby all public-key cryptographic algorithms, for example, \nassumes that certain computational problems are intracta-\nble, but there are currently no proofs that this is actually \nthe case. \n The chances that the biometrics used in security sys-\ntems will be found to be not unique enough for use in \nsuch systems is probably remote, but it certainly could \nhappen. One easy way to prepare for this possibility is \nto use more than one biometric to characterize users. In \nsuch multi-modal systems, if one of the biometrics used \nis found to be weak, the others can still provide adequate \nstrength. On the other hand, multi-modal systems have \nthe additional drawback of being more expensive that a \nsystem that uses a single biometric. \n Note that a given error rate can have many differ-\nent sources. An error rate of 10% could be caused by an \nentire population having an error rate of 10%, or it could \nbe caused by 90% of a population having an error rate of \nzero and 10% of the population having an error rate of \n100%. The usability of the system is very different in each \nof these cases. In one case, all users are equally inconven-\nienced, but in the other case, some users are essentially \nunable to use the system at all. 
So understanding how errors are distributed can be important in understanding how biometric systems can be used. If a biometric system is used to control access to a sensitive facility, for example, it may not be very useful in this role if some of the people who need entry to the facility are unlucky enough to have a 100% FNMR. Studies have suggested that error rates are not uniformly distributed in some populations, but they are not quite as bad as the worst case. The nonuniform distribution of error rates that is observed in biometric systems is often called Doddington's Zoo and is named after the researcher who first noticed this phenomenon and the colorful names that he gave to the classes of users who made different contributions to the observed error rates.

Doddington's Zoo

Based on his experience testing biometric systems, George Doddington divided people into four categories: sheep, goats, lambs, and wolves.8 Sheep are easily recognized by a biometric system and comprise most of the population. Goats are particularly unsuccessful at being recognized. They have chronically high FNMRs, usually because their biometric is outside the range that a particular system recognizes. Goats can be particularly troublesome if a biometric system is used for access control, where it is critical that all users be reliably accepted. Lambs are exceptionally vulnerable to impersonation, so they contribute to the FMR. Wolves are exceptionally good false matchers, and they also make a significant contribution to the FMR.

Doddington's goats can also cause another problem. Because their biometric pattern is outside the range that a particular biometric system expects, they may be unable to enroll in such a system and thus be major contributors to the FER of the system.

Note that users who are sheep for one biometric may turn out to be goats for another, and so on. Because of this, it is probably impossible to know in advance how error rates are distributed for a particular biometric system, so it is almost always necessary to test such systems thoroughly before deploying them on a wide scale.

Birthday Attacks

Suppose that we have n users enrolled in a biometric system. This biometric system maps arbitrary inputs into n + 1 states that represent deciding on a match with one of the n users plus the additional "none of the above" state that represents the option of deciding on a match with none of the n enrolled users. From this point of view, this biometric system acts like a hash function, so we might try to use well-known facts about hash functions to understand the limits that this property puts on error rates. In particular, errors caused by the FMR of a biometric system look like a collision in this hash function, which happens when two different input values to a hash function result in the same output from the hash function. For this reason, we might think that the same "birthday attack" that can find collisions for a hash function can also increase the FMR of a biometric system. The reason for this is as follows.

For a hash function that maps inputs into m different message digests, the probability of finding at least one collision, a case where different inputs map to the same message digest, after calculating n message digests is approximately 1 − e^(−n²/2m).9 Considering birthdays as
9 D.
" }, { "page_number": 690, "text": "Chapter | 37 Biometrics\n657\nConsidering birthdays as a hash function that maps people into one of 365 possible birthdays, this tells us that the probability of two or more people having the same birthday in a group of only 23 people is approximately 1 − e^(−23²/(2 · 365)) ≈ 0.52. This means that there is greater than a 50% chance of finding two people with the same birthday in a group of only 23 people, a result that is often counter to people’s intuition. Using a biometric system is much like using a hash function in that it maps biometric data to the templates in the data storage subsystem for which there is a good match, and collisions in this hash function cause a false match. Therefore, the FMR may increase as more users are added to the system, and if it does, we might expect it to increase in the same way that the chance of a collision in a hash function does. This might cause false matches at a higher rate than we might expect, just as the chance of finding matching birthdays does. \n In practice, however, this phenomenon is essentially not observed. This may be due to the nonuniform distribution of error rates in Doddington’s Zoo. If the threshold used in a decision subsystem is adjusted to create a particular FMR, it may be limited by the properties of Doddington’s lambs. This may leave the sheep that comprise the majority of the user population with enough room to add additional users without getting too close to other sheep. \n ANS X9.84 requires that the FMR for biometric systems provide at least the level of security provided by a four-digit PIN, which equates to an FMR of no greater than 10^−4, and recommends that they provide an FMR of no more than 10^−5. In addition, this standard requires that the corresponding FNMR be no greater than 10^−2 at the FMR selected for use. These error rates may be too ambitious for some existing technologies, but it is certainly possible to attain them with others. \n On the other hand, these error rates compare very unfavorably with other authentication technologies. For example, an FMR of 10^−4 is roughly the same as the probability of randomly guessing a four-digit PIN, a three-character password, or a 13-bit cryptographic key. And although few people would find three-character passwords or 13-bit cryptographic keys acceptable, they might have to accept an FMR of 10^−4 from a biometric system because of the limitations of affordable current technologies. Table 37.3 summarizes how the security provided by various FMRs compares to both the security provided by all-numeric PINs and passwords that use only case-independent letters.
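The correspondence summarized in Table 37.3 is essentially a change of units: an FMR of p offers roughly the guessing resistance of a PIN of log10(1/p) digits, a case-insensitive password of log26(1/p) letters, or a key of log2(1/p) bits. The following short sketch is our own illustration of that arithmetic; the table’s entries are these values rounded to whole digits, letters, and bits.

import math

def equivalent_strengths(fmr):
    # Rough equivalents of a false match rate, treated as the guessing
    # probability for a random PIN, password, or cryptographic key.
    return (math.log10(1.0 / fmr),        # all-numeric PIN digits
            math.log(1.0 / fmr, 26),      # case-insensitive letters
            math.log2(1.0 / fmr))         # key bits

for exponent in range(3, 9):              # FMRs of 10^-3 through 10^-8
    digits, letters, bits = equivalent_strengths(10.0 ** -exponent)
    print(f"FMR 10^-{exponent}: {digits:.0f} digits, "
          f"{letters:.1f} letters, {bits:.0f} bits")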
 \n Comparing Technologies \n There are many different biometrics that are used in currently available biometric systems. Each of these competing technologies tries to be better than the alternatives in some way, perhaps being easier to use, more accurate, or cheaper to operate. Because there are so many technologies available, however, it should come as no surprise that there is no single “ best ” technology for use in biometric systems. Almost any biometric system that is available is probably the best solution for some problem, and it is impossible to list all of the cases where each technology is the best without a very careful analysis of each authentication or identification problem. So any attempt to make a simple comparison between the competing technologies will be inherently inaccurate. Despite this, Table 37.4 attempts to make such a high-level comparison. In this table, the ease of use, accuracy, and cost are rated as high, medium, or low. \n Using DNA as a biometric provides an example of the difficulty involved in making such a rough classification. \n TABLE 37.3 Comparison of Security Provided by Biometrics and Other Common Mechanisms \n FMR | PIN Length | Password Length | Key Length \n 10^−3 | 3 digits | 2 letters | 10 bits \n 10^−4 | 4 digits | 3 letters | 13 bits \n 10^−5 | 5 digits | 4 letters | 17 bits \n 10^−6 | 6 digits | 4 letters | 20 bits \n 10^−7 | 7 digits | 5 letters | 23 bits \n 10^−8 | 8 digits | 6 letters | 27 bits \n TABLE 37.4 Comparison of Selected Biometric Technologies \n Biometric | Ease of use | Accuracy | Cost \n DNA | Low | High | High \n Face geometry | High | Medium | Medium \n Fingerprint | High | High | Low \n Hand geometry | Medium | Medium | Medium \n Iris | High | High | High \n Retina | Medium | High | High \n Signature dynamics | Low | Medium | Medium \n Voice | Medium | Low | Low \n (Password) | (Medium) | (Low) | (Low) \n" }, { "page_number": 691, "text": "PART | VI Physical Security\n658\nThe accuracy of DNA testing is limited by the fairly large number of identical twins that are present in the overall population, but in cases other than distinguishing identical twins it is very accurate. So if identical twins need to be distinguished, it may not be the best solution. By slightly abusing the usual understanding of what a biometric is, it is even possible to think of passwords as a biometric that is based purely on behavioral characteristics, along with the FMR, FNMR, and FER rates that come with their use, but they are certainly outside the commonly understood meaning of the term. Even if they are not covered by the usual definition of a biometric, passwords are fairly well understood, so they provide a point of reference for comparing against the relative strengths of biometrics that are commonly used in security systems. The accuracy of passwords here is meant to be that of passwords that users select for their own use instead of being randomly generated. Such passwords are typically much weaker than their length indicates because of the structure that people need to make passwords easy to remember. This means that the chances of guessing a typical eight-character case-insensitive password are actually much greater than the 26^−8 that we would expect for strong passwords. Studies of the randomness in English words have estimated that there is approximately one bit of randomness per letter.
10 If we conservatively double this to estimate that there are approximately two bits of randomness per letter in a typical user-selected password, we get the estimate that an eight-character password probably provides only about 16 bits of randomness, which is close to the security provided by a biometric system with an FMR of 10^−5. This means that the security of passwords as used in practice is often comparable to that attainable by biometric systems, and perhaps even less if weak passwords are used. \n Storage of Templates \n One obvious way to store the templates used in a biometric system is in a database. This can be a good solution, and the security provided by the database may be adequate to protect the templates that it stores. In other cases, it may be more useful for a user to carry his template with him on some sort of portable data storage device and to provide that template to a matching subsystem along with his biometric data. Portable, credit-card-sized data storage devices are often used for this purpose. There are three general types of such cards that are used in this way, and each has a different set of security considerations that are relevant to it. \n In one case, a memory card with unencrypted data storage can be used to store a template. This is the least expensive option, but also the least secure. Such a memory card can be read by anyone who finds it and can easily be duplicated, although it may be impossible for anyone other than the authorized user to use it. Nonbiometric data stored on such a card may also be compromised when a card is lost. \n In principle, a nonauthorized user can use such a card to make a card that lets them authenticate as an authorized user. This can be done as follows. Suppose that Eve, a nonauthorized user, gets the memory card that stores the template for the authorized user Alice. Eve may be able to use Alice’s card to make a card that is identical in every way to Alice’s card but that has Eve’s template in place of Alice’s. Then when Eve uses this card to authenticate, she uses her biometric data, which then gets compared to her template on the card and gets her authenticated as the user Alice. Note that doing this relies on a fairly unsecured implementation. \n A case that is more secure and also more expensive is a memory card in which data storage is encrypted. The contents of such a card can still be read by anyone who finds it and can easily be duplicated, but it is infeasible for an unauthorized user to decrypt and use the data on the card, which may also include any nonbiometric data on the card. Encrypting the data storage also makes it impractical for an unauthorized user to make a card that will let them authenticate as an authorized user. Though it may be possible to simply replace one template with another if the template is stored unencrypted on a memory card, carrying out the same attack on a memory card that stores data encrypted requires being able to create a valid encrypted template, which is just as difficult as defeating the encryption. \n The most secure as well as the most expensive case is where a smart card with cryptographic capabilities is used to store a template. The data stored on such a smart card can only be read and decrypted by trusted applications, so that it is infeasible for anyone who finds a lost smart card to read data from it or to copy it.
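A brief illustration of the point behind both the encrypted memory card and the smart card may help here: if a stored template is encrypted under an authenticated cipher whose key never leaves the issuer or the card’s trusted application, Eve’s template-substitution attack from the unencrypted case no longer works. The sketch below is ours, not a description of any particular card; it assumes the third-party Python cryptography package and an invented template format.

# Minimal sketch (assumptions: Python "cryptography" package installed,
# made-up template format). Fernet provides encryption plus an integrity
# check, so a substituted template is rejected rather than accepted.
from cryptography.fernet import Fernet, InvalidToken

issuer_key = Fernet.generate_key()        # held by the issuer, never on the card
issuer = Fernet(issuer_key)

# The card stores only the encrypted, integrity-protected template.
alice_card = issuer.encrypt(b"template:alice:...")

# Eve can copy the ciphertext, but without the issuer's key anything she
# writes in its place fails the integrity check on decryption.
eve = Fernet(Fernet.generate_key())
forged_card = eve.encrypt(b"template:eve:...")

try:
    issuer.decrypt(forged_card)
except InvalidToken:
    print("substituted template rejected")

print(issuer.decrypt(alice_card))         # the genuine template still decrypts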
This makes \nit infeasible for unauthorized users to use a smart card to \ncreate a way to authenticate as an authorized user. It also \nprotects any nonbiometric data that might be stored on \nthe card. \n 10 C. Shannon, “ Prediction and entropy of printed english, ” Bell \nSystem Technical Journal , Vol. 30, pp. 50 – 64, January 1951. \n" }, { "page_number": 692, "text": "Chapter | 37 Biometrics\n659\n 5. CONCLUSION \n Using biometric systems as the basis for security tech-\nnologies for authentication or identification is currently \nfeasible. Each biometric has properties that may make it \nuseful in some situations but not others, and security sys-\ntems based on biometrics have the same property. This \nmeans that there is no single “ best ” biometric for such \nuse and that each biometric technology has an application \nwhere it is superior to the alternatives. \n There is still a great deal of research that needs to \nbe done in the field, but existing technologies have pro-\ngressed to the point that security systems based on bio-\nmetrics are now a viable way to perform authentication \nor identification of users, although the properties of \nbiometrics also make them more attractive as part of a \nmultifactor authentication system instead of the single \nmeans that is used. \n \n" }, { "page_number": 693, "text": "This page intentionally left blank\n" }, { "page_number": 694, "text": "661\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Homeland Security \n Rahul Bhaskar Ph. D. \n California State University \n Bhushan Kapoor \n California State University \n Chapter 38 \n The September 11, 2001, terrorist attacks, permanently \nchanged the way the United States and the world’s other \nmost developed countries perceived the threat from \nterrorism. Massive amounts of resources were mobi-\nlized in a very short time to counter the perceived and \nactual threats from terrorists and terrorist organiza-\ntions. In the United States, this refocus was pushed as \na necessity for what was called homeland security . \nThe homeland security threats were anticipated for the \nIT infrastructure as well. It was expected that not only \nthe IT at the federal level was vulnerable to disrup-\ntions due to terrorism-related attacks but, due to the \nubiquity of the availability of IT, any organization was \nvulnerable. \n Soon after the terrorist attacks, the U.S. Congress \npassed various new laws and enhanced some exist-\ning ones that introduced sweeping changes to home-\nland security provisions and to the existing security \norganizations. The executive branch of the government \nalso issued a series of Homeland Security Presidential \nDirectives to maintain domestic security. These laws \nand directives are comprehensive and contain detailed \nprovisions to make the U.S. secure from its vulnerabili-\nties. Later in the chapter, we describe some principle \nprovisions of these homeland security-related laws and \npresidential directives. Next, we discuss the organiza-\ntional changes that were initiated to support homeland \nsecurity in the United States. Then we highlight the \n9-11 Commission that Congress charted to provide a full \naccount of the circumstances surrounding the attacks \nand to develop recommendations for corrective measures \nthat could be taken to prevent future acts of terrorism. 
\nWe also detail the Intelligence Reform and Terrorism \nPrevention Act of 2004 and the Implementing the 9-11 \nCommission Recommendations Act of 2007. Finally, we \nsummarize the chapter’s discussion. \n 1. STATUTORY AUTHORITIES \n Here we discuss the important homeland security-related \nlaws passed in the aftermath of the terrorist attacks. \nThese laws are listed in Figure 38.1 . \n The USA PATRIOT Act of 2001 \n(PL 107-56) \n Just 45 days after the September 11 attacks, Congress \npassed the USA PATRIOT Act of 2001 (also known as \nThe USA PATRIOT Act of 2001\nHomeland Security Act of 2002\nE-Government Act of 2002\nThe Aviation and Transportation Security Act of 2001\nEnhanced Border Security and Visa Entry Reform Act of 2002\nPublic Health Security, Bioterrorism Preparedness & Response Act of 2002\n FIGURE 38.1 Laws passed in the aftermath of the 9/11 terrorist attacks. \n" }, { "page_number": 695, "text": "PART | VI Physical Security\n662\nthe Uniting and Strengthening America by Providing \nAppropriate Tools Required to Intercept and Obstruct \nTerrorism Act of 2001). This Act, divided into 10 titles, \nexpands law enforcement powers of the government and \nlaw enforcement authorities. 1 These titles are listed in \n Figure 38.2 . \n A summary of the titles is shown in the sidebar, \n “ Summary of USA PATRIOT Act Titles. ” \nEnhancing Domestic Security Against Terrorism\nEnhanced Surveillance Procedures\nInternational Money Laundering Abatement and\nAntiterrorist Financing Act of 2001\nProtecting the Border\nRemoving Obstacles to Investigate Terrorism\nProviding for Victims of Terrorism, Public Safety\nOfficers, and their Families\nIncreased Information Sharing for Critical\nInfrastructure Protection\nStrengthening the Criminal Laws against Terrorism\nImproved Intelligence\nMiscellaneous\nTITLE I\nTITLE II\nTITLE III\nTITLE IV\nTITLE V\nTITLE VI\nTITLE VII\nTITLE VIII\nTITLE IX\nTITLE X\n FIGURE 38.2 USA PATRIOT Act titles. \n 1. “ USA PATRIOT Act of 2001 ” U.S. Government Printing Offi ce, \n http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname \u0003 107_cong_\npublic_laws & docid \u0003 f:publ056.107.pdf (downloaded 10/20/2008). \n Summary of USA PATRIOT Act Titles \n Title I – Enhancing Domestic Security Against Terrorism \n Increased funding for the technical support center at the \nFederal Bureau of Investigation, allowed military assist-\nance to enforce prohibition in certain emergencies, and \nexpanded National Electronic Crime Task Force Initiative. \n Title II – Enhanced Surveillance Procedures \n Authorized to intercept wire, oral, and electronic commu-\nnications relating to terrorism, computer fraud and abuse \noffenses, to share criminal investigative information. It \nallowed seizure of voicemail messages pursuant to warrants \nand subpoenas for records of electronic communications. \nIt provided delaying notice of the execution of a warrant, \npen register and trap and trace authority under the Foreign \nIntelligence Surveillance Act, access to records and other \nitems under FISA, interception of computer trespasser com-\nmunications, and nationwide service of search warrants for \nelectronic evidence. \n Title III – International Money-laundering Abatement and \nAntiterrorist Financing Act of 2001 \n Special measures relating to the following three subtitles \nwere created: \n A. International Counter Money Laundering and Related \nMeasures \n B. Bank Secrecy Act Amendments and Related Improvements \n C. 
Currency Crimes and Protection \n Title IV – Protecting the Border \n Special measures relating to the following three subtitles \nwere created: \n A. Protecting the Northern Border \n B. Enhanced Immigration Provisions \n C. Preservation of Immigration Benefits for Victims of \nTerrorism \n Title V – Removing Obstacles to Investigating Terrorism \n Attorney General and Secretary of State are authorized to \npay rewards to combat terrorism. It allowed DNA iden-\ntification of terrorists and other violent offenders, and \nallowed disclosure of information from National Center for \nEducation Statistics (NCES) surveys. \n Title VI – Providing for Victims of Terrorism, Public Safety \nOfficers, and their Families \n Special measures relating to the following subtitles were \ncreated: \n A. Aid to Families of Public Safety Officers \n B. Amendments to the Victims of Crime Act of 1984 \n Title VII – Increased Information Sharing for Critical \nInfrastructure Protection \n Expansion of regional information sharing systems to facili-\ntate federal, state, and local law enforcement response \nrelated to terrorist attacks. \n" }, { "page_number": 696, "text": "Chapter | 38 Homeland Security\n663\n The Aviation and Transportation Security \nAct of 2001 (PL 107-71) \n The series of September 11 attacks, perpetrated by 19 \nhijackers, killed 3000 people and brought commercial \naviation to a standstill. It became obvious that enhanced \nlaws and strong measures were needed to tighten avia-\ntion security. The Aviation and Transportation Security \nAct of 2001 transfers authority over civil aviation secu-\nrity from the Federal Aviation Administration (FAA) \nto the Transportation Security Administration (TSA). 2 \nWith the passage of the Homeland Security Act of 2002, \nthe TSA was later transferred to the Department of \nHomeland Security. \n Key features of the act include the creation of an \nUndersecretary of Transportation for Security; feder-\nalization of airport security screeners; and the assign-\nment of Federal Security Managers to each airport. Also \nincluded in the act are these provisions: airports provide \nfor the screening of all checked baggage by explosive \ndetection devices; allowing pilots to carry firearms; \n 2 “ Aviation and Transportation Security Act of 2001, ” National \nTransportation Library, http://ntl.bts.gov/faq/avtsa.html (downloaded \n10/20/2008). \n 3 “ Aviation and Transportation Security Act of 2001, ” National \nTransportation Library, http://ntl.bts.gov/faq/avtsa.html (downloaded \n10/20/2008). \n 4 “ Enhanced Border Security and Visa Entry Reform Act of 2002 (PL \n107-173), ” Center for Immigration Studies, www.cis.org/articles/2002/\nback502.html (downloaded 10/20/2008). \n Title VIII – Strengthening the Criminal Laws against Terrorism \n Strengthened laws against terrorist attacks and other acts \nof violence against mass transportation systems and crimes \ncommitted at U.S. facilities abroad. \n Provided for the development and support of cyber \nsecurity forensic capabilities and expanded the biological \nweapons statute. \n Title IX – Improved Intelligence \n Responsibilities of Director of Central Intelligence regarding \nforeign intelligence collected under the Foreign Intelligence \nSurveillance Act of 1978. \n Inclusion of international terrorist activities within scope \nof foreign intelligence under National Security Act of 1947. 
\n Disclosure to Director of Central Intelligence of foreign \nintelligence-related information with respect to criminal \ninvestigations. \n Foreign terrorist asset tracking center. \n Title X – Miscellaneous \n Review of the Department of Justice. \n A. Definition of electronic surveillance. \n B. Venue in money-laundering cases. \n C. Automated fingerprint identification system at overseas \nconsular posts and points of entry to the United States. \n D. Critical infrastructures protection. \nrequiring the electronic transmission of passenger mani-\nfests on international flights prior to landing in the U.S.; \nrequiring background checks, including national security \nchecks, of persons who have access to secure areas at \nairports; and requiring that all federal security screeners \nbe U.S. citizens. 3 These key features are highlighted in \nthe Figure 38.3 . \n Enhanced Border Security and \nVisa Entry Reform Act of 2002 \n(PL 107-173) \n This Act, divided into six titles, represents the most \ncomprehensive immigration-related response to the ter-\nrorist threat. 4 The titles are listed in Figure 38.4 . \n A summary of these titles is shown in the side-\nbar, “ Summary of the Border Security and Visa Entry \nReform Act of 2002. ” \nCreation of an Undersecretary of Transportation for Security\nFederalization of Airport Security Screeners\nAssignment of Federal Security Managers\nAirport Screening by Explosion Detection Devices\nAllowing Pilots to Carry Firearms\nElectronic Transmission of Passenger Manifests on International Flights\n FIGURE 38.3 Key features of the Aviation and Transportation Security Act of 2001. \n" }, { "page_number": 697, "text": "PART | VI Physical Security\n664\n 5 “ Public Health Security, Bioterrorism Preparedness & Response \nAct of 2002, ” U.S. Government Printing Offi ce, http://frwebgate.\naccess.gpo.gov/cgi-bin/getdoc.cgi?dbname \u0003 107_cong_public_\nlaws & docid \u0003 f:publ188.107 (downloaded 10/20/2008). \nTITLE I\nTITLE II\nTITLE III\nTITLE IV\nTITLE V\nTITLE VI\nTITLE VII\nFunding\nInteragency Information Sharing\nVisa Issuance\nInspection and Admission of Aliens\nRemoving the Obstacles to Investigate Terrorism\nForeign Students and Exchange Visitors\nMiscellaneous\n FIGURE 38.4 Border Security and Visa Entry Reform Act of 2002. \n Summary of the Border Security and Visa Entry Reform \nAct of 2002 \n Title I – Funding \n The Act provides for additional staff and training to increase \nsecurity on both the northern and southern borders. \n Title II – Interagency Information Sharing \n The Act requires the President to develop and implement \nan interoperable electronic data system to provide cur-\nrent and immediate access to information contained in \nthe databases of federal law enforcement agencies and the \nintelligence community that is relevant to visa issuance \ndeterminations and determinations of an alien’s admissibil-\nity or deportability. \n Title III – Visa Issuance \n This requires consular officers issuing a visa to an alien to \ntransmit an electronic version of the alien’s visa file to the \nINS so that the file is available to immigration inspectors at \nU.S. ports of entry before the alien’s arrival. \n This Act requires the Attorney General and the Secretary \nof State to begin issuing machine-readable, tamper-resistant \ntravel documents with biometric identifiers. 
\n Title IV – Inspection and Admission of Aliens \n It requires the President to submit to Congress a report \ndiscussing the feasibility of establishing a North American \nNational Security Program to enhance the mutual security \nand safety of the U.S., Canada, and Mexico. \n It also requires that all commercial flights and vessels \ncoming to the U.S. from any place outside the country must \nprovide to manifest information about each passenger, crew \nmember, and other occupant prior to arrival in the U.S. In \naddition, each vessel or aircraft departing from the U.S. \nfor any destination outside the U.S. must provide manifest \ninformation before departure. \n Title VI – Foreign Students and Exchange Visitors \n It requires the Attorney General, in consultation with the \nSecretary of State, to establish an electronic means to \nmonitor and verify the various steps involved in the admit-\ntance to the U.S. of foreign students, such as: the issuance \nof documentation of acceptance of a foreign student by an \neducational institution or exchange visitor program. \n Title VII – Miscellaneous \n The Act requires the Comptroller General to conduct a \nstudy to determine the feasibility of requiring every nonim-\nmigrant alien in the U.S. to provide the INS, on an annual \nbasis, with a current address, and where applicable, the \nname and address of an employer. \n It requires the Secretary of State and the INS Com-\nmissioner, in consultation with the Director of the Office \nof Homeland Security, to conduct a study on the proce-\ndures necessary for encouraging or requiring countries \npartici pating in the Visa Waiver Program to develop an \nintergovernmental network of interoperable electronic data \nsystems. \n Public Health Security, Bioterrorism \nPreparedness & Response Act of 2002 \n(PL 107-188) \n The Act authorizes funding for a wide range of pub-\nlic health initiatives. 5 Title I of the Act addresses the \nnational need to combat threats to public health, and to \nprovide grants to state and local governments to help \nthem prepare for public health emergencies, including \nemergencies resulting from acts of bioterrorism. The \nAct establishes opportunities for grants and cooperative \nagreements for states and local governments to conduct \nevaluations of public health emergency preparedness, \nand enhance public health infrastructure and the capacity \nto prepare for and respond to those emergencies. Other \ngrants support efforts to combat antimicrobial resistance, \n" }, { "page_number": 698, "text": "Chapter | 38 Homeland Security\n665\nimprove public health laboratory capacity, and support \ncollaborative efforts to detect, diagnose, and respond to \nacts of bioterrorism. \n The Act also addresses other related public health \nsecurity issues. Some of these provisions include: \n ● New controls on biological agents and toxins \n ● Additional safety and security measures affecting the \nnation’s food and drug supply \n ● Additional safety and security measures affecting the \nnation’s drinking water \n ● Measures affecting the Strategic National Stockpile \nand development of priority countermeasures to \nbioterrorism \n Homeland Security Act of 2002 \n(PL 107-296) \n This landmark Act establishes a new Executive Branch \nagency, the U.S. Department of Homeland Security \n(DHS), and consolidates the operations of 22 existing \nfederal agencies. 6 \n The primary mission of the DHS is given in \n Figure 38.5 . 
\n As a part of this act, a directorate of information \nanalysis and infrastructure protection was set up. The \nprimary role of this directorate is to 7 : \n 1. To access, receive, and analyze law enforcement \ninformation, intelligence information, and other \ninformation from agencies of the federal government, \nstate and local government agencies (including law \nenforcement agencies), and private sector entities, \nand to integrate such information in order to: \n A. Identify and assess the nature and scope of terrorist \nthreats to the homeland \n B. Detect and identify threats of terrorism against \nthe United States \n C. Understand such threats in light of actual and \npotential vulnerabilities of the homeland \n 2. To carry out comprehensive assessments of the \nvulnerabilities of the key resources and critical \ninfrastructure of the United States, including the \nperformance of risk assessments to determine the \nrisks posed by particular types of terrorist attacks \nwithin the United States (including an assessment \nof the probability of success of such attacks and \nthe feasibility and potential efficacy of various \ncountermeasures to such attacks). \n 3. To integrate relevant information, analyses, and \nvulnerability assessments (whether such information, \nanalyses, or assessments are provided or produced \nby the Department or others) in order to identify \npriorities for protective and support measures by \nthe Department, other agencies of the federal \ngovernment, state and local government agencies \nand authorities, the private sector, and other \nentities. \n 4. To ensure, pursuant to section 202, the timely \nand efficient access by the Department to all \ninformation necessary to discharge the responsibili-\nties under this section, including obtaining such \ninformation from other agencies of the federal \ngovernment. \n 5. To develop a comprehensive national plan for \nsecuring the key resources and critical infrastructure \nof the United States, including power production, \ngeneration, and distribution systems, information \ntechnology and telecommunications systems \n(including satellites), electronic financial and \nproperty record storage and transmission systems, \nemergency preparedness communications systems, \nand the physical and technological assets that \nsupport such systems. \n 6. To recommend measures necessary to protect the\n key resources and critical infrastructure of the \nUnited States in coordination with other agencies of \nthe federal government and in cooperation with state \nand local government agencies and authorities, the \nprivate sector, and other entities. \n 7. To administer the Homeland Security Advisory \nSystem, including: \n A. Exercising primary responsibility for public \nadvisories related to threats to homeland \nsecurity. \n B. In coordination with other agencies of the \nfederal government, providing specific warning \nPrevent Terrorist Attacks\nReduce the Vulnerability of the United States to Terrorism\nMinimize the damage, and assist in recovery from terrorist attacks that do occur in the United States\n FIGURE 38.5 DHS mission. \n 6 “ Homeland Security Act of 2002, ” Homeland Security, www.dhs.gov/\nxabout/laws/law_regulation_rule_0011.shtm (downloaded 10/20/2008). \n 7 “ Homeland Security Act of 2002, ” Homeland Security, www.dhs.gov/\nxabout/laws/law_regulation_rule_0011.shtm (downloaded 10/20/2008). 
\n" }, { "page_number": 699, "text": "PART | VI Physical Security\n666\ninformation, and advice about appropriate \nprotective measures and countermeasures, to state \nand local government agencies and authorities, \nthe private sector, other entities, and the \npublic. \n 8. To review, analyze, and make recommendations \nfor improvements in the policies and procedures \ngoverning the sharing of law enforcement \ninformation, intelligence information, intelligence-\nrelated information, and other information relating \nto homeland security within the federal government \nand between the federal government and state and \nlocal government agencies and authorities. \n 9. To disseminate, as appropriate, information ana-\nlyzed by the Department within the Department, to \nother agencies of the federal government with \nresponsibilities relating to homeland security, and \nto agencies of state and local governments and \nprivate sector entities with such responsibilities \nin order to assist in the deterrence, prevention, \npreemption of, or response to, terrorist attacks \nagainst the United States. \n 10. To consult with the Director of Central Intelligence \nand other appropriate intelligence, law enforce-\nment, or other elements of the federal government \nto establish collection priorities and strategies for \ninformation, including law enforcement-related \ninformation, relating to threats of terrorism against \nthe United States through such means as the \nrepresentation of the Department in discussions \nregarding requirements and priorities in the \ncollection of such information. \n 11. To consult with state and local governments and \nprivate sector entities to ensure appropriate \nexchanges of information, including law \nenforcement-related information, relating to \nthreats of terrorism against the United States. \n 12. To ensure that: \n A. Any material received pursuant to this Act is \nprotected from unauthorized disclosure and \nhandled and used only for the performance of \nofficial duties. \n B. Any intelligence information under this Act is \nshared, retained, and disseminated consistent \nwith the authority of the Director of Central \nIntelligence to protect intelligence sources and \nmethods under the National Security Act of \n1947 (50 U.S.C. 401 et seq.) and related proce-\ndures and, as appropriate, similar authorities of \nthe Attorney General concerning sensitive law \nenforcement information. \n 13. To request additional information from other agen-\ncies of the federal government, state and local \ngovernment agencies, and the private sector relat-\ning to threats of terrorism in the United States, or \nrelating to other areas of responsibility assigned by \nthe Secretary, including the entry into cooperative \nagreements through the Secretary to obtain such \ninformation. \n 14. To establish and utilize, in conjunction with the \nchief information officer of the Department, a \nsecure communications and information technol-\nogy infrastructure, including data mining and \nother advanced analytical tools, in order to access, \nreceive, and analyze data and information in fur-\ntherance of the responsibilities under this section, \nand to disseminate information acquired and ana-\nlyzed by the Department, as appropriate. \n 15. To ensure, in conjunction with the chief informa-\ntion officer of the Department, any information \ndatabases and analytical tools developed or utilized \nby the Department: \n A. 
Are compatible with one another and with \nrelevant information databases of other \nagencies of the federal government. \n B. Treat information in such databases in a \nmanner that complies with applicable federal \nlaw on privacy. \n 16. To coordinate training and other support to the \nelements and personnel of the Department, other \nagencies of the federal government, and state and \nlocal governments that provide information to the \nDepartment, or are consumers of information pro-\nvided by the Department, in order to facilitate the \nidentification and sharing of information revealed \nin their ordinary duties and the optimal utilization \nof information received from the Department. \n 17. To coordinate with elements of the intelligence \ncommunity and with federal, state, and local law \nenforcement agencies, and the private sector, as \nappropriate. \n 18. To provide intelligence and information analysis \nand support to other elements of the Department. \n 19. To perform such other duties relating to such \nresponsibilities as the Secretary may provide. \n E-Government Act of 2002 (PL 107-347) \n The E-Government Act of 2002 establishes a Federal \nChief Information Officers Council to oversee govern-\nment information and services, and creation of a new \n" }, { "page_number": 700, "text": "Chapter | 38 Homeland Security\n667\nOffice of Electronic Government within the Office of \nManagement and Budget. 8 The purposes of the Act are 9 : \n ● To provide effective leadership of federal \ngovernment efforts to develop and promote \nelectronic government services and processes by \nestablishing an Administrator of a new Office \nof Electronic Government within the Office of \nManagement and Budget. \n ● To promote use of the Internet and other information \ntechnologies to provide increased opportunities for \ncitizen participation in government. \n ● To promote interagency collaboration in providing \nelectronic government services, where this \ncollaboration would improve the service to citizens \nby integrating related functions, and in the use of \ninternal electronic government processes, where \nthis collaboration would improve the efficiency and \neffectiveness of the processes. \n ● To improve the ability of the government to achieve \nagency missions and program performance goals. \n ● To promote the use of the Internet and emerging \ntechnologies within and across government agencies \nto provide citizen-centric government information \nand services. \n ● To reduce costs and burdens for businesses and other \ngovernment entities. \n ● To promote better informed decision-making by \npolicy makers. \n ● To promote access to high quality government \ninformation and services across multiple \nchannels. \n ● To make the federal government more transparent \nand accountable. \n ● To transform agency operations by utilizing, where \nappropriate, best practices from public and private \nsector organizations. \n ● To provide enhanced access to government informa-\ntion and services in a manner consistent with laws \nregarding protection of personal privacy, national \nsecurity, records retention, access for persons with \ndisabilities, and other relevant laws. \n Title III of the Act is known as the Federal \nInformation Security Management Act of 2002. 
This act \napplies to the national security systems, that include any \ninformation systems used by an agency or a contractor \nof an agency involved in intelligence activities; cryptol-\nogy activities related to the nation’s security; command \nand control of military equipment that is an integral \npart of a weapon or weapons system or is critical to the \ndirect fulfillment of military or intelligence missions. \nNevertheless, this definition does not apply to a system \nthat is used for routine administrative and business appli-\ncations (including payroll, finance, logistics, and person-\nnel management applications). The purposes of this Title \nare to: \n ● Provide a comprehensive framework for ensuring the \neffectiveness of information security controls over \ninformation resources that support federal operations \nand assets. \n ● Recognize the highly networked nature of the current \nfederal computing environment and provide effective \ngovernment-wide management and oversight of \nthe related information security risks, including \ncoordination of information security efforts \nthroughout the civilian, national security, and law-\nenforcement communities. \n ● Provide for development and maintenance of \nminimum controls required to protect federal \ninformation and information systems. \n ● Provide a mechanism for improved oversight of \nfederal agency information security programs. \n ● Acknowledge that commercially developed \ninformation security products offer advanced, \ndynamic, robust, and effective information security \nsolutions, reflecting market solutions for the \nprotection of critical information infrastructures \nimportant to the national defense and economic \nsecurity of the nation that are designed, built, and \noperated by the private sector. \n ● Recognize that the selection of specific technical \nhardware and software information security solutions \nshould be left to individual agencies from among \ncommercially developed products. \n 2. HOMELAND SECURITY PRESIDENTIAL \nDIRECTIVES \n Presidential directives are issued by the National \nSecurity Council and are signed or authorized by the \nPresident. 9 A series of Homeland Security Presidential \n 8 “ E-Government Act of 2002, ” U.S. Government Printing Offi ce, http://\nfrwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname \u0003 107_cong_pub-\nlic_laws & docid \u0003 f:publ347.107.pdf (downloaded 10/20/2008). \n 9 “ Presidential directives, ” Wikipedia, http://en.wikipedia.org , 2008 \n(downloaded 10/24/2008). \n" }, { "page_number": 701, "text": "PART | VI Physical Security\n668\nDirectives (HSPDs) were issued by President George W. \nBush on matters pertaining to Homeland Security 10 : \n ● HSPD 1: Organization and Operation of the \nHomeland Security Council. Ensures coordination \nof all homeland security-related activities among \nexecutive departments and agencies and promotes \nthe effective development and implementation of all \nhomeland security policies. \n ● HSPD 2: Combating Terrorism Through Immigration \nPolicies. Provides for the creation of a task force \nwhich will work aggressively to prevent aliens who \nengage in or support terrorist activity from entering \nthe United States and to detain, prosecute, or deport \nany such aliens who are within the United States. \n ● HSPD 3: Homeland Security Advisory System. 
\nEstablishes a comprehensive and effective means \nto disseminate information regarding the risk of \nterrorist acts to federal, state, and local authorities \nand to the American people. \n ● HSPD 4: National Strategy to Combat Weapons \nof Mass Destruction. Applies new technologies, \nincreased emphasis on intelligence collection and \nanalysis, strengthens alliance relationships, and \nestablishes new partnerships with former adversaries \nto counter this threat in all of its dimensions. \n ● HSPD 5: Management of Domestic Incidents. \nEnhances the ability of the United States to manage \ndomestic incidents by establishing a single, \ncomprehensive national incident management \nsystem. \n ● HSPD 6: Integration and Use of Screening \nInformation. Provides for the establishment of the \nTerrorist Threat Integration Center. \n ● HSPD 7: Critical Infrastructure Identification, \nPrioritization, and Protection. Establishes a national \npolicy for federal departments and agencies \nto identify and prioritize United States critical \ninfrastructure and key resources and to protect them \nfrom terrorist attacks. \n ● HSPD 8: National Preparedness. Identifies steps for \nimproved coordination in response to incidents. This \ndirective describes the way federal departments and \nagencies will prepare for such a response, including \nprevention activities during the early stages of a \nterrorism incident. This directive is a companion to \nHSPD-5. \n ● HSPD 8 Annex 1: National Planning. Further \nenhances the preparedness of the United States by \nformally establishing a standard and comprehensive \napproach to national planning. \n ● HSPD 9: Defense of United States Agriculture and \nFood. Establishes a national policy to defend the \nagriculture and food system against terrorist attacks, \nmajor disasters, and other emergencies. \n ● HSPD 10: Biodefense for the 21st Century. Provides \na comprehensive framework for our nation’s \nBiodefense. \n ● HSPD 11: Comprehensive Terrorist-Related \nScreening Procedures. Implements a coordinated \nand comprehensive approach to terrorist-related \nscreening that supports homeland security, at home \nand abroad. This directive builds upon HSPD 6. \n ● HSPD 12: Policy for a Common Identification \nStandard for Federal Employees and Contractors. \nEstablishes a mandatory, government-wide standard \nfor secure and reliable forms of identification issued \nby the federal government to its employees and \ncontractors (including contractor employees). \n ● HSPD 13: Maritime Security Policy. Establishes \npolicy guidelines to enhance national and homeland \nsecurity by protecting U.S. maritime interests. \n ● HSPD 15: U.S. Strategy and Policy in the War on \nTerror. \n ● HSPD 16: Aviation Strategy. Details a strategic \nvision for aviation security while recognizing \nongoing efforts, and directs the production of \na National Strategy for Aviation Security and \nsupporting plans. \n ● HSPD 17: Nuclear Materials Information Program. \n ● HSPD 18: Medical Countermeasures against \nWeapons of Mass Destruction. Establishes policy \nguidelines to draw upon the considerable potential \nof the scientific community in the public and \nprivate sectors to address medical countermeasure \nrequirements relating to CBRN threats. \n ● HSPD 19: Combating Terrorist Use of Explosives in \nthe United States. 
Establishes a national policy, and \ncalls for the development of a national strategy and \nimplementation plan, on the prevention and detection \nof, protection against, and response to terrorist use of \nexplosives in the United States. \n ● HSPD 20: National Continuity Policy. Establishes \na comprehensive national policy on the continuity \nof federal government structures and operations \nand a single National Continuity Coordinator \nresponsible for coordinating the development and \nimplementation of federal continuity policies. \n 10 “ Homeland Security presidential directives, ” Homeland Security, \nhttps://www.drii.org/professional_prac/profprac_appendix.html#\nBUSINESS_CONTINUITY_PLANNING_INFORMATION , \n2008 \n(downloaded 10/24/2008). \n" }, { "page_number": 702, "text": "Chapter | 38 Homeland Security\n669\n ● HSPD 20 Annex A: Continuity Planning. Assigns \nexecutive departments and agencies to a category \ncommensurate with their COOP/COG/ECG \nresponsibilities during an emergency. \n ● HSPD 21: Public Health and Medical Preparedness. \nEstablishes a national strategy that will enable a level \nof public health and medical preparedness sufficient \nto address a range of possible disasters. \n ● HSPD 23: National Cyber Security Initiative. \n ● HSPD 24: Biometrics for Identification and \nScreening to Enhance National Security. Establishes \na framework to ensure that federal executive \ndepartments use mutually compatible methods \nand procedures regarding biometric information \nof individuals, while respecting their information \nprivacy and other legal rights. \n 3 . ORGANIZATIONAL ACTIONS \n These laws and homeland security presidential direc-\ntives called for deep and fundamental organizational \nchanges to the executive branch of the government. \nThe Homeland Security Act of 2002 established a new \nExecutive Branch agency, the U.S. Department of Home-\nland Security (DHS), and consolidated the operations of \n22 existing federal agencies. 11 This Department’s over-\nriding and urgent missions are (1) to lead the unified \nnational effort to secure the country and preserve our \nfreedoms, and (2) to prepare for and respond to all haz-\nards and disasters. The citizens of the United States must \nhave the utmost confidence that the Department can exe-\ncute both of these missions. \n Faced with the challenge of strengthening the com-\nponents to function as a unified Department, DHS must \ncoordinate centralized, integrated activities across com-\nponents that are distinct in their missions and opera-\ntions. Thus, sound and cohesive management is the key \nto department-wide and component-level strategic goals. \nWe seek to harmonize our efforts as we work diligently \nto accomplish our mission each and every day. \n The Department of Homeland Security is headed by \nthe Secretary of Homeland Security. It has various depart-\nments, including management, science and technology, \nhealth affairs, intelligence and analysis, citizenship and \nimmigration services, and national cyber security center. \n Department of Homeland Security \nSubcomponents \n There are various subcomponents of The Department of \nHomeland Security that are involved with Information \nTechnology Security. 12 These include the following: \n ● The Office of Intelligence and Analysis is \nresponsible for using information and intelligence \nfrom multiple sources to identify and assess current \nand future threats to the United States. 
\n ● The National Protection and Programs Directorate \nhouses offices of the Cyber Security and \nCommunications Department. \n ● The Directorate of Science and Technology is \nresponsible for research and development of various \ntechnologies, including information technology. \n ● The Directorate for Management is responsible for \ndepartment budgets and appropriations, expenditure \nof funds, accounting and finance, procurement, \nhuman resources, information technology systems, \nfacilities and equipment, and the identification and \ntracking of performance measurements. \n ● The Office of Operations Coordination works \nto deter, detect, and prevent terrorist acts by \ncoordinating the work of federal, state, territorial, \ntribal, local, and private-sector parties and by \ncollecting and turning information from a variety \nof sources. It oversees the Homeland Security \nOperations Center (HSOC), which collects and fuses \ninformation from more than 35 federal, state, local, \ntribal, territorial, and private-sector agencies. \n State and Federal Organizations \n There are various organizations that support information \nsharing at the state and the federal levels. The Department \nof Homeland Security through the Office of Intelligence \nand Analysis provides personnel with operational and \nintelligence skills. The support to the state agencies is \ntailored to the unique needs of the locality and serves to: \n ● Help the classified and unclassified information flow \n ● Provide expertise \n ● Coordinate with local law enforcement and other \nagencies \n ● Provide local awareness and access \n 11 “ Public Health Security, Bioterrorism Preparedness & Response \nAct of 2002, ” U.S. Government Printing Offi ce, http://frwebgate.\naccess.gpo.gov/cgi-bin/getdoc.cgi?dbname \u0003 107_cong_public_\nlaws & docid \u0003 f:publ188.107 (downloaded 10/20/2008). \n 12 “ Public Health Security, Bioterrorism Preparedness & Response \nAct of 2002, ” U.S. Government Printing Offi ce, http://frwebgate.\naccess.gpo.gov/cgi-bin/getdoc.cgi?dbname \u0003 107_cong_public_\nlaws & docid \u0003 f:publ188.107 (downloaded 10/20/2008). \n" }, { "page_number": 703, "text": "PART | VI Physical Security\n670\n As of March 2008, there were 58 fusion centers \naround the country. The Department has provided more \nthan $254 million from FY 2004 – 2007 to state and local \ngovernments to support the centers. \n The Homeland Security Data Network (HSDN), \nwhich allows the federal government to move informa-\ntion and intelligence to the states at the Secret level, is \ndeployed at 19 fusion centers. Through HSDN, fusion \ncenter staff can access the National Counterterrorism \nCenter (NCTC), a classified portal of the most current \nterrorism-related information. \n There are various organizations at the state levels \nthat support the homeland security initiatives. These \norganizations vary in their size and budget from very \nlarge independently run departments to a department \nthat is a part of a larger related department. As an exam-\nple, California has the Office of Management Services \nthat is responsible for any emergencies in the state of \nCalifornia. The Governor’s Office of Homeland Security \nis responsible for the coordination among different \ndepartments to secure the state against potential terror-\nist threats. Very specific to IT security, the California \nOffice of Information Security and Privacy Protection is \nfunctional . 
\n The Governor’s Office of Homeland \nSecurity \n The Governor’s Office of Homeland Security (OHS) \nacts as the Cabinet-level state office for the prevention \nof and preparation for a potential terrorist event. 13 OHS \nserves a diverse set of federal, state, local, private sector, \nand tribal entities by taking an “ all-hazards ” approach to \nreducing risk and increasing responder capabilities. \n Because California is prone to floods, fires, and earth-\nquakes in addition to the potential for an attack using \nmanmade weapons of mass destruction, OHS is commit-\nted to contributing to a comprehensive, well-planned all-\nhazards strategy to prevent, prepare for, respond to, and \nrecover from any possible emergency. OHS is responsi-\nble for several key state functions, including 14 : \n ● Analysis and dissemination of threat-related \ninformation \n ● Protection of California’s critical infrastructure \n ● Management of the state’s homeland security \ngrants \n ● S/B training and exercising of first responders for \nterrorism events \n California Office of Information Security \nand Privacy Protection \n The California Office of Information Security and \nPrivacy Protection (OISPP) unites consumer privacy \nprotection with the oversight of government’s responsi-\nble management of information. OISPP provides serv-\nices to consumers, recommends practices to business, \nand provides policy direction, guidance, and compliance \nmonitoring to state government. 15 \n OISPP was established within the State and Consumer \nServices Agency by Chapter 183 of the Statutes of 2007 \n(Senate Bill 90), effective January 1, 2008. This legis-\nlation merged the Office of Privacy Protection, which \nopened in 2001 in the Department of Consumer Affairs \nwith a mission of identifying consumer problems in the \nprivacy area and encouraging the development of fair \ninformation practices, and the State Information Security \nOffice, established within the Department of Finance \nwith a mission of overseeing information security, risk \nmanagement, and operational recovery planning within \nstate government. 16 \n Private Sector Organizations for \nInformation Sharing \n Intelligence sharing and analysis groups have been \nset up in many private infrastructure industries. As an \nexample, National Electric Reliability Council has such \na group, Electricity Sector Information Sharing and \nAnalysis Center (ESISAC), which serves the electric-\nity sector by facilitating communications between sec-\ntor participants, federal governments, and other critical \ninfrastructure organizations. It is the job of the ESISAC \nto promptly disseminate threat indications, analyses, and \nwarnings, together with interpretations, to assist electric-\nity sector participants take protective actions. Similarly, \nmany other organizations in other infrastructure sectors \nare also members of an ISAC. \n 13 “ The Governor’s Offi ce of Homeland Security (OHS), ” www.\nhomeland.ca.gov/ (downloaded 10/24/2008). \n 14 “ The Governor’s Offi ce of Homeland Security (OHS), ” www.\nhomeland.ca.gov/ (downloaded 10/24/2008). \n 15 “ California Offi ce of Information Security and Privacy Protection, ” \n www.oispp.ca.gov/ (downloaded 10/20/2008). \n 16 “ California Offi ce of Information Security and Privacy Protection, ” \n www.oispp.ca.gov/ (downloaded 10/20/2008). 
\n" }, { "page_number": 704, "text": "Chapter | 38 Homeland Security\n671\n There are other organizations that share informa-\ntion among the member companies on issues related to \nincident response (see sidebar, “ National Commission \non Terrorist Attacks Upon the United States [The 9-11 \nCommission] ” ). These organizations include FIRST, the \nForum of Incident Response and Security Teams, 17 which \nhas as its members major corporations from all over the \nworld. The FBI encourages organizations from the pri-\nvate sector to become members of InfraGard to encour-\nage exchange of information among the members. 18 \n 17 “ Forum of incident response and security teams, ” www.fi rst.org/ \ndownloaded 10/20/2008). \n 18 InfraGard, www.infragard.net/ (downloaded 10/20/2008). \n 19 “ National Commission on Terrorist Attacks upon the United States \nAct of 2002, ” www.9-11commission.gov/about/107-306.pdf (down-\nloaded 10/20/2008). \n 20 “ The 9-11 Commission Report, ” National Commission on Terrorist \nAttacks upon the United States, http://govinfo.library.unt.edu/911/\nreport/911Report.pdf (downloaded 10/20/2008). \n 21 “ National Commission on Terrorist Attacks upon the United States \nAct of 2002, ” www.9-11commission.gov/about/107-306.pdf (down-\nloaded 10/20/2008). \n 22 “ National Commission on Terrorist Attacks upon the United States \nAct of 2002, ” www.9-11commission.gov/about/107-306.pdf (down-\nloaded 10/20/2008). \n National Commission on Terrorist Attacks Upon the United \nStates (The 9-11 Commission) \n Congress charted the National Commission on Terrorist \nAttacks Upon the United States (known as the 9-11 \nCommission) by Public Law 107-306, signed by the \nPresident on November 27, 2002, to provide a “ full and \ncomplete accounting ” of the attacks of September 11, 2001 \nand recommendations as to how to prevent such attacks in \nthe future. 19 On July 22, 2004, the 9-11 Commission issued \nits final report, which included 41 wide-ranging recom-\nmendations to help prevent future terrorist attacks. 20 Many \nof these recommendations were put in place with the pas-\nsage of the Intelligence Reform and Terrorism Prevention \nAct of 2004 (PL 108-458), which brought about significant \nreorganization of the intelligence community. Soon after \nthe Democratic Party came into the majority in the House \nof Representatives, the 110th Congress passed another act, \nImplementing Recommendations of the 9-11 Commission \nAct of 2007 (PL 110-53). This section is subdivided into the \nfollowing four subsections: \n 1. Creation of the National Commission on Terrorist \nAttacks Upon the United States (the 9-11 Commission) \n 2. Final Report of the National Commission on Terrorist \nAttacks Upon the United States (the 9-11 Commission \nReport) \n 3. Intelligence Reform and Terrorism Prevention Act of \n2004 (PL 108-458) \n 4. Implementing \nRecommendations \nof \nthe \n9-11 \nCommission Act of 2007 (PL 110-53) \n Creation of the National Commission on Terrorist Attacks \nUpon the United States (The 9-11 Commission) \n Congress created the National Commission on Terrorist \nAttacks Upon the United States (known as the 9-11 \nCommission) to provide a “ full and complete accounting ” of \nthe terrorist attacks and recommendations as to how \nto prevent such attacks in the future. 
21 Specifically, the \nCommission was required to investigate “ facts and cir-\ncumstances relating to the terrorist attacks of September \n11, 2001, ” including those relating to intelligence agen-\ncies; law-enforcement agencies; diplomacy; immigration, \nnonimmigrant visas, and border control; the flow of assets \nto terrorist organizations; commercial aviation; the role \nof congressional oversight and resource allocation; and \nother areas determined relevant by the Commission for its \ninquiry. \n The Commission was composed of 10 members, of \nwhom not more than five members of the Commission \nwere from the same political party. \n In response to the requirements under law, the \nCommission organized work teams to address each of the \nfollowing eight topics 22 : \n 1. Al Qaeda and the organization of the 9-11 attack \n 2. Intelligence collection, analysis, and management \n(including oversight and resource allocation) \n 3. International counterterrorism policy, including states \nthat harbor or harbored terrorists, or offer or offered ter-\nrorists safe havens \n 4. Terrorist financing \n 5. Border security and foreign visitors \n 6. Law enforcement and intelligence collection inside the \nUnited States \n 7. Commercial aviation and transportation security, includ-\ning an Investigation into the circumstances of the four \nhijackings \n 8. The immediate response to the attacks at the national, \nstate, and local levels, including issues of continuity of \ngovernment. \n" }, { "page_number": 705, "text": "PART | VI Physical Security\n672\n Final Report of the National Commission on Terrorist \nAttacks Upon the United States (The 9-11 Commission \nReport) \n The 9-11 Commission interviewed more than 1000 indi-\nviduals in 10 countries and held at least 10 days of public \nhearings, receiving testimony from more than 110 federal, \nstate, and local officials and experts from the private sec-\ntor. The Commission issued three subpoenas to govern-\nment agencies: the Federal Aviation Administration (FAA), \nthe Department of Defense, and the City of New York. On \nJuly 22, 2004, the 9-11 Commission issued its final report, \nwhich included 41 wide-ranging recommendations to help \nprevent future terrorist attacks. This report covers both gen-\neral and specific findings. Here is the summary of their gen-\neral findings: \n Since the plotters were flexible and resourceful, we can-\nnot know whether any single step or series of steps would \nhave defeated them. What we can say with confidence \nis that none of the measures adopted by the U.S. govern-\nment from 1998 to 2001 disturbed or even delayed the \nprogress of the al Qaeda plot. Across the government, \nthere were failures of imagination, policy, capabilities, and \nmanagement. 23 \n Imagination \n The most important failure was one of imagination. We do \nnot believe leaders understood the gravity of the threat. The \nterrorist danger from Bin Laden and al Qaeda was not a \nmajor topic for policy debate among the public, the media, \nor in Congress. Indeed, it barely came up during the 2000 \npresidential campaign. \n Al Qaeda’s new brand of terrorism presented challenges \nto U.S. governmental institutions that they were not well-\ndesigned to meet. 
Though top officials all told us that they \nunderstood the danger, we believe there was uncertainty \namong them as to whether this was just a new and espe-\ncially venomous version of the ordinary terrorist threat the \nUnited States had lived with for decades, or it was indeed \nradically new, posing a threat beyond any yet experienced. \n As late as September 4, 2001, Richard Clarke, the White \nHouse staffer long responsible for counterterrorism policy \ncoordination, asserted that the government had not yet \nmade up its mind how to answer the question: “ Is al Qaeda \na big deal? ” \n A week later came the answer. \n Terrorism was not the overriding national security con-\ncern for the U.S. government under either the Clinton or \nthe pre-9/11 Bush administration. \n The policy challenges were linked to this failure of imagi-\nnation. Officials in both the Clinton and Bush administrations \nregarded a full U.S. invasion of Afghanistan as practically \ninconceivable before 9/11. \n Capabilities \n Before 9/11, the United States tried to solve the al Qaeda \nproblem with the capabilities it had used in the last stages of \nthe Cold War and its immediate aftermath. These capabilities \nwere insufficient. Little was done to expand or reform them. \n The CIA had minimal capacity to conduct paramilitary \noperations with its own personnel, and it did not seek a \nlarge-scale expansion of these capabilities before 9/11. The \nCIA also needed to improve its capability to collect intel-\nligence from human agents. \n At no point before 9/11 was the Department of Defense \nfully engaged in the mission of countering al Qaeda, even \nthough this was perhaps the most dangerous foreign enemy \nthreatening the United States. \n America’s homeland defenders faced outward. North \nAmerican Aerospace Defense Command (NORAD) itself \nwas barely able to retain any alert bases at all. Its planning \nscenarios occasionally considered the danger of hijacked \naircraft being guided to American targets, but only aircraft \nthat were coming from overseas. \n The most serious weaknesses in agency capabilities were \nin the domestic arena. The FBI did not have the capability to \nlink the collective knowledge of agents in the field to national \npriorities. Other domestic agencies deferred to the FBI. \n FAA capabilities were weak. Any serious examination of \nthe possibility of a suicide hijacking could have suggested \nchanges to fix glaring vulnerabilities — expanding no-fly \nlists, searching passengers identified by the Computer \nAssisted Passenger Prescreening System (CAPPS) screening \nsystem, deploying federal air marshals domestically, hard-\nening cockpit doors, alerting air crews to a different kind of \nhijacking possibility than they had been trained to expect. \nYet the FAA did not adjust either its own training or training \nwith NORAD to take account of threats other than those \nexperienced in the past. \n Management \n The missed opportunities to thwart the 9/11 plot were also \nsymptoms of a broader inability to adapt the way govern-\nment manages problems to the new challenges of the \ntwenty-first century. Action officers should have been able \nto draw on all available knowledge about al Qaeda in the \ngovernment. Management should have ensured that infor-\nmation was shared and duties were clearly assigned across \nagencies, and across the foreign-domestic divide. \n There were also broader management issues with respect \nto how top leaders set priorities and allocated resources. 
\n 23 “ The 9-11 Commission Report, ” National Commission on Terrorist \nAttacks upon the United States, http://govinfo.library.unt.edu/911/\nreport/911Report.pdf (downloaded 10/20/2008). \n" }, { "page_number": 706, "text": "Chapter | 38 Homeland Security\n673\nFor instance, on December 4, 1998, Director of Central \nIntelligence (DCI), Tenet issued a directive to several CIA offi-\ncials and the Deputy Director of Central Intelligence (DDCI) \nfor Community Management, stating: “ We are at war. I want \nno resources or people spared in this effort, either inside \nCIA or the Community. ” The memorandum had little overall \neffect on mobilizing the CIA or the intelligence community. \nThis episode indicates the limitations of the DCI’s authority \nover the direction of the intelligence community, including \nagencies within the Department of Defense. \n The U.S. government did not find a way of pooling intel-\nligence and using it to guide the planning and assignment \nof responsibilities for joint operations involving entities as \ndisparate as the CIA, the FBI, the State Department, the \nmilitary, and the agencies involved in homeland security. \n Intelligence Reform and Terrorism Prevention Act of 2004 \n(PL 108-458) \n Many of the recommendations of the Final Report of the \nNational Commission on Terrorist Attacks Upon the United \nStates (The 9-11 Commission Report) were put into the \nIntelligence Reform and Terrorism Prevention Act of 2004. \nThis Act, divided into 10 titles, brought about significant \nreorganization of intelligence community and critical infra-\nstructures protection. 24 \n Title I — Reform of the Intelligence Community \n Special measures relating to the following subtitles were \ncreated: \n A. Establishment of Director of National Intelligence \n B. National Counterterrorism Center, National Counter \nProliferation Center, and National Intelligence Centers \n C. Joint Intelligence Community Council \n D. Improvement of education for the intelligence community \n E. Additional improvements of intelligence activities \n F. Privacy and civil liberties \n G. Conforming and other amendments \n H. Transfer, termination, transition, and other provisions \n I. Other matters \n Title II — Federal Bureau of Investigation \n Improvement of intelligence capabilities of the Federal \nBureau of Investigation \n Title III — Security Clearances \n Special measures relating to the security clearances have \nbeen created. \n Title IV — Transportation Security \n Special measures relating to the following subtitles were \ncreated: \n A. National strategy for transportation security \n B. Aviation security \n C. Air cargo security \n D. Maritime security \n E. General provisions \n Title V — Border Protection, Immigration, and Visa \nMatters \n Special measures relating to the following subtitles were \ncreated: \n A. Advanced Technology Northern Border Security Pilot \nProgram \n B. Border and immigration enforcement \n C. Visa requirements \n D. Immigration reform \n E. Treatment of aliens who commit acts of torture, extraju-\ndicial killings, or other atrocities abroad \n Title VI — Terrorism Prevention \n Special measures relating to the following subtitles were \ncreated: \n A. Individual terrorists as agents of foreign powers \n B. Money laundering and terrorist financing \n C. Money laundering abatement and financial antiterrorism \ntechnical corrections \n D. Additional enforcement tools \n E. Criminal history background checks \n F. Grand jury information sharing \n G. 
Providing material support to terrorism \n H. Stop Terrorist and Military Hoaxes Act of 2004 \n I. Weapons of Mass Destruction Prohibition Improvement \nAct of 2004 \n J. Prevention of Terrorist Access to Destructive Weapons \n K. Pretrial detention of terrorists \n Title VII — Implementation of 9-11 Commission \nRecommendations \n Special measures relating to the following subtitles were \ncreated: \n A. Diplomacy, foreign aid, and the military in the war on \nterrorism \n B. Terrorist travel and effective screening \n C. National preparedness \n D. Homeland security \n E. Public safety spectrum \n F. Presidential transition \n G. Improving international standards and cooperation to \nfight terrorist financing \n H. Emergency financial preparedness \n 24 “ Intelligence Reform and Terrorism Prevention Act of 2004, ” U.S. \nSenate Select Committee on Intelligence http://intelligence.senate.gov/\nlaws/pl108-458.pdf (downloaded 10/20/2008). \n" }, { "page_number": 707, "text": "PART | VI Physical Security\n674\n Title VIII — Other Matters \n Special measures relating to the following subtitles were \ncreated: \n A. Intelligence matters \n B. Department of homeland security matters \n C. Homeland security civil rights and civil liberties \nprotection \n Implementing Recommendations of the 9-11 Commission \nAct of 2007 (PL 110-53) \n Soon after the Democratic Party came into the majority in \nthe House of Representatives, the 110th Congress passed \nanother act, “ Implementing Recommendations of the 9-11 \nCommission Act of 2007 (PL 110-53, August 3, 2007). ” 25 \nApproximately a year after the passing of this law, the \nMajority Staffs of the Committees on Homeland and Foreign \nAffairs put its attention on the extent to which the law \nwas indeed implemented and issued a report on “ Wasted \nLessons of 9/11: How The Bush Administration Ignored the \nLaw and Squandered Its Opportunities to Make Our Country \nSafer. ” 26 \n This comprehensive Homeland Security legislation \nincluded provisions to strengthen the nation’s security \nagainst terrorism by requiring screening of all cargo placed \non passenger aircraft; securing mass transit, rail and bus \nsystems; assuring the scanning of all U.S.-bound maritime \ncargo; distributing Homeland Security grants based on \nrisk; creating a dedicated grant program to improve inter-\noperable radio communications; creating a coordinator for \nU.S. nonproliferation programs and improving international \ncooperation for interdiction of weapons of mass destruction; \ndeveloping better mechanisms for modernizing education \nin Muslim communities and Muslim-majority countries, and \ncreating a new forum for reform-minded members of those \ncountries; formulating coherent strategies for key countries; \nestablishing a common coalition approach on the treatment \nof detainees; and putting resources into making democratic \nreform an international effort, rather than a unilaterally U.S. \none. When President George W. Bush signed H.R. 1 into \nlaw on August 3, 2007 without any limiting statement, it \nseemed that the unfulfilled security recommendations of the \n9-11 Commission would finally be implemented. To ensure \nthat they were, over the past year the Majority staffs of the \nCommittees on Homeland Security and Foreign Affairs have \nconducted extensive oversight to answer the question, How \nis the Bush Administration doing on fulfilling the require-\nments of the “ Implementing Recommendations of the 9-11 \nCommission Act of 2007 (P.L. 110-53)? 
The Majority staffs \nof the two Committees prepared this report to summarize \ntheir findings. While the Majority staffs of the Committees \nfound that the Bush Administration has taken some steps to \ncarry out the provisions of the Act, this report focuses on the \nAdministration’s performance with respect to key statutory \nrequirements in the following areas: (1) aviation security; \n(2) rail and public transportation security; (3) port security; \n(4) border security; (5) information sharing; (6) privacy and \ncivil liberties; (7) emergency response; (8) biosurveillance; \n(9) private sector preparedness; and (10) national security. \nIn each of the 25 individual assessments in this report, a \nstatus update is provided on the Bush Administration’s per-\nformance on these key provisions. The status of the key \nprovisions identified in the report, help explain why the \nreport is entitled “ Wasted Lessons of 9/11: How the Bush \nAdministration Has Ignored the Law and Squandered Its \nOpportunities to Make Our Country Safer. ” 27 \n Based on this report, it is clear that the Bush \nAdministration did not deliver on myriad critical home-\nland and national security mandates set forth in the \n “ Implementing the Recommendations of 9-11 Commission \nAct of 2007. ” Members of the Committees were alarmed \nthat the Bush Administration did not make more progress \non implementing these key provisions. \n 25 “ Implementing Recommendations of the 9-11 Commission Act of \n2007, ” The White House, www.whitehouse.gov/news/releases/2007/08\n/20070803-1.html (downloaded 10/24/2008). \n 26 “ Wasted lessons of 9/11: How the bush administration ignored \nthe law and squandered its opportunities to make our country safer, ” \n The \nGavel , \n http://speaker.house.gov/blog/?p \u0003 1501 \n(downloaded \n10/24/2008). \n 27 “ Wasted lessons of 9/11: How the bush administration ignored \nthe law and squandered its opportunities to make our country safer, ” \n The \nGavel , \n http://speaker.house.gov/blog/?p \u0003 1501 \n(downloaded \n10/24/2008) \n 4. CONCLUSION \n Within about a year after the terrorist attacks, Congress \npassed various new laws, such as The USA PATRIOT \nAct, Aviation and Transportation Security Act, Enhanced \nBorder Security and Visa Entry Reform Act, Public \nHealth Security, Bioterrorism Preparedness & Response \nAct, Homeland Security Act, and E-Government Act, \nand introduced sweeping changes to homeland security \nprovisions and to the existing security organizations. The \nexecutive branch of the government also issued a series \nof Homeland Security Presidential Directives (HSPDs) \nto maintain domestic security. These laws and direc-\ntives are comprehensive and contain detailed provisions \nto make the United States secure. For example, HSPD \n5 enhances the ability of the United States to manage \ndomestic incidents by establishing a single, comprehen-\nsive national incident management system. \n" }, { "page_number": 708, "text": "Chapter | 38 Homeland Security\n675\n These laws and homeland security presidential \ndirectives call for deep and fundamental organizational \nchanges to the executive branch of the government. For \nexample, the Homeland Security Act of 2002 established \na new Executive Branch agency, the U.S. Department \nof Homeland Security (DHS), and consolidated the \noperations of 22 existing federal agencies. Intelligence-\nsharing and analysis groups have been set up in many \nprivate infrastructure industries as well. 
For example, the \nNational Electric Reliability Council has such a group, \nthe Electricity Sector Information Sharing and Analysis \nCenter (ESISAC), which serves the electricity sector by \nfacilitating communications between sector participants, \nfederal governments, and other critical infrastructure \norganizations. \n Congress charted the “ National Commission on \nTerrorist Attacks Upon the United States (The 9-11 \nCommission) ” on November 27, 2002, to provide a “ full \nand complete accounting ” of the attacks of September \n11, 2001, and recommendations as to how to prevent \nsuch attacks in the future. On July 22, 2004, the 9-11 \nCommission issued its final report, which included 41 \nwide-ranging recommendations to help prevent future \nterrorist attacks. Many of these recommendations were \nput in place with the passage of the “ Intelligence Reform \nand Terrorism Prevention Act ” and “ Implementing \nRecommendations of the 9-11 Commission Act of \n2007. ” \n About a year after the passing of this law, the \nMajority Staffs of the Committees on Homeland and \nForeign Affairs drew its attention on the extent to which \nthe law was indeed implemented and issued a report on \n “ Wasted Lessons of 9/11: How the Bush Administration \nIgnored the Law and Squandered Its Opportunities to \nMake Our Country Safer. ” This report demonstrates that \nit is clear that the Bush Administration did not deliver on \nmyriad critical homeland and national security mandates \nset forth in the “ Implementing the 9-11 Commission \nRecommendations Act of 2007. ” Fulfilling the unfin-\nished business of the 9-11 Commission will most cer-\ntainly be a major focus of President Obama, as many of \nthe statutory requirements are to be met in stages. \n" }, { "page_number": 709, "text": "This page intentionally left blank\n" }, { "page_number": 710, "text": "677\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Information Warfare \n Jan Eloff \n University of Pretoria \n Anna Granova \n University of Pretoria \n Chapter 39 \n The times we live in are called the Information Age for \nvery good reasons: Today information is probably worth \nmuch more than any other commodity. Globalization, the \nother important phenomenon of the times we live in, has \ntaken the value of information to new heights. Citizens of \none country now feel entitled to know exactly what is hap-\npening in other countries around the globe. To this end, the \ncapabilities of the Internet have been put to use and people \nhave become accustomed to receiving information about \neveryone and everything as soon as it becomes available. \n October 20, 1969, marked the first message sent on the \nInternet, 1 and almost 40 years on we cannot imagine our \nlives without it. Internet banking, online gaming, and online \nshopping have become just as important to some as food \nand sleep. As the world has become more dependent on \nautomated environments, interconnectivity, networks, and \nthe Internet, instances of abuse and misuse of information \ntechnology infrastructures have increased proportionately. 2 \nSuch abuse has, unfortunately, not been limited only to the \nabuse of business information systems and Web sites but \nover time has also penetrated the military domain of state \nsecurity. 
Today this penetration of governmental IT infra-\nstructures, including, among others, the military domain, \nis commonly referred to as information warfare . However, \nthe concept of information warfare is not yet clearly defined \nand understood. Furthermore, information warfare is a \nmultidisciplinary field requiring expertise from technical, \nlegal, offensive, and defensive perspectives. Information \nsecurity professionals are challenged to respond to informa-\ntion warfare issues in a professional and knowledgable way. \n The purpose of this chapter is to define information \nwarfare (IW), discuss its most common tactics, weapons, \nand tools, compare IW terrorism with conventional war-\nfare, and address the issues of liability and the available \nlegal remedies under international law. To have this dis-\ncussion, a proper model and definition of IW first needs \nto be established. \n 1. INFORMATION WARFARE MODEL \n The author proposes a model for IW by mapping impor-\ntant concepts regarding IW on a single diagrammatic \nrepresentation (see Figure 39.1 ). This aids in simplifying \na complex concept as well as providing a holistic view \non the phenomenon. To this end, this chapter addresses \nthe four axes of IW: technical, legal, offensive, and \ndefensive, as depicted in Figure 39.1 . \n 1 An Internet History (2008), www.services.ex.ac.uk/cmit/modules/\nthe_internet/webct/ch-history.html , accessed on 19 February 2008. \n 2 Symantec Global Internet Security Threat Report Trends for July –\n December 07 (2008) Vol. 13, published April 2008 available at http://\neval.symantec.com/mktginfo/enterprise/white_papers/b-whitepaper_\ninternet_security_threat_report_xiii_04-2008.en-us.pdf , accessed on 21 \nApril 2008. \nTechnical\nLegal\nDefensive\nOffensive\n FIGURE 39.1 A perspective on IW. \n" }, { "page_number": 711, "text": "PART | VI Physical Security\n678\n The technical side of IW deals with technical exploits on \none side and defensive measures on the other. As is appar-\nent from Figure 39.1 , these range from the most destructive \noffensive strategies, such as a distributed denial-of-service \n(DDoS) attack, to various workstation emergency response \nteams, such as US-CERT. \n Considered from a legal perspective, IW can range \nfrom criminal prosecutions in international courts to use \nof force in retaliation. Therefore, the four axes of IW \ncontinuously interact and influence each other, as will \nbecome clearer from the discussion that follows. \n 2. INFORMATION WARFARE DEFINED \n The manner in which war is being conducted has \nevolved enormously, 3 and IW has been accepted as \na new d irection in military operations. 4 A number of \ndefinitions are relevant for the purposes of this chapter. \nSome authors 5 maintain that IW covers “ the full range of \ncompetitive information operations from destroying IT \nequipment to subtle perception management, and from \nindustrial espionage to marketing. ” If one regards the \nmore “ military ” definition of IW, one could say that IW \nis “ a subset of information operations ” — in other words \n “ actions taken to adversely affect information and infor-\nmation systems while defending one’s own information \nand information systems. 
” 6 \n The UN Secretary-General’s report on Development in \nthe Field on Information and Telecommunications in the \ncontext of International Security describes IW as “ actions \naimed at achieving information superiority by executing \nmeasures to exploit, corrupt, destroy, destabilise, or dam-\nage the enemy’s information and its functions. ” 7 This \ndefinition is very similar to one of the more recent and \naccepted definitions found in literature that states that \nIW is “ actions taken in support of objectives that influ-\nence decision-makers by affecting the information and/or \ninformation systems of others while protecting your own \ninformation and/or information systems. ” 8 If one, how-\never, looks at IW in purely a military light, the following \ntechnical definition seems to be the most appropriate: \n “ The broad class of activities aimed at l everaging data, \ninformation and knowledge in support of military goals. ” 9 \n In light of the preceding, it is clear that IW is all about \ninformation superiority because “ the fundamental weapon \nand target of IW is information. ” 10 This being so, some \nauthors 11 outline the basic strategies of IW as follows: \n 1. Deny access to information \n 2. Disrupt/destroy data \n 3. Steal data \n 4. Manipulate data to change its context or its \nperception \n A slightly different perspective on the aims of IW is \nperhaps to see it as “ an attack on information systems for \nmilitary advantage using tactics of destruction, denial, \nexploitation or deception. ” 12 \n With these definitions in mind, it is now appropriate \nto consider whether IW is a concept that has been cre-\nated by enthusiasts such as individual hackers to impress \nthe rest of the world’s population or is, in fact, part of \ndaily military operations. \n 3. IW: MYTH OR REALITY? \n Groves once said: “ . . . nowhere it is safe … no one \nknows of the scale of the threat, the silent deadly menace \nthat stalks the network. ” 13 With the growing risk of ter-\nrorists and other hostile entities engaging in missions of \nsabotaging, either temporarily or permanently, important \npublic infrastructures through cyber attacks, the number \nof articles 14 on the topic has grown significantly. To \nunderstand the gravity of IW and its consequences, the \nfollowing real-life examples need to be considered. At the \noutset, however, it is important to mention that the reason \nthere is so little regulation of computer-related activities \nwith specific reference to IW both on national and inter-\nnational planes is that lawyers are very reluctant to ven-\nture into the unknown. The following examples, however, \ndemonstrate that IW has consistently taken place since at \n 6 Schmitt, “ Wired Warfare-Workstation Network Attack and jus \nin bello ” 2002 International Review of the Red Cross 365. See \nalso the defi nition by Goldberg available on line at http://psycom.\nnet/iwar.2.html . \n 7 UNG.A.Res A/56/164 dated 3 July 2001. \n 8 Thornton, R, Asymmetric Warfare – Threat and Response in the \n20-First Century 2007. \n 9 Vacca, J. R., Computer Forensics: Computer Crime Scene Investigation \n(2nd Edition) Charles River Media, 2005. \n 10 Hutchinson and Warren, IW – Corporate Attack and Defence in a \nDigital World (2001) , p . xviiii . \n 11 Hutchinson and Warren, IW – Corporate Attack and Defence in a \nDigital World (2001) p. xviiii . \n 12 Vacca, J. R., Computer Forensics: Computer Crime Scene Investigation \n(2nd Edition) Charles River Media, 2005. \n 13 Lloyd, I. 
J., Information Technology Law Oxford University Press \n181 (5th Edition) 2008. \n 14 Groves, The War on Terrorism: Cyberterrorist be Ware, Informational \nManagement Journal , Jan-Feb 2002. \n 5 Hutchinson and Warren, IW – Corporate Attack and Defence in a \nDigital World (2001) and XVIIII . \n 4 Rogers “ Protecting America against Cybeterrorism ” 2001 United \nStates Foreign Policy Agenda , 15. \n 3 Schmitt, “ Wired Warfare-Workstation Network Attack and jus in \nbello ” 2002 International Review of the Red Cross , 365. \n" }, { "page_number": 712, "text": "Chapter | 39 Information Warfare\n679\nleast 1991. One of the first IW incidents was recorded in \n1991 during the first Gulf War, where IW was used by the \nUnited States against Iraq. 15 \n In 1998 an Israeli national hacked into the government \nworkstations of the United States. 16 In 1999, a number of \ncyber attacks took place in Kosovo. During the attacks \nthe Serbian and NATO Web sites were taken down with \nthe aim of interfering with and indirectly influencing the \npublic’s perception and opinion of the conflict. 17 \n These cyber attacks were executed for different reasons: \nRussians hacked U.S. and Canadian websites “ in protest ” \nagainst NATO deployment, 18 Chinese joined the online war \nbecause their embassy in Belgrade was bombed by NATO, 19 \nand U.S. nationals were paralyzing the White House 20 and \nNATO 21 Web sites “ for fun. ” In 2000, classified information \nwas distributed on the Internet, 22 and attacks were launched \non NASA’s laboratories, 23 the U.S. Postal Service, and the \nCanadian Defense Department. 24 As early as 2001, detected \nintrusions into the U.S. Defense Department’s Web site \nnumbered 23,662. 25 Furthermore, there were 1300 pending \ninvestigations into activities “ ranging from criminal activity \nto national security intrusions. ” 26 Hackers also attempted to \nabuse the U.S. Federal Court’s database 27 to compromise \nthe peace process in the Middle East. 28 \n In 2002, incidents of cyber terrorism in Morocco, \nSpain, Moldova, and Georgia 29 proved once again that \na “ hacker influenced by politics is a terrorist, ” illustrated \nby more than 140,000 attacks in less than 48 hours alleg-\nedly executed by the “ G-Force ” (Pakistan) 30 . During this \nperiod, a series of convictions on charges of conspiracy, \nthe destruction of energy facilities, 31 the destruction of tel-\necommunications facilities, and the disabling of air naviga-\ntion facilities, 32 as well as cases of successful international \nluring and subsequent prosecutions, were recorded. 33 \n In the second half of 2007, 499,811 new malicious \ncode threats were detected, which represented a 571% \nincrease from the same period in 2006. 34 With two thirds \nof more than 1 million identified viruses created in \n2007, 35 the continued increase in malicious code threats \nhas been linked to the sharp rise in the development of \nnew Trojans and the apparent existence of institutions \nthat employ “ professionals ” dedicated to creation of new \nthreats. 36 In 2008 it has been reported in the media that \n “ over the past year to 18 months, there has been ‘ a huge \nincrease in focused attacks on our [United States] national \ninfrastructure networks . . . . and they have been coming \nfrom outside the United States. 
’ ” 37 \n It is common knowledge that over the past 10 years, \nthe United States has tried to save both manpower and \ncosts by establishing a system to remotely control and \nmonitor the electric utilities, pipelines, railroads, and oil \n 15 Goodwin “ Don’t Techno for an Answer: The false promise of IW. ” \n 16 Israeli citizen arrested in Israel for hacking United States and \nIsraeli Government Workstations (1998) http://www.usdoj.gov/crimi-\nnal/cybercrime/ehudpr.hgm (accessed on 13 October 2002). \n 17 Hutchinson and Warren, IW – Corporate Attack and Defence in a \nDigital World (2001) . \n 18 Skoric (1999), http://amsterdam.nettime.org/Lists-Archives/net-\ntime-l-9906/msg00152.html , (accessed on 03 October 2002). \n 19 Messmer (1999) http://www.cnn.com/TECH/computing/9905/12/\ncyberwar.idg/ , (accessed on 03 October 2002). \n 20 “ Web Bandit ” Hacker Sentenced to 15 Months Imprisonment, 3 Years \nof Supervised Release, for Hacking USIA, NATO, Web Sites (1999), www.\nusdoj.gov/criminal/cybercrime/burns.htm (accessed on 13 October 2002). \n 21 Access to NATO’s Web Site Disrupted (1999), www.cnn.com/\nWORLD/europe/9903/31/nato.hack/ (accessed on 03 October 2002). \n 22 Lusher (2000), www.balkanpeace.org/hed/archive/april00/hed30.\nshtml (accessed on 03 October 2002). \n 23 Hacker Pleads Guilty in New York City to Hacking into Two NASA \nJet Propulsion Lab Workstations Located in Pasadena, California \n(2000), www.usdoj.gov/criminal/cybercrime/rolex.htm (accessed on \n13 October 2002). \n 24 (2000) www.usdoj.gov/criminal/cybercrime/VAhacker2.htm (accessed \non 13 October 2002). \n 25 www.coe.int/T/E/Legal_affairs/Legal_co-operation/Combating_\neconomic_crime/Cybercrime/International_conference/ConfCY(2001)\n5E-1.pdf (accessed on 9 October 2002). \n 26 Rogers, Protecting America against Cyberterrorism U.S. Foreign \nPolicy Agenda (2001). \n 27 (2001) “ Hacker Into United States Courts ’ Information System \nPleads Guilty, ” www.usdoj.gov/criminal/cybercrime/MamichPlea.htm \n(accessed on 13 October 2002). \n 28 (2001) “ Computer Hacker Intentionally Damages Protected Computer ” \n www.usdoj.gov/criminal/cybercrime/khanindict.htm \n(accessed \non \n13 October 2002). \n 29 Hacker Infl uenced by Politics is Also a Terrorist (2002), www.utro.\nru/articles/2002/07/24/91321.shtml (accessed on 24 July 2002). \n 30 www.echocct.org/main.html (accessed on 20 September 2002). \n 31 Hackers Hit Power Companies (2002) www.cbsnews.com/sto-\nries/2002/07/08/tech/main514426.shtml (accessed on 20 September 2002). \n 32 U.S. v. Konopka ( E.D.Wis .), www.usdoj.gov/criminal/cybercrime/\nkonopkaIndict.htm (accessed on 13 October 2002). \n 33 U.S. v. Gorshkov ( W.D.Wash ), www.usdoj.gov/criminal/cyber-\ncrime/gorshkovSent.htm (accessed on 13 October 2002). \n 34 Symantec Global Internet Security Threat Report Trends for July –\n December 07 (2008) Vol. 13, published April 2008 available at http://\neval.symantec.com/mktginfo/enterprise/white_papers/b-whitepaper_\ninternet_security_threat_report_xiii_04-2008.en-us.pdf accessed on 21 \nApril 2008 at p.45. \n 35 Symantec Global Internet Security Threat Report Trends for July –\n December 07 (2008) Vol. 13, published April 2008 available at http://\neval.symantec.com/mktginfo/enterprise/white_papers/b-whitepaper_\ninternet_security_threat_report_xiii_04-2008.en-us.pdf accessed on 21 \nApril 2008 at p. 45. \n 36 Symantec Global Internet Security Threat Report Trends for July –\n December 07 (2008) Vol. 
13, published April 2008 available at http://\neval.symantec.com/mktginfo/enterprise/white_papers/b-whitepaper_\ninternet_security_threat_report_xiii_04-2008.en-us.pdf accessed on 21 \nApril 2008 at p. 46. \n 37 Nakashima, E and Mufson, S, “ Hackers have attacked foreign utilities, \nCIA Analysts says, ” 19 January 2008, available at www.washingtonpost.\ncom/wp/dyn/conmttent/atricle/2008/01/18/AR2008011803277bf.html ,\naccessed on 28 January 2008. \n" }, { "page_number": 713, "text": "PART | VI Physical Security\n680\ncompanies all across the United States. 38 The reality of \nthe threat of IW has been officially confirmed by the U.S. \nFederal Energy Regulatory Commission, which approved \neight cyber-security standards for electric utilities, which \ninclude “ identity controls, training, security ‘ parameters ’ \nphysical security of critical cyber equipment, incident \nreporting and recovery. ” 39 In January 2008, a CIA analyst \nwarned the public that cyber attackers have hacked into \nthe workstation systems of utility companies outside the \nUnited States and made demands, which led to at least \none instance where, as a direct result, a power outage that \naffected multiple cities took place. 40 \n To be able to appreciate the reality of the threat of IW, \none only needs to look at the mild disruption of electricity \nin South Africa, which led to nervousness both within the \nSouth African local population as well as among interna-\ntional investors, who, as a direct result, withdrew more \nthan US$500,000 from the South African economy. The \nblanket disruption of all essential services of a country \nmay not only grind that country to a halt but could cause \nmajor population riots and further damage to an economy. \nThe appropriate questions that then arise are, How can IW \nbe brought about and how can one ward against it? \n 4. INFORMATION WARFARE: MAKING \nIW POSSIBLE \n To conduct any form of warfare, one would require an \narsenal of weapons. As far as IW is concerned, two gen-\neral groups of strategies need to be considered: offen-\nsive strategies and defensive strategies. The nature of \nthe beast that is IW is that constant research is essential \nand the most recent technological advances need to be \nemployed to effectively employ and resist IW. \n Offensive Strategies \n The arsenal of IW includes weapons of psychological and \ntechnical nature. Both are significant and a combination \nof the two can bring about astounding and highly disrup-\ntive results. \n Psychological Weapons \n Psychological weapons include social engineering tech-\nniques and psychological operations ( psyops ). Psyops \ninclude deceptive strategies. Deception has been part of \nwarfare in general for hundreds of years. \n Sun Tzu, in his fundamental work on warfare, says that \n “ All warfare is based on deception. ” 41 Deception has been \ndescribed as “ a contrast and rational effort … to mislead \nan opponent. ” 42 In December 2005, it became known that \nthe Pentagon was planning to launch a US$300 million \noperation to place pro-U.S. messages “ in foreign media \nand on items such as T-shirts and bumper stickers without \ndisclosing the U.S. government as the source. ” 43 \n Trust is a central concept and a prerequisite for any \npsyops to succeed. The first known exploitation of trust \nin the cyber environment relates to Kevin Mitnick’s \nexploitation of a network system. 
44 The dissemination of \nworkstation viruses as attachments to emails sent from \npeople that you know and trust is another example of a \npsyops, such as the distribution of “ I Love You ” virus \nfrom a number of years ago. 45 \n However, this does not represent an exhaustive list of \npsyops. Psyops can also target the general population by \nsubstituting the information on well-trusted news agen-\ncies ’ Web sites as well as public government sites with \ninformation favorable to the attackers. A good example is \nwhere the information on the Internet is misleading and \ndoes not reflect the actual situation on the ground. \n The problem with psyops is that they cannot be used \nin isolation because once the enemy stops trusting the \ninformation it receives and disregards the bogus mes-\nsages posted for its attention, psyops become useless, at \nleast for some time. Therefore, technical measures of IW \nshould also be employed to achieve the desired effect, \nsuch as DoS and botnet attacks. That way, the enemy \nmight not only be deceived but the information the enemy \nholds can be destroyed, denied, or even exploited. \n 39 Nakashima, E and Mufson, S, “ Hackers have attacked foreign utili-\nties, CIA Analysts says, ” 19 January 2008, available at www.washington-\npost.com/wp/dyn/conmttent/atricle/2008/01/18/AR2008011803277bf.\nhtml , accessed on 28 January 2008. \n 40 Nakashima, E and Mufson, S. (2008), “ Hackers have attacked for-\neign utilities, CIA Analysts says, ” www.washingtonpost.com/wp/dyn/\nconmttent/atricle/2008/01/18/AR2008011803277bf.html , accessed on \n28 January 2008. \n 41 Sun Tzu, Art of War . \n 42 Thornton, R., Asymmetric Warfare – Threat and Response in the \nTwenty-First Century , 2007. \n 43 www.infowar-monitor.net/modules.php?op \u0003 modload & name \u0003 \nNews & sid \u0003 1302 , accessed on 28 September 2006. \n 44 Vacca, J. R., Computer Forensics: Computer Crime Scene \nInvestigation (2nd Edition) Charles River Media, 2005. \n 45 Vacca, J. R., Computer Forensics: Computer Crime Scene \nInvestigation (2nd Edition) Charles River Media, 2005. \n 38 Nakashima, E and Mufson, S, “ Hackers have attacked foreign utili-\nties, CIA Analysts says, ” 19 January 2008, available at www.washington-\npost.com/wp/dyn/conmttent/atricle/2008/01/18/AR2008011803277bf.\nhtml , accessed on 28 January 2008. \n" }, { "page_number": 714, "text": "Chapter | 39 Information Warfare\n681\n Technical Weapons \n The technical weapons in IW can be subdivided into net-\nwork, hardware, and software weapons. Network-based \nweapons relate to DoS attacks as well as the DDoS attacks. \nIn plain language, it means that a specific workstation that \ncontains required information cannot be accessed because \nof the intense volume of simultaneous requests sent from \ndifferent workstations that have access to that workstation \noverload the system. It is unable to deal with the sheer vol-\nume of requests for access and so crashes. \n Hardware-based weapons include putting defective \nparts into computer hardware, aiming to physically dam-\nage the whole unit. An example is the Exocets, French \nmissiles that were reportedly sold to the Iraqis and \ncontained backdoors to their workstation guidance sys-\ntems, 46 which would ultimately render them harmless to \nFrench-allied troops. 47 \n Software-based technical weapons are quite varied and \nhave been recognized as important techniques of strategic \nIW. 
48 Such malicious software includes Trojans, (distrib-\nuted) DoS attacks, viruses, worms, and time bombs. \n Trojans \n Trojans have been defined as an advanced technique in \nhacking and pieces of code hidden in a benign/useful \nprogram — for example, a “ green saver. ” Trojans require \ncooperation from the user, and in this sense this weapon \noverlaps the psyops. Trojans can also, however, be intro-\nduced by an executable script that is forcefully implanted \ninto a user’s workstation. Well-known scripts used for \nthese purposes are, among others, JavaScript or Active \nControl. Trojans exploit the security deficiencies of the \nsystem to gain unauthorized access to sources such as an \nFBI database. Trojans represent a perfect accomplice for \nDoS or DDoS attacks. \n It is therefore no surprise that five of the top 10 new \nmalicious code families discovered in the latter half of \n2007 were all Trojans. The Farfli Trojan77, for instance, \ntook third place on the list of the new and most-spread \nmalicious codes in that period. The distinct feature of \nthis Trojan was not just its capability to download and \ninstall other threats onto the compromised workstation \nbut the fact that the affected browsers were developed \nby Chinese programmers and were specifically aimed at \nChinese users. 49 Since a Trojan creates an opportunity \nfor a two-stage attack, it has become uniquely useful \nas a stepping stone for putting in place DoS and DDoS \nattacks in the IW context. \n Denial-of-Service Attacks \n Considering network attacks, it is important to remem-\nber that networks use a layered approach in moving \ndata from one point to another. These layers are con-\nceptual, and the model most commonly used is the ISO \nOpen Systems Interconnect (OSI) model (Information \nProcessing Systems, 1994). Lower layers are nearer the \nhardware and deal with transporting bytes across net-\nworks and applications; upper layers are concerned with \npresenting and adapting information for the user. \n For attacks directed at lower levels in the OSI model, \nit is often the case that actual network components are \ntargeted. For example, one crude but highly effective \nstrike is the DoS attack. \n DoS attacks constitute a serious threat to any \nnetwork-dependant environment. 50 Both Windows and \nUnix platforms are susceptible to this type of attack. 51 \n Although no data is usually destroyed 52 in a DoS \nassault, the service to legitimate users, networks, sys-\ntems, or other resources is completely disrupted and/or \ndenied. 53 A network is therefore rendered inaccessible \ndue to the type or amount of traffic generated, which, \nin turn, “ crashes servers, overwhelms the routers, or \notherwise prevents the network’s devices from func-\ntioning properly. ” 54 There are a number of ways a DoS \nattack can be carried out. A server’s resources could, for \n 46 Lough, B. C., and Mungo, P. (1992), Approaching Zero: Data \nCrime and the Workstation Underworld , Faber and Faber, 186–187. \n 47 These back doors would allow the French military to send a \nradio signal to the on-board workstation. See also B. C. Lough, and \nP. Mungo, (1992), Approaching Zero: Data Crime and the Workstation \nUnderworld , Faber and Faber, 187. \n 48 Lonsdale, D.J., The Nature of War and Information Age: \nClausewitzian Future , Frank Cass, 2004. \n 49 Symantec Global Internet Security Threat Report Trends for July –\n December 07 (2008), Vol. 13, p. 
47–48, http://eval.symantec.com/\nmktginfo/enterprise/white_papers/b-whitepaper_internet_security_\nthreat_report_xiii_04-2008.en-us.pdf , accessed on 21 April 2008. \n 50 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking \nExposed: Network Security Secrets & Solutions, 4th ed., McGraw-Hill/\nOsborne. \n 51 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking \nExposed: Network Security Secrets & Solutions, 4th ed., McGraw-Hill/\nOsborne. \n 52 Shinder, D. L. (2002), Scene of the Cybercrime: Workstation \nForensics Handbook , Syngress Publishing. \n 53 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking Exposed: \nNetwork Security Secrets & Solutions, 4th ed., McGraw-Hill/Osborne. \n504. See also Conklin, W. A., White, G. B., Cothren, C., Williams, D., and \nDavis, R. L. (2004), Principles of Workstation Security: Security \u0002 and \nBeyond , McGraw-Hill Technology Education. \n 54 Shinder, D. L. (2002), Scene of the Cybercrime: Workstation Forensics \nHandbook, Syngress Publishing. \n" }, { "page_number": 715, "text": "PART | VI Physical Security\n682\ne xample, be tied up or a particular workstation may need \nto be rebooted all the time. 55 \n What makes this attack even more dangerous is that \noften there are no skill requirements on the part of the \nhacker because most, if not all, of the necessary tools are \nreadily available on the Internet. 56 In other words, many \nDoS tools are classified as “ point and click ” and require \n “ very little technical skill to run. ” 57 The fact that so-\ncalled “ most-skilled hackers ” frown 58 on the use of DoS \nattacks by amateurs does not detract from its malicious \nnature or destructive potential. \n In recent years, profit-taking and invisibility dominated \nthe underworld of the Internet. This is evidenced by the \nuse of bots, programs covertly installed on an end user’s \nmachine to “ allow an unauthorized user to remotely control \nthe targeted system through a communication channel, such \nas IRC, peer-to-peer (P2P), or HTTP. ” 59 Once a substantial \nnumber of workstations are compromised, they can, in turn, \nbe used in a DoS attack against a specific network. \n The adoption of Supervisory Control and Data \nAcquisition (SCADA) network-connected systems for the \nU.S. infrastructure, such as power, water, and utilities, 60 \nhas made a DoS attack a lethal weapon of choice. The sus-\nceptibility of the networking protocols, such as TCP/IP, to \nDoS attacks is easily explained. First, these protocols were \ndesigned to be used in a trusted environment. 61 Furthermore, \nthe number of flaws on the network stack level in operat-\ning systems and network devices creates an environment in \nwhich attacks become irresistible and inevitable. 62 \n Whether it is bandwidth consumption, 63 resource starva-\ntion, 64 programming flaws, 65 routing, 66 or a generic DoS 67 \ndoes not affect the damage capabilities of this attack. \n An example of a DoS attack is a “ fraggle ” attack, \nwhich works as follows: Ping packets are sent to a subnet \nby the attacker, allegedly from the IP address of the tar-\ngeted victim. 68 As a result, all workstations on the subnet \nflood the victim’s machine with echo reply messages. 69 \nIn 1999, this type of DoS was used by hackers while the \nKosovo crisis was under way. 70 In particular, the pro-\nSerbian “ hacktivists ” on a continuous basis “ fraggled ” \nthe U.S. and NATO Web sites with the purposes of over-\nloading and bringing them down. 
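To make the mechanics of a fraggle-style echo-reply flood more concrete, the following minimal Python sketch (not part of the original chapter; the packet records, the 1,000-reply threshold, and the one-second window are illustrative assumptions) shows how a monitoring script might flag a host that is suddenly receiving an abnormal volume of ICMP echo replies:

```python
from collections import defaultdict, deque

# Illustrative values: more than 1,000 echo replies delivered to a single
# host within one second is treated here as a possible fraggle/smurf flood.
THRESHOLD = 1000
WINDOW_SECONDS = 1.0

def detect_echo_reply_flood(packets):
    """packets: iterable of (timestamp, dst_ip, icmp_type) tuples in time
    order; ICMP type 0 is an echo reply."""
    recent = defaultdict(deque)   # dst_ip -> timestamps of recent replies
    suspects = set()
    for ts, dst_ip, icmp_type in packets:
        if icmp_type != 0:        # only echo replies are of interest here
            continue
        window = recent[dst_ip]
        window.append(ts)
        # Discard timestamps that have slid out of the one-second window.
        while window and ts - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) > THRESHOLD:
            suspects.add(dst_ip)
    return suspects

# Example: 1,500 replies aimed at one address in half a second raise a flag.
flood = [(i * 0.0003, "192.0.2.10", 0) for i in range(1500)]
print(detect_echo_reply_flood(flood))   # {'192.0.2.10'}
```

A real sensor would read this information from live capture or router telemetry rather than from an in-memory list, but the counting logic remains the same.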
71 \n DoS attacks are not only used to prevent access to a \nsystem. A SYN flooding attack, for example, is employed \nto temporarily interrupt the flow of normal service so that \na trusted relationship between the compromised system \nand another system can be taken advantage of. 72 \n To protect a network from a DoS attack, continuous \nimprovement of a network’s security is required. This \nentails daily updates of antivirus definitions, use of intru-\nsion detection and intrusion prevention systems, and \nemployment of ingress and egress filtering 73 on all net-\nwork traffic, in combination with other behavior-blocking \ntechnologies. Most of the time, however, these techniques \nare outdated, and it may be that only certain types of DoS \n 57 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking Exposed: \nNetwork Security Secrets & Solutions, 4th ed., McGraw-Hill/Osborne, \n504. See also Shinder, D. L. (2002), Scene of the Cybercrime: \nWorkstation Forensics Handbook , Syngress Publishing, 317. \n 58 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking Exposed: \nNetwork Security Secrets & Solutions, 4th ed., McGraw-Hill/Osborne, 505. \n 59 Symantec Global Internet Security Threat Report Trends for July –\n December 07 (2008), Vol. 13, p. 20, http://eval.symantec.com/mktginfo/\nenterprise/white_papers/b-whitepaper_internet_security_threat_report_\nxiii_04-2008.en-us.pdf , accessed on 21 April 2008. \n 60 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking Exposed: \nNetwork Security Secrets & Solutions, 4th ed., McGraw-Hill/Osborne, 505. \n 61 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking Exposed: \nNetwork Security Secrets & Solutions, 4th ed., McGraw-Hill/Osborne, 505. \n 62 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking Exposed: \nNetwork Security Secrets & Solutions, 4th ed., McGraw-Hill/Osborne, \n505. See also Conklin, W. A., White, G. B., Cothren, C., Williams, D., and \nDavis, R. L. (2004), Principles of Workstation Security: Security \u0002 and \nBeyond , McGraw-Hill Technology Education, 396. \n 63 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking Exposed: \nNetwork Security Secrets & Solutions, 4th ed., McGraw-Hill/Osborne, \n506. \n 64 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking Exposed: \nNetwork Security Secrets & Solutions, 4th ed., McGraw-Hill/Osborne, \n506. \n 65 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking Exposed: \nNetwork Security Secrets & Solutions, 4th ed., McGraw-Hill/Osborne, \n506. \n 66 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking Exposed: \nNetwork Security Secrets & Solutions, 4th ed., McGraw-Hill/Osborne, \n507. \n 67 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking Exposed: \nNetwork Security Secrets & Solutions, 4th ed., McGraw-Hill/Osborne, \n508–509. \n 68 Shinder, D. L. (2002), Scene of the Cybercrime: Workstation \nForensics Handbook , Syngress Publishing, 320. \n 69 Shinder, D. L. (2002), Scene of the Cybercrime: Workstation \nForensics Handbook , Syngress Publishing, 320. \n 70 Shinder, D. L. (2002), Scene of the Cybercrime: Workstation \nForensics Handbook , Syngress Publishing, 320. \n 71 Shinder, D. L. (2002), Scene of the Cybercrime: Workstation \nForensics Handbook , Syngress Publishing, 320. \n 72 Conklin, W. A., White, G. B., Cothren, C., Williams, D., and \nDavis, R. L. (2004), Principles of Workstation Security: Security \u0002 and \nBeyond , McGraw-Hill Technology Education, 396. 
\n 73 Symantec Global Internet Security Threat Report Trends for \nJuly – December 07 (2008), Vol. 13, p. 23, http://eval.symantec.com/\nmktginfo/enterprise/white_papers/b-whitepaper_internet_security_\nthreat_report_xiii_04-2008.en-us.pdf , accessed on 21 April 2008. \n 55 Shinder, D. L. (2002), Scene of the Cybercrime: Workstation \nForensics Handbook, Syngress Publishing, 317. \n 56 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking Exposed: \nNetwork Security Secrets & Solutions, 4th ed., McGraw-Hill/Osborne, \n504. See also Shinder, D. L. (2002), Scene of the Cybercrime: \nWorkstation Forensics Handbook , Syngress Publishing, 317. \n" }, { "page_number": 716, "text": "Chapter | 39 Information Warfare\n683\nattacks are guarded against at any given time as a result \nof new attacks that enter the realm of the Internet on a \ndaily basis. \n The picture would, however, remain incomplete \nwithout mentioning that the U.S. law enforcement agen-\ncies are hot on the heels of the intruders. In 2007 their \nactions, through Operation Bot Roast II, led to an 11% \ndecrease in botnet attacks and the indictment of eight \npeople for crimes related to botnet activity. 74 \n Distributed Denial-of-Service Attacks \n The DDoS attack is another type of attack at the OSI \nlevel of the network. The first mass DDoS attack took \nplace in February 2000. 75 Although the attack was remi-\nniscent of a by then well-known DoS strategy, its feroc-\nity justified placing it in a class of its own. 76 Seven major \nWeb sites at the time, 77 including eBay, CNN, Amazon, \nand Yahoo!, 78 fell victim to the first DDoS. In recent \nyears, DDoS assaults have become increasingly popular, \nlargely due to the availability of exploits, which require \nlittle knowledge and/or skills to implement. 79 \n Unlike a DoS attack, the source of a DDoS attack \noriginates from a multitude 80 of workstations, possibly \nlocated all across the world. 81 Its aim, however, is identi-\ncal: to bar legitimate parties from accessing and using a \nspecific system or service. 82 \n The use of intermediate workstations, sometimes \nreferred to as agents 83 or zombies , 84 presupposes that in \ncarrying out a DDoS assault, the perpetrator would go \nthrough a two-step process. 85 First, a number of worksta-\ntions (which may number in the thousands) 86 are com-\npromised to turn them into a weapon in the main action. \nTo achieve this goal, the attacker must either gain unau-\nthorized access to the system or induce an authorized \nuser or users to install software that is instrumental for \nthe purposes of the DDoS assault. 87 \n Thereafter, the hacker launches the attack against a third \nsystem by sending appropriate instructions with data 88 on \nthe specific target system to the compromised machines. 89 \nThus the attack is carried out against a third system \nthrough remote control of other systems over the Internet. 90 \nApplication, operating system, and protocol exploits could \nall be used to cause a system to overload and consequently \ncreate a DoS on an unprecedently large scale. 91 The end \nresult would depend on the size of the attack network in \nquestion, keeping in mind that even ordinary Web traffic \nmay be sufficient to speedily overwhelm even the largest of \nsites 92 and render them absolutely useless. 93 \n Tribe FloodNet (also known as TFN), TFN2K, Trinoo, \nand Stacheldraht are all examples of DDoS tools. Some \nof them (the TFN2K) can be used against both Unix and \nWindows systems. 
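The defensive counterpart to these tools is the per-source throttling applied by ingress filters and behavior-blocking devices. The short Python sketch below is a hedged illustration only (the 20-requests-per-second rate, the burst size of 40, and the class name are assumptions made for this example, not a production design); it shows a token-bucket check that would quietly drop traffic from any single address exceeding its allowance, whether that address belongs to a lone attacker or to one zombie among thousands:

```python
import time

# Illustrative limits: each source address may send roughly 20 requests per
# second, with short bursts of up to 40 absorbed by the bucket.
RATE = 20.0      # tokens replenished per second
BURST = 40.0     # maximum bucket size

class TokenBucketLimiter:
    def __init__(self):
        self.buckets = {}   # src_ip -> (tokens remaining, last refill time)

    def allow(self, src_ip, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(src_ip, (BURST, now))
        # Refill the bucket in proportion to the time since the last packet.
        tokens = min(BURST, tokens + (now - last) * RATE)
        if tokens >= 1.0:
            self.buckets[src_ip] = (tokens - 1.0, now)
            return True     # within limits: forward the request
        self.buckets[src_ip] = (tokens, now)
        return False        # over the limit: drop it

limiter = TokenBucketLimiter()
# One zombie hammering the server at 1,000 requests per second: only the
# initial burst (plus a trickle of refills) is allowed through.
results = [limiter.allow("203.0.113.7", now=i * 0.001) for i in range(200)]
print(results.count(True), "allowed,", results.count(False), "dropped")
```

Against a genuinely distributed flood such a filter must be combined with the upstream measures discussed earlier, since thousands of cooperating zombies can each stay comfortably under any per-address limit.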
94 \n 74 Symantec Global Internet Security Threat Report Trends for \nJuly – December 07 (2008), Vol. 13, p. 22, http://eval.symantec.com/\nmktginfo/enterprise/white_papers/b-whitepaper_internet_security_\nthreat_report_xiii_04-2008.en-us.pdf , accessed on 21 April 2008. \n 75 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking Exposed: \nNetwork Security Secrets & Solutions , 4th ed., McGraw-Hill/Osborne, 518. \n 76 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking Exposed: \nNetwork Security Secrets & Solutions , 4th ed., McGraw-Hill/Osborne, 504. \n 77 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking Exposed: \nNetwork Security Secrets & Solutions , 4th ed., McGraw-Hill/Osborne, 518. \n 78 Harrison A (2000) “ Cyber assaults hit Buy.com, eBay, CNN and \nAmazon, ” available online at www.workstationworld.com/news/2000/\nstory/0,11280,43010,00.html and accessed on 16 May 2005. See \nalso Conklin, W. A., White, G. B., Cothren, C., Williams, D., and \nDavis, R. L. (2004), Principles of Workstation Security: Security \u0002 and \nBeyond , McGraw-Hill Technology Education, 397. \n 79 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking Exposed: \nNetwork Security Secrets & Solutions, 4th ed., McGraw-Hill/Osborne, 525. \n 80 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking Exposed: \nNetwork Security Secrets & Solutions, 4th ed., McGraw-Hill/Osborne, \n518. See also Conklin, W. A., White, G. B., Cothren, C., Williams, \nD., and Davis, R. L. (2004), Principles of Workstation Security: \nSecurity \u0002 and Beyond , McGraw-Hill Technology Education, 397. \n 81 Shinder DL (2002) Scene of the Cybercrime: Workstation Forensics \nHandbook (Syngress Publishing, Inc, Rockland) 317. \n 82 Conklin, W. A., White, G. B., Cothren, C., Williams, D., and \nDavis, R. L. (2004), Principles of Workstation Security: Security \u0002 and \nBeyond , McGraw-Hill Technology Education, 397. \n 83 Shinder, D. L. (2002), Scene of the Cybercrime: Workstation \nForensics Handbook , Syngress Publishing, 317. \n 84 Conklin, W. A., White, G. B., Cothren, C., Williams, D., and \nDavis, R. L. (2004), Principles of Workstation Security: Security \u0002 and \nBeyond , McGraw-Hill Technology Education, 397. \n 85 Conklin, W. A., White, G. B., Cothren, C., Williams, D., and \nDavis, R. L. (2004), Principles of Workstation Security: Security \u0002 and \nBeyond , McGraw-Hill Technology Education, 397. \n 86 Shinder, D. L. (2002), Scene of the Cybercrime: Workstation \nForensics Handbook , Syngress Publishing, 317. \n 87 Conklin, W. A., White, G. B., Cothren, C., Williams, D., and \nDavis, R. L. (2004), Principles of Workstation Security: Security \u0002 and \nBeyond , McGraw-Hill Technology Education, 397. \n 88 Conklin, W. A., White, G. B., Cothren, C., Williams, D., and \nDavis, R. L. (2004), Principles of Workstation Security: Security \u0002 and \nBeyond , McGraw-Hill Technology Education, 397. \n 89 Shinder, D. L. (2002), Scene of the Cybercrime: Workstation \nForensics Handbook , Syngress Publishing, 317. \n 90 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking Exposed: \nNetwork Security Secrets & Solutions , 4th ed., McGraw-Hill/Osborne, 518. \n 91 Shinder, D. L. (2002), Scene of the Cybercrime: Workstation \nForensics Handbook , Syngress Publishing, 317. \n 92 Conklin, W. A., White, G. B., Cothren, C., Williams, D., and \nDavis, R. L. (2004), Principles of Workstation Security: Security \u0002 and \nBeyond , McGraw-Hill Technology Education, 397. 
\n 93 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking Exposed: \nNetwork Security Secrets & Solutions , 4th ed., McGraw-Hill/Osborne, 525. \n 94 Shinder, D. L. (2002), Scene of the Cybercrime: Workstation \nForensics Handbook , Syngress Publishing, 317. \n" }, { "page_number": 717, "text": "PART | VI Physical Security\n684\n What is interesting is that not only hackers have con-\ntemplated the use of DoS and DDoS attacks to further \ntheir aims. Information recently leaked from military \nsources indicates that warfare capabilities are currently \nofficially classified to include DoS attacks as opposed to \njust conventional weapons of massive destruction. 95 The \nlogical conclusion, therefore, is that a DDoS assault, as \npart of IW, represents a type of attack that is considered \ncapable of bringing any country to a standstill. \n In considering technical remedies for a DDoS, it \nshould be kept in mind that DDoS attacks pose a dual \nthreat 96 to both a primary or secondary victim network \nin that a network could be either the target or the vehicle \nof the DDoS assault. As is the case with DoS, the most \neffective precaution against a DDoS attack is ensuring that \nthe latest patches and upgrades are installed on the system \nconcerned. 97 Finally, the system administrator may also \nadjust “ the timeout option for TCP connections ” 98 as well \nas try to intercept or block the attack messages. 99 \n Viruses \n A virus is defined as “ a program designed to spread \nitself by first infecting the executable files or the system \nareas of hard or floppy disks and then making copies of \nitself. Viruses usually operate without the knowledge or \ndesire of the workstation user. ” 100 \n Three types of viruses are prevalent: boot sector, \nprogram, and macro viruses. Boot-sector viruses attack \nthe boot sector and render the whole system inoperable, \nwhether temporarily or permanently. Program-type viruses \ncome in executable format and load automatically and do \nnot need to be launched, whether by the user or otherwise. \nMacro viruses are small applications embedded in a file, \nsuch as a Microsoft Word document, and automate “ the \nperformance of some task or sequence. ” 101 \n In the last six months of 2007, 15% of the top 50 poten-\ntial malicious code infections consisted of viruses, up from \n10% in the previous six-month period and 5% in the last \nhalf of 2006. 102 Recently, however, viruses have been used \nas an introductory agent or “ payload ” for a worm, which \ncauses more harm than the virus would cause by itself. \nIn 2007, Symantec recorded a 5% increase in viruses but \nnoted that the popularity of a virus as a technique could \nbe solely attributed to its use as an introductory agent for \nworms that incorporate a “ viral infection component. ” 103 \n Worms \n The difference between viruses and worms is that worms \nare automated programs that penetrate the network and \neach and every workstation that is on it, subject to any \nantivirus programs that may be installed on them. As \nwith any workstation program, worms range from very \nsimple to most sophisticated, making them some of \nthe most security-destructive articles available on the \nNet. The classic examples of worms are Love-bug and \nCodeRed, which are well-known in the IT world. \n It is no surprise that in 2007, half of the top 10 mali-\ncious code families were worms. 
The variation among \ntheir types speaks for itself: Two of the 10 were straight-\nforward worms, two were worms with a backdoor com-\nponent, and one had a virus component. 104 \n The first and second most popular malicious code \nfamilies in 2007 were the Invadesys worm.75 and the \nNiuniu76 worms. Their distinct features lie in the fact \nthat they prepend their codes to Web pages that the user \nof the compromised workstation visits regularly, giv-\ning rise to a new and rapidly growing trend in malicious \ncodes. 105 The popularity of the trend is largely attributed \nto the fact that the author usually redirects the victim’s \nInternet Explorer to his own page and thus earns money \nfrom the sponsors who place links on his Web page. \n 95 McClure, S., Scambray, J., and Kurtz, G. (2003), Hacking Exposed: \nNetwork Security Secrets & Solutions , 4th ed., McGraw-Hill/Osborne, \n525. \n 96 Shinder, D. L. (2002), Scene of the Cybercrime: Workstation \nForensics Handbook , Syngress Publishing, 317. \n 97 Conklin, W. A., White, G. B., Cothren, C., Williams, D., and \nDavis, R. L. (2004), Principles of Workstation Security: Security \u0002 and \nBeyond , McGraw-Hill Technology Education, 397. \n 98 Conklin, W. A., White, G. B., Cothren, C., Williams, D., and \nDavis, R. L. (2004), Principles of Workstation Security: Security \u0002 and \nBeyond , McGraw-Hill Technology Education, 398. \n 99 Conklin, W. A., White, G. B., Cothren, C., Williams, D., and \nDavis, R. L. (2004), Principles of Workstation Security: Security \u0002 and \nBeyond , McGraw-Hill Technology Education, 398. \n 100 Young, S., and Aitel, D., The Hackers Handbook: The strategy \nbehind breaking into and defending networks , p. 529. \n 101 Shinder, D. L. (2002), Scene of the Cybercrime: Workstation \nForensics Handbook , Syngress Publishing, 337. \n 102 Symantec Global Internet Security Threat Report Trends for \nJuly – December 07 (2008), Vol. 13, p. 50, http://eval.symantec.com/\nmktginfo/enterprise/white_papers/b-whitepaper_internet_security_\nthreat_report_xiii_04-2008.en-us.pdf , accessed on 21 April 2008. \n 103 Symantec Global Internet Security Threat Report Trends for \nJuly – December 07 (2008), Vol. 13, p. 50, http://eval.symantec.com/\nmktginfo/enterprise/white_papers/b-whitepaper_internet_security_\nthreat_report_xiii_04-2008.en-us.pdf , accessed on 21 April 2008. \n 104 Symantec Global Internet Security Threat Report Trends for \nJuly – December 07 (2008), Vol. 13, p. 47, http://eval.symantec.com/\nmktginfo/enterprise/white_papers/b-whitepaper_internet_security_\nthreat_report_xiii_04-2008.en-us.pdf , accessed on 21 April 2008. \n 105 Symantec Global Internet Security Threat Report Trends for \nJuly – December 07 (2008), Vol. 13, p. 47, http://eval.symantec.com/\nmktginfo/enterprise/white_papers/b-whitepaper_internet_security_\nthreat_report_xiii_04-2008.en-us.pdf , accessed on 21 April 2008. \n" }, { "page_number": 718, "text": "Chapter | 39 Information Warfare\n685\n Time Bombs \n Time bombs are another category of malicious software \nthat may be employed for purposes of IW. Time bombs \nare intentionally installed by illegitimate users who fully \nunderstand the implications of their actions and deliber-\nately seek the outcome thereof. 
In the commercial sense \nof the word, a time bomb may be installed for the pur-\nposes of bringing down the whole network infrastructure \nof a company that trades on a national level at various \nshopping centers, thereby causing great loss of sales \nand, ultimately, loss of profits. 106 In the conduct of IW, \nthe implementation of malicious software that can crash \na banking system, electricity source, and water supply at \nthe same time may be very effective, together with the \ndistribution of pamphlets that say that these types of fail-\nures are only the beginning of the offensive. The other \nexamples of malicious software with nefarious purposes \nare those nonautomated hacking tools that are available \nfor download from the Internet. Such tools include, for \ninstance, a tool one could use to hack into one worksta-\ntion on a system, such as an FBI database. \n All of these types of malicious software, used in \ncombination with psychological weapons, may prove \nvery effective at attacking in a high-tech country such as \nthe United States or the United Kingdom and rendering \nit defenseless. For these reasons the aforesaid countries \nhave put in place stringent policies and institutions to \ndeal with IW attacks of the present or future. \n 5. PREVENTATIVE STRATEGIES \n As far as prevention is concerned, experts agree that \n “ there is no silver bullet against IW attacks. ” 107 The U.S. \nDepartment of Defence in Information Operation Roadmap \nrecommends that a special unit be created with a budget of \nUS$15 million to improve the current psyops force struc-\nture that exists in the United States. 108 The roadmap specif-\nically acknowledges that these remedies are necessary but \nrequire an advanced “ 21st-century technology ” to attain \n “ the long-range dissemination of psyops messages by new \ninformation venues such as solid satellites, the Internet, \npersonal digital assistants and cell phones. ” 109 \n For the most part, the attacks listed here are prevent-\nable and detectable. The problem facing a large target \nentity such as a sovereign nation is to coordinate its \ndefense of many possible individual targets. Policies and \nprocedures must be consistent and thoroughly followed. \nThis is a mammoth task, given the heterogeneous nature \nof large computing systems. \n Current solutions are of an organizational nature. \nMany developed countries have response teams such as \nthe Computer Emergency Response Teams (CERT), but \nthese deal only with technicalities of attacks. Higher-level \ninvolvement from government is required to act as a line \nof defense for IW. The U.S. Department of Homeland \nSecurity has forged a link with the private and public \nsector in the form of the US-CERT, with the blessing of \na national strategy for cyber defense. In the U.K., a simi-\nlar role is played by the National Infrastructure Security \nCo-ordination Centre. \n South Africa, as an example country of the develop-\ning world, does not yet have a high-level commitment \nto digital defense; however, there are initiatives in the \npipeline to address IW issues. A number of international \nefforts that aim at securing the Internet and prevent-\ning attacks such as the ones mentioned here have been \nimplemented. One such initiative is the adoption of the \nEuropean Convention of Cybercrime 2001, which deals \nwith the commercial aspects of the Internet transactions. 
\nAs far as the military aspects of IW are concerned, there \nhave been calls from a number of countries, notably \nRussia, that the Internet be placed under control of the \nUnited Nations. 110 \n As far as offensive weapons are concerned, the coun-\ntermeasures that can be employed to prevent the attacks \nmostly consist of countermeasures that resemble the type \nof the original attack. For example, to avoid information \nby country B inflicting damage, country A may very \nwell start with an offensive psyops itself, which may \nlead to the confusion of country B’s government and the \neventual cancellation of the original psyops it itself had \nplanned. \n Other technical measures usually involve informa-\ntion security services such as proactive creation of anti-\nviruses, network security suits, and extensive use of \nencryption. The only problem with encryption, however, \nis that control over it is not centralized and it is therefore \nnot possible for a government to govern and control the \nInternet from a global perspective in this manner. \n 106 Eloff, J., and Granova, A. (2003) (Computer Crime Case), \n Computer Fraud and Security , October 2003, p 14. \n 107 Lonsdale, D. J., The Nature of War and Information Age: \nClausewitzian Future at 140. \n 108 U.S. Department of Defense Information Operation Roadmap \ndated 30 October 2003, p. 64. \n 109 U.S. Department of Defense Information Operation Roadmap \ndated 30 October 2003, p. 65. \n 110 (2003) “ Russia wants the UN to take control over the Internet, ” \n www.witrina.ru/witrina/internet/?file_body \u0003 20031118oon.htm , \naccessed on 10 May 2005. \n" }, { "page_number": 719, "text": "PART | VI Physical Security\n686\n The Echelon system is well known as the system the \nU.S. implemented for purposes of monitoring the infor-\nmation flow on the Internet. It is well documented, and \nfor purposes of this discussion it suffices to say that the \nEchelon system may not be the best system to use. Some \nestimate that a new Web site is created every four sec-\nonds, which makes the Internet almost impossible to \nmonitor, especially if law enforcement is, as is true in \nsome instances, 10 years behind the current transnational \ncrime curve. 111 \n In this light, it is clear that IW is a reality that exists \nas a direct result of the various psychological and tech-\nnological advances of the human race. The question, \nhowever, remains: Is IW war or just an act of terrorism? \n 6. LEGAL ASPECTS OF IW \n The fact that the Internet is, by definition, international \nimplies that any criminal activity that occurs within its \ndomain is almost always of an international nature. 112 \nThe question that raises concern, however, is the degree \nof severity of the cyber attacks. This concern merits the \nfollowing discussion. \n Terrorism and Sovereignty \n Today more than 110 different definitions of terrorism \nexist and are in use. There is consensus only on one part \nof the definition, and that is that the act of terrorism must \n “ create a state of terror ” in the minds of the people. 
113 \n The following definition of “ workstation terrorism ” \nas a variation of IW is quite suitable: “ Computer terror-\nism is the act of destroying or of corrupting worksta-\ntion systems with an aim of destabilizing a country or \nof applying pressure on a government, ” 114 because the \ncyber attack’s objective, inter alia , is to draw immediate \nattention by way of causing shock in the minds of a spe-\ncific populace and thus diminishing that populace’s faith \nin government. \n Incidents such as hacking into energy plants, telecom-\nmunications facilities, and government Web sites cause \na sense of instability in the minds of a nation’s people, \nthereby applying pressure on the government of a par-\nticular country; therefore, these acts do qualify as terror-\nism and should be treated as such. Factual manifestations \nof war, that is, use of force and overpowering the enemy, \nceased to be part of the classical definition of “ war ” \nafter World War I, 115 and international writers began to \npay more attention to the factual circumstances of each \ncase to determine the status of an armed conflict. This \nis very significant for current purposes because it means \nthat, depending on the scale and consequences of a cyber \nattack, the latter may be seen as a fully fledged war, 116 \nand the same restrictions — for example, prohibition of an \nattack on hospitals and churches — will apply. 117 \n IW may seem to be a stranger to the concepts of pub-\nlic international law. This, however, is not the case, for \nthere are many similarities between IW and the notions \nof terrorism and war as embodied in international \ncriminal law. \n The impact of the aforesaid discussion on sover-\neignty is enormous. Admittedly a cornerstone of the \ninternational law, the idea of sovereignty, was officially \nentrenched in 1945 in article 2(1) of the United Nations \n(UN) Charter. 118 This being so, any IW attack, whatever \nform or shape it may take, will no doubt undermine the \naffected state’s political independence, because without \norder there is no governance. \n Furthermore, the prohibition of use of force 119 places \nan obligation on a state to ensure that all disputes are \nsolved at a negotiation table and not by way of crash-\ning of the other state’s Web sites or paralyzing its tele-\ncommunications facilities, thereby obtaining a favorable \noutcome of a dispute under duress. Finally, these rights \nof nonuse of force and sovereignty are of international \ncharacter and therefore “ international responsibility ” 120 \nfor all cyber attacks may undermine regional or even \ninternational security. \n Liability Under International Law \n There are two possible routes that one could pursue \nto bring IW wrongdoers to justice: using the concept \nof “ state responsibility, ” whereby the establishment \n 111 Webster, W. H. (1998), Cyber crime, Cyber terrorism, Cyber war-\nfare: Averting an Electronic Waterloo , CSIS Press: p. xiv. \n 112 Corell \n(2002), \n www.un.org/law/counsel/english/remarks.pdf \n(accessed on 20 September 2002). \n 113 J. Dugard, International Law: A South African Perspective 149 \n(2nd ed. 2000). \n 114 Galley (1996), http://homer.span.ch/~spaw1165/infosec/sts_en/ \n(accessed on 20 September 2002). \n 115 P. Macalister-Smith , Encyclopaedia of Public International Law \n1135 (2000). \n 116 Barkham, Informational Warfare and International Law , 34 \n Journal of International Law and Politics , Fall 2001, at 65. \n 117 P. 
Macalister-Smith , Encyclopaedia of Public International Law \n1400 (2000). \n 118 www.unhchr.ch/pdf/UNcharter.pdf (accessed on 13 October 2002). \n 119 www.unhchr.ch/pdf/UNcharter.pdf (accessed on 13 October 2002). \n 120 Spanish Zone of Morocco claims 2 RIAA, 615 (1923) at 641. \n" }, { "page_number": 720, "text": "Chapter | 39 Information Warfare\n687\nof a material link between the state and the individual \nexecuting the attack is imperative, or acting directly \nagainst the person, who might incur individual criminal \nresponsibility. \n State Responsibility \n Originally, states were the only possible actors on the \ninternational plane and therefore a substantial amount of \njurisprudence has developed concerning state responsi-\nbility. There are two important aspects of state respon-\nsibility that are important for our purposes: presence of \na right on the part of the state claiming to have suffered \nfrom the cyber attack and imputation of the acts of indi-\nviduals to a state. \n Usually one would doubt that such acts as cyber \nattacks, which are so closely connected to an individual, \ncould be attributable to a state, for no state is liable for \nacts of individuals unless the latter acts on its behalf. 121 \nThe situation, however, would depend on the concrete \nfacts of each case, as even an ex post facto approval of \nstudents ’ conduct by the head of the government 122 may \ngive rise to state responsibility. Thus, this norm of inter-\nnational law has not become obsolete in the technology \nage and can still serve states and their protection on the \ninternational level. \n Individual Liability \n With the advent of a human rights culture after the \nSecond World War, there is no doubt that individuals \nhave become participants in international law. 123 There \nare, however, two qualifications to the statement: First, \nsuch participation was considered indirect in that nation-\nals of a state are only involved in international law if \nthey act on the particular state’s behalf. Second, individ-\nuals were regarded only as beneficiaries of the protec-\ntion offered by the international law, specifically through \ninternational human rights instruments. 124 \n Individual criminal responsibility, however, has been \na much more debated issue, for introduction of such \na concept would make natural persons equal players in \ninternational law. This, however, has been done in cases \nof Nuremberg, the former Yugoslavia, and the Rwanda \ntribunals, and therefore, 125 cyber attacks committed dur-\ning the time of war, such as attacks on NATO web sites in \nthe Kosovo war, should not be difficult to accommodate. \n The difficulty is encountered with justification of use \nof the same terms and application of similar concepts to \nacts of IW, where the latter occurs independently from \na conventional war. Conventionally, IW as an act of war \nsounds wrong, and to consider it as such requires a con-\nventional classification. The definition of “ international \ncrimes ” serves as a useful tool that saves the situation: \nbeing part of jus cogens , 126 crimes described by terms \nsuch as “ aggression, ” “ torture, ” and “ against humanity ” \nprovide us with ample space to fit all the possible vari-\nations of IW without disturbing the very foundation of \ninternational law. Thus, once again there is support for \nthe notions of individual criminal responsibility for cyber \ncrimes in general public international law, which stand as \nan alternative to state responsibility. 
\n In conclusion, it is important to note that international \ncriminal law offers two options to an agreed state, and it \nis up to the latter to decide which way to go. The fact \nthat there are no clear pronouncements on the subject \nby an international forum does not give a blank amnesty \nto actors on an international plane to abuse the apparent \n lacuna , ignore the general principles, and employ unlaw-\nful measures in retaliation. \n Remedies Under International Law \n In every discussion, the most interesting part is the one \nthat answers the question: What are we going to do about \nit? In our case there are two main solutions or steps that \na state can take in terms of international criminal law in \nthe face of IW: employ self-defense or seek justice by \nbringing the responsible individual before an interna-\ntional forum. \n Self-Defense \n States may only engage in self-defense in cases of an \narmed attempt 127 which in itself has become a hotly \ndebated issue. This is due to recognition of obligation of \nnonuse of force in terms of Art.2(4) of the UN Charter \nas being not only customary international law but also \n jus cogens . 128 \n 121 M.N. Shaw , International Law 414 (2nd ed. 1986). \n 122 For example, in Tehran Hostages Case ( v. ) I.C.J. Reports , 1980 \nat 3, 34–35. \n 123 J. Dugard, International Law: a South African Perspective (2nd \ned. 2000), p. 1. \n 124 J. Dugard, International Law: a South African Perspective 1 (2nd \ned. 2000), p. 234. \n 125 M.C. Bassiouni , International Criminal Law (2nd ed. 1999), p. 26. \n 126 M.C. Bassiouni , International Criminal Law (2nd ed. 1999), p. 98. \n 127 U.N. Charter art. 51. \n 128 M.Dixon , Cases and Materials on International Law 570 (3rd ed., \n2000). \n" }, { "page_number": 721, "text": "PART | VI Physical Security\n688\n Armed attack, however, can be explained away by ref-\nerence to the time when the UN Charter was written, there-\nfore accepting that other attacks may require the exercise \nof the right to self-defense. 129 What cannot be discarded is \nthe requirement that this inherent right may be exercised \nonly if it aims at extinguishing the armed attack to avoid \nthe conclusion of it constituting a unilateral use of force. 130 \nFinally, a state may invoke “ collective self-defense ” in the \ncases of IW. Though possible, this type of self-defense \nrequires, first, an unequivocal statement by a third state \nthat it has been a victim of the attack, and second, such a \nstate must make a request for action on its behalf. 131 \n Therefore, invoking self-defense in cases of IW today, \nthough possible, 132 might not be a plausible option, \nbecause it requires solid proof of an attack, obtained \npromptly and before the conclusion of such an attack, 133 \nwhich at this stage of technological advancement is quite \ndifficult. The requirement that the attack should not be \ncompleted by the time the victim state retaliates hinges \non the fact that once damage is done and the attack is fin-\nished, states are encouraged to turn to international courts \nand through legal debate resolve their grievances without \ncausing more loss of life and damage to infrastructure. \n International Criminal Court \n The International Criminal Court (ICC) established by \nthe Rome Statute of 1998 is not explicitly vested with a \njurisdiction to try an individual who committed an act of \nterrorism. Therefore, in a narrow sense, cyber terrorism \nwould also fall outside the competence of the ICC. 
\n In the wide sense, however, terrorism, including \ncyber terrorism, could be and is seen by some authors as \ntorture. 134 That being so, since torture is a crime against \nhumanity, the ICC will, in fact, have a jurisdiction over \ncyber attacks, too. 135 \n Cyber terrorism could also be seen as crime against \npeace, should it take a form of fully fledged “ war on the \nInternet, ” for an “ aggressive war ” has been proclaimed an \ninternational crime on a number of occasions. 136 Though \nnot clearly pronounced on by the Nuremberg Trials, 137 \nthe term “ crime of aggression ” is contained in the ICC \nStatute and therefore falls under its jurisdiction. 138 \n Cyber crimes can also fall under crimes against nations, \nsince in terms of customary international law states are \nobliged to punish individuals committing crimes against \nthird states. 139 Furthermore, workstation-related attacks \nevolved into crimes that are universally recognized to \nbe criminal and therefore against nations. 140 Therefore, \nthanks to the absence of travaux pr é paratoires of the \nRome Statute, the ICC will be able to interpret provisions \nof the statute to the advantage of the international com-\nmunity, allow prosecutions of cyber terrorists, and ensure \ninternational peace and security. \n Other Remedies \n Probably the most effective method of dealing with IW is \nby way of treaties. At the time of this writing, there has \nbeen only one such convention on a truly international \nlevel, the European Convention on Cybercrime 2001. \n The effectiveness of the Convention can be easily seen \nfrom the list of states that have joined and ratified it. By \ninvolving such technologically advanced countries as the \nUnited States, Japan, the United Kingdom, Canada, and \nGermany, the Convention can be said to have gained the \nstatus of instant customary international law, 141 as it adds \n opinio juris links to already existing practice of the states. \n Furthermore, the Convention also urges the member \nstates to adopt uniform national legislation to deal with \nthe ever-growing problem of this century 142 as well as \n 129 P. Macalister-Smith , Encyclopaedia of Public International Law \n362 (2000). \n 130 Military and Paramilitary Activities in and against Nicaragua \n( Nic. v. U.S.A. ), www.icj-cij.org/icjwww/Icases/iNus/inus_ijudgment/\ninus_ijudgment_19860627.pdf (accessed on 11 October 2002). \n 131 M.Dixon , Cases and Materials on International Law 575 (3rd ed. \n2000). \n 132 Barkham , Informational Warfare and International Law , Journal \nof International Law and Politics (2001), p. 80. \n 133 otherwise a reaction of a state would amount to reprisals, that are \nunlawful; see also Nic. v. U.S.A . case in this regard, www.icj-cij.org/\nicjwww/Icases/iNus/inus_ijudgment/inus_ijudgment_19860627.pdf \n(accessed on 11 October 2002). \n 134 J. Rehman, International Human Rights Law: A Practical Approach \n464–465 (2002). \n 135 Rome Statute of the International Criminal Court of 1998 art.7, \n www.un.org/law/icc/statute/english/rome_statute(e).pdf (accessed on \n13 October 2002). \n 136 League of Nations Draft Treaty of Mutual Assistance of 1923, www.\nmazal.org/archive/imt/03/IMT03-T096.htm (accessed on 13 October \n2002); Geneva Protocol for the Pacifi c Settlement of International \nDisputes 1924, www.worldcourts.com/pcij/eng/laws/law07.htm (accessed \non 13 October 2002). \n 137 P. Macalister-Smith , Encyclopaedia of Public International Law \n873–874 (1992). 
\n 138 Art.5(1)(d) of the Rome Statute of the International Criminal Court \n1998, www.un.org/law/icc/statute/english/rome_statute(e).pdf (accessed \non 13 October 2002). \n 139 P. Macalister-Smith , Encyclopaedia of Public International Law \n876 (1992). \n 140 P. Macalister-Smith , Encyclopaedia of Public International Law \n876 (1992). \n 141 http://conventions.coe.int/Treaty/en/Treaties/Html/185.htm \n(accessed on 9 October 2002). \n 142 European Convention on Cybercrime of 2001 art. 23, http://conven-\ntions.coe.int/Treaty/en/Treaties/Html/185.htm (accessed on 9 October \n2002). \n" }, { "page_number": 722, "text": "Chapter | 39 Information Warfare\n689\nprovide a platform for solution of disputes on the inter-\nnational level. 143 Finally, taking the very nature of IW \ninto consideration, “ hard ” international law may be the \nsolution to possible large-scale threats in future. \n The fact that remedies bring legitimacy of a rule can-\nnot be overemphasized, for it is the remedies available to \nparties at the time of a conflict that play a decisive role \nin the escalation of the conflict to possible loss of life. \nBy discussing the most pertinent remedies under inter-\nnational criminal law, the authors have shown that its old \nprinciples are still workable solutions, even for such a \nnew development as the Internet. \n Developing Countries Response \n The attractiveness of looking into developing countries ’ \nresponse to an IW attack lies in the fact that usually \nthese are the countries that appeal to transnational crimi-\nnals due to lack of any criminal sanctions for crimes they \nwant to commit. For purposes of this chapter, the South \nAfrican legal system will be used to answer the question \nof how a developing country would respond to such an \ninstance of IW. \n In a 1989 “ end conscription ” case, South African \ncourts defined war as a “ hostile contest between nations, \nstates or different groups within a state, carried out by \nforce of arms against the foreign power or against an \narmed and organised group within the state. ” 144 In the \n1996 Azapo case, the Constitutional Court, the highest \ncourt of the land, held that it had to consider interna-\ntional law when dealing with matters like these. 145 In the \n2005 Basson case, the Constitutional Court further held \nthat South African courts have jurisdiction to hear cases \ninvolving international crimes, such as war crimes and \ncrimes against humanity. 146 \n A number of legislative provisions in South Africa \nprohibit South African citizens from engaging, directly \nor indirectly, in IW activities. These Acts include the \nInternal Security Intimidation Act 13 of 1991 and the \nRegulation of Foreign Military Assistance Act 15 of \n1998. The main question here is whether the South \nAfrican courts would have jurisdiction to hear matters \nin connection therewith. A number of factors will play \na role. First, if the incident takes place within the air, \nwater, or terra firma space of South Africa, the court \nwould have jurisdiction over the matter. 147 \n The implementation of the Rome Statute Act will \nfurther assist the South African courts to deal with the \nmatter because it confers jurisdiction over the citizens \nwho commit international crimes. It is well known that \ninterference with the navigation of a civil aircraft, for \nexample, is contrary to international law and is clearly \nprohibited in terms of the Montreal Convention. 
148 \n A further reason for jurisdiction is found in the 2004 \nWitwatersrand Local Division High Court decision of \n Tsichlas v Touch Line Media, 149 where Acting Judge \nKuny held that publication on a Web site takes place \nwhere it is accessed. In our case, should the sites in \nquestion be accessed in South Africa, the South African \ncourts would have jurisdiction to hear the matter, pro-\nvided that the courts can effectively enforce its judgment \nagainst the members of the group. \n Finally, in terms of the new Electronic Commu-\nnications and Transactions (ECT) Act, 150 any act or pre-\nparation taken toward the offense taking place in South \nAfrica would confer jurisdiction over such a crime, \nincluding interference with the Internet. This means that \nSouth African courts can be approached if preparation \nfor the crime takes place in South Africa. Needless to \nsay, imprisonment of up to five years would be a com-\npetent sentence for each and every participant of IW, \nincluding coconspirators. 151 \n 7. HOLISTIC VIEW OF INFORMATION \nWARFARE \n This chapter has addressed the four axes of the IW \nmodel 152 presented at the beginning of this discussion: \ntechnical, legal, offensive, and defensive. Furthermore, \nthe specific subgroups of the axes have also been dis-\ncussed. For the complete picture of IW as relevant to \nthe discussion at hand, however, Figure 39.2 places each \nsubgroup into its own field. 153 \n 143 European Convention on Cybercrime of 2001 art. 45, http://conven\ntions.coe.int/Treaty/en/Treaties/Html/185.htm (accessed on 9 October \n2002). \n 144 Transcription Campaign and Another v Minister of Defence and \nAnother 1989 (2) SA 180 (C). \n 145 Azanian People’s Organisation (AZAPO) v Truth and Reconciliation \nCommission 1996 (4) SA 671 (CC). \n 146 State v Basson 2005, available at www.constitutionalcourt.org.za . \n 147 Supreme Court Act 59 of 1959 (South Africa). \n 148 Montreal Convention of 1971. \n 149 Tsichlas v Touch Media 2004 (2) SA 211 (W). \n 150 Electronic Communications and Transactions Act 25 of 2002. \n 151 Electronic Communication and Transaction Act 25 of 2002. \n 152 Supreme Court Act 59 of 1959 (South Africa). \n 153 Implementation of the Rome Statute of the International Criminal \nCourt Act 27 of 2002 (South Africa). \n" }, { "page_number": 723, "text": "PART | VI Physical Security\n690\n 8. CONCLUSION \n This discussion clearly demonstrated that IW is not only \npossible, it has already taken place and is growing inter-\nnationally as a preferred way of warfare. It is clearly \ndemonstrated that successful strategies, offensive or \ndefensive, are dependent on taking a holistic view of the \nmatter. Information security professionals should refrain \nfrom focusing only on the technical aspects of this area, \nsince it is shown that legal frameworks, national as well \nas international, also have to be considered. The prevail-\ning challenge for countries around the globe is to foster \ncollaboration among lawyers, information security pro-\nfessionals, and technical IT professionals. They should \ncontinue striving to at least keep the registry of IW arse-\nnal and remedies updated, which may, in turn, incite \nadversaries to provide us with more material for research. \nInternational\ncourts\nSelf-defense\nPsyops, viruses,\nworms, Trojans,\nDoS, DDoS,\ntime bombs\nIntrusion\ndetection/\nprevention\nsystems\nTechnical\nLegal\nDefensive\nOffensive\n FIGURE 39.2 Holistic view of IW. 
Part VII
Advanced Security

CHAPTER 40 Security Through Diversity
Kevin Noble
CHAPTER 41 Reputation Management
Dr. Jean-Marc Seigneur
CHAPTER 42 Content Filtering
Peter Nicoletti
CHAPTER 43 Data Loss Protection
Ken Perkins

Chapter 40
Security Through Diversity
Kevin Noble
Terremark Worldwide, Inc.

Security through diversity is a calculated and measured response to attacks against the mainstream and is usually used to survive and withstand uniform attacks. This response involves intentionally making things slightly different, or completely different, forcing an attacker to alter a standard attack vector, tactics, and methodology. It can also be a matter of survival; having a very different environment from everyone else might keep you operational. Intentionally going against the trend might be just the amount of diversity that puts your efforts ahead of the game, or it might leave you far behind it. Implementing diversity as a matter of doctrine can be scaled from maintaining a completely different array of equipment for production, test, and development to creating just enough difference to protect information in the event of a disaster.

Where does one choose to deploy the diversity strategy? The most common diversity strategy in use today is that of large-scale businesses that choose to store data far enough away from the original site to be unaffected by natural disasters and phenomena such as power outages. But is this enough? Natural disasters cover only one threat vector to sustainability and operational readiness.

This chapter covers some of the industry trends in adopting diversity in hardware, software, and application deployments. It also covers the risk of uniformity and conformity, and the ubiquitous impact of adopting standard organizational principles without consideration of security.

1. UBIQUITY

Most modern attacks take advantage of the fact that the majority of personal computers on the Internet are in very nearly the same state. That is, home computers tend to be a few patches behind in many applications and in the operating system itself. Home users also tend not to patch software unless prompted by self-updating software. Most home computers also tend to have an antivirus product installed with updated antivirus signatures. Working with this knowledge, an attacker will seek to automate an attack by taking advantage of the available common systems. This way, getting the most out of an attack seems reasonable.

Imagine for a moment that you're an attacker. You'd select the most frequently used operating systems with the most common software packages for your target list. The only thing that changes is the IP address itself. An economist might see the concept as a "probability density function," or what is simply referred to as "getting the most bang for the buck." It is certain that as a given computer system moves away from the densest pool of common systems, an attacker needs to work harder to accommodate the difference, which can reduce the likelihood of compromise.
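To make the densest-pool idea concrete, the short Python sketch below counts how many machines share the single most common operating system and browser pairing. The inventory shown is invented purely for illustration; a real assessment would pull it from an asset inventory. The fraction reported approximates the share of a population that a single exploit aimed at the most popular configuration can reach, and it shrinks as the population becomes more diverse.

from collections import Counter

# Hypothetical inventory: one (operating system, browser) pair per machine.
hosts = [
    ("Windows XP SP2", "IE 6"), ("Windows XP SP2", "IE 6"),
    ("Windows XP SP2", "IE 6"), ("Windows XP SP3", "Firefox 2"),
    ("Windows 2000", "IE 6"), ("Mac OS X 10.5", "Safari 3"),
    ("Red Hat Linux", "Firefox 2"), ("Windows XP SP2", "Firefox 2"),
]

counts = Counter(hosts)
top_config, top_count = counts.most_common(1)[0]
share = top_count / float(len(hosts))

print("Most common configuration:", top_config)
print("Hosts reachable by one exploit aimed at it: %.0f%%" % (share * 100))

On this toy list the top configuration covers a bit over a third of the machines; in a true monoculture it covers nearly all of them, which is exactly the economics the attacker is counting on.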
When does diversity work against you? It might not be possible to quantify the advantage of selecting diversity over ubiquity beyond the cost in procurement, training, and interoperability. It is quite possible that an investment in "bucking the system" and using uncommon systems and services won't just cause issues; it would drive your competitive advantage into oblivion. This is the single biggest reason to avoid security through diversity, and it will be pointed out repeatedly. Security through diversity starts early and is embraced as a matter of survival. Military and financial institutions abide by diversity principles in investments and decision making. Though a threat is not always understood, the lessons learned during an attack tend to live on, forcing change and adaptation that require diversity as a fundamental principle of survival.

The introduction of complexity over engineering rarely leads to any sort of natural diversity. Engineering to requirements with anticipated tolerances might still be the best approach for any design. "All security involves trade-offs." 1 To that end, diversity is not a security strategy in itself but an aspect of defense in depth, part of a holistic approach to security and possibly an insurance policy against unenforceable odds.

2. EXAMPLE ATTACKS AGAINST UNIFORMITY

Ubiquitous systems are good, cheap, replaceable, and reliable — until mass failure occurs. It certainly pays to know that ubiquity and uniformity are the absolute right choices in the absence of threats. That is not the world we occupy, even if we do not acknowledge it. What would it take to survive an attack that had the potential to effectively disrupt a business or even destroy it?

If you operate in a service industry that is Internet based, this question is perhaps what keeps you up at night — an attack against all your systems and services, from which you might not recover. Denial of service (DoS) is a simple and straightforward attack in which an attacker makes enough requests to saturate your network or service to the point at which legitimate business and communications fail. The distributed denial-of-service (DDoS) attack is the same type of attack against a uniform presence in the Internet space, but with many attacking hosts operating in unison against a site or service.

Businesses with real bricks-and-mortar locations in addition to selling goods and services over the Internet can survive a sustained DDoS attack against the Internet-based business, because the bricks-and-mortar transactions can carry the company's survival. Conversely, businesses that depend on ubiquitous credit-card use requiring the merchant authorization process can suffer when the point-of-sale system can't process credit cards. Such a business will simply and routinely have to turn away customers who can only pay by credit card. Yet a business with both a strong Internet and a solid bricks-and-mortar presence can survive an outage through diversity.

For years DDoS was used as a form of extortion. Internet-based gambling businesses were frequent targets of this type of attack and frequently made payouts to criminal attackers. It is common for Internet-based businesses to utilize DDoS mitigation services to absorb or offload the undesired traffic; a minimal sketch of the rate-limiting idea behind such services follows.
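The core of most volumetric mitigation is some form of rate limiting: absorb what fits a budget and shed the rest before it reaches the service. The Python sketch below is a minimal, generic token-bucket limiter; the rate and burst figures are arbitrary examples rather than values taken from this chapter, and commercial scrubbing services are far more elaborate. The trade-off, however, is the same: excess traffic is dropped or diverted so that legitimate requests can still be served.

import time

class TokenBucket:
    """Allow roughly `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = float(rate)          # tokens added per second
        self.capacity = float(capacity)  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill the bucket in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True   # serve the request
        return False      # shed (drop or divert) the request

# Example budget: 100 requests per second with bursts of up to 200.
bucket = TokenBucket(rate=100, capacity=200)
if not bucket.allow():
    pass  # drop the request, return an error, or hand it to a scrubbing tier

A per-source variant of the same bucket is what lets a mitigation layer throttle the flooding hosts while leaving everyone else largely untouched.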
The various means that an attacker \ncan use in combination to conduct DDoS have escalated \ninto a shifting asymmetrical warfare, with each side adapt-\ning to new techniques deployed by the other side. \n Companies have failed to counter the straightforward \nattack and have gone out of business or ceased opera-\ntions. A company called Blue Security Inc. that special-\nized in combating unsolicited email messages (spam) \nby automating a reply message to the senders had its \nsubscribers attacked. Blue Security’s antispam model \nfailed in 2006 when spammers attacked its very custom-\ners, causing the company to shut down the service in the \ninterest of protecting the customers. 2 \n Successfully mitigating and combating an attack is \nachieved by either having enough resources to absorb \nthe attack or offloading the attack at some point prior to \nreaching a site or service. The DDoS attack is partially \nsuccessful where the target is uniformly presented and \nresponsive and does not react fast enough to the attacks. \nA common means to protect against DDoS or Internet-\nbased outages is to have a diverse business model that \ndoes not rely only on the Internet as a means to conduct \nbusiness. Diverse business models may hold considera-\nble cost and bring in differential revenue, but they ensure \nthat one model may in fact support the other through sus-\ntained attacks, disasters, or even tough times. Employing \nthe concept of security through diversity gives decision \nmakers more immediate options. \n Though the DDoS attack represents the most sim-\nplistic and basic attack against an Internet-based institu-\ntion, it does require an attacker to use enough resources \nagainst a given target to achieve bottlenecking or satu-\nration. This represents the immediate and intentional \nattack, with one or more attackers making a concentrated \neffort against a target. \n 3. ATTACKING UBIQUITY WITH \nANTIVIRUS TOOLS \n Attackers use obfuscation, encryption, and compres-\nsion to install malicious code such as viruses, worms, \nand Trojans. These techniques are tactical responses to \nbypass common antivirus solutions deployed by just \nabout everyone. The number of permutations possible \non a single executable file while retaining functionality \nis on the order of tens of thousands, and these changes \ncreate just enough diversity in each iteration to achieve \na successful infection. An attacker needs to mutate \nan executable only enough to bypass detection from \n 2 “ Blue Security, spam victim or just a really bad idea, ” InfoWorld/\nTech Watch article, May 19, 2006. \n 1 Bruce Schneier, Beyond Fear, Springer-Verlag, 2003. \n" }, { "page_number": 728, "text": "Chapter | 40 Security Through Diversity\n695\ns ignature-based antivirus tools, the most common antivi-\nrus solutions deployed. 3 \n It is possible for anyone to test a given piece of \nmalicious code against a litany of antivirus solutions. \nIt is common practice for attackers and defenders both \nto submit code to sites such as www.virustotal.com for \ninspection and detection against 26 or more common \nantivirus solutions. Attackers use the information to ver-\nify whether a specific antivirus product will fail to detect \ncode while defenders inspect suspected binaries. \n At the root of the problem is the signature-based \nmethod used to inspect malicious code. 
Not all antivi-\nrus solutions use signature-based detection exclusively, \nbut in essence, it is the fast, cheap, and, until recently, \nmost effective method. If malware sample A looks like \nsignature X, it is most likely X. A harder and less reli-\nable way to detect malicious code is through a technique \nknown as heuristics . Each antivirus will perform heuris-\ntics in a different manner, but it makes a guess or best-\neffort determination and will classify samples ranging \nfrom safe to highly suspect. Some antivirus solutions \nmay declare a sample malicious, increasing the chances \nof finding a false positive. \n Another factor in analysis is entropy, or how random \nthe code looks upon inspection. Here again you can have \nfalse positives when following the trend to pack, dis-\ntort, encrypt, and otherwise obfuscate malicious code. \nThe measurement of entropy is a good key indicator that \nsomething has attempted to hide itself from inspection. \nKeep in mind that because commercial software uses \nthese techniques as well, you have a chance of false \npositives. \n At the 2008 annual DEFCON conference in Las \nVegas, a new challenge was presented: Teams were pro-\nvided with existing malicious code and modified the code \nwithout changing functionality, to bypass all the antivirus \nsolutions. The team that could best defeat detection won. \nThe contest was called “ race to zero. ” The organizers \nhoped to raise awareness about the decreased reliability \non signature-based antivirus engines. It is very important \nto say again that the code is essentially the same except \nthat it can avoid detection (or immediate detection). \n Given that it is possible and relatively easy for attack-\ners to modify existing malicious code enough to bypass \nall signature-based solutions yet retain functionality, it \ncan be considered a threat against any enterprise that has \na ubiquitous antivirus solution deployment. Infections \nare possible and likely and increasing; the risk is not a \ndoomed enterprise, but usually information disclosure \ncould lead to other things. \n Those who choose Apple’s OS X over PC operating \nsystems enjoy what might be coined in some circles as \nimmunity or resistance in the arena of malicious code. An \nattacker would have to invest time and effort into attack \nstrategies against the Apple platform over and above the \nefforts of attacking Windows, for example. Without get-\nting caught up in market share percentages, Apple is not \nas attractive for attackers (at the time of this writing) from \nthe operating system perspective. \n One might think the application of diversity can be \napplied simply by hosting differential operating systems. \nThis is true but rarely works if derived from a strict secu-\nrity perspective. It could be beneficial to switch entirely \nto a less attacked platform or host a differential operat-\ning system in a single environment. On the sliding scales \nof diversity and complexity, you might have immune \nhosts to one attack type against a given operating sys-\ntem. This also means that internal information technol-\nogy and support teams would have to maintain support \nskills that can support each different platform hosted. \nUsually business decisions make better drivers for using \nand supporting diverse operating systems than a strategy \nof diversity alone. In my observations, routine support \nwill be applied where individuals have stronger skills \nand abilities while other platforms are neglected. \n 4. 
THE THREAT OF WORMS \n In 2003 and 2004, the fast-spreading worm’s probabil-\nity represented the bigger threat for the Internet commu-\nnity at large because patching was not as commonplace \nand interconnectivity was largely ignored compared to \ntoday. A number of self-replicating viruses propagating to \nnew hosts autonomously are known as worms . A worm-\ninfected host would seek to send the worm to other hosts, \nusing methods that would allow for extremely aggressive \nand rapid spread. Infected hosts did not experience much \nin the way of damage or intentional data loss but simply \nwere not able to communicate with other hosts on heav-\nily infected networks. Essentially, the worms became an \nuncontrolled DDoS tool, causing outages and performance \nissues in networks in which the worm gained enough hosts \nto saturate the networks by seeking yet more hosts. \n While responding to the outbreaks of both the \nNachi and MSBlas worms in 2003 for a large business, \nthere was enough statistical information about both \nworms from the various logs, host-based protection, and \npacket captures to reconstruct the infection process (see \n 3 G. Ollmann, “ X-morphic exploitation, ” IBM Global Technology \nServices, May 2007. \n" }, { "page_number": 729, "text": "PART | VII Advanced Security\n696\n Figure 40.1 ). The Nachi worm in particular appeared \nto have an optimized IP address-generating algorithm \nallowing nodes closer on the network to become infected \nfaster. When you consider that the worm could only \ninfect Windows-based systems against the total variety of \nhosts, you end up with a maximum threshold of targets. \nThe network-accessible Windows hosts at the time of the \nworm attack were about 79.8% of the total network; other \nplatforms and systems were based on Unix, printers, and \nrouters. Certainly some of the hosts were patched against \nthis particular vulnerability. Microsoft had released a \npatch prior to the outbreak. But at the time of the Nachi \nworm outbreak, patching was deployed at set intervals \nin excess of 60 days, and the particular vulnerability \nexploited by Nachi was not patched. The theoretical esti-\nmated number of systems that could succumb to infection \nwas right at the total number of Windows systems on the \nnetwork — just under 80%, as shown in Figure 40.2 —\n meaning that the network was fairly uniform. 4 \n Given this scenario, it would stand to reason that the \ninfection would achieve 100% and only depend on each \nnewly infected system coming up with the appropriate \nIP address to infect new hosts. Yet the data reveals that \nthe total infection was only 36% of the total Windows \nsystems and took 17 days to really become effective, as \nshown in Figure 40.3 . It seemed that enough vulnerable \nWindows hosts had to be infected to seed the next wave \nof infected hosts. \n The worm’s pseudorandom IP address selection was \na factor in the success of its spread; you had to have a \nvulnerable system online with a specific IP that was tar-\ngeted at that time. Once enough seed systems became \ninfected, even a poor pseudorandom IP generator would \nhave been successful in allowing the worm to spread at \nspeeds similar to a chemical chain reaction. Systems \nwould become infected within seconds rather than min-\nutes or hours of connecting to a network. \n Other factors for a successful defense against the \nworm included a deployment of host-based firewalls that \nblocked port traffic. The Nachi worm traffic was mostly \n 4 K. 
Noble, "Profile of an intrusion," research paper, Aug. 2003.
FIGURE 40.1 Tracking infected hosts as part of the Nachi outbreak (number of attacks against a given system per day, August 10 through 28, 2003).
FIGURE 40.2 Nachi finding new hosts to infect (detected new infections per day, August 10 through 24, 2003).
from nonlocal untrusted networks, making port filtering an easy block for the attack. Other factors that reduced the spread included the fact that a number of systems, such as mobile laptop computers, did not remain connected after business hours (users taking the laptops home, for example). Emergency patching and efforts to contain the spread by various groups were also a big factor in reducing the number of vulnerable systems. The Nachi worm itself also prevented infection of some systems by simply generating too much traffic, causing a DoS within the network. Network engineers immediately began blocking traffic generated by the Nachi worm and blocking entire subnets altogether. The speed of the infection caused a dramatic increase in traffic on infected networks and induced a reaction by many network engineers looking for the root cause and attacking the problem by blocking specific traffic.

Additional resistance to infection from the Nachi worm in this case was achieved through unintended diversity in time, location, and events surrounding the vulnerable systems. Perhaps we can call this being lucky; vulnerable hosts were protected by not being connected or not needing to connect during the potential infection window. The last interesting thing about worms that achieved mass infection during 2003 is that those worms still generate traffic on the Internet today — perhaps an indication of sustained infection, reinfection, or intentional attacks.

Though unintentional diversity and rapid response to the outbreak were success factors, an emerging response to automated attacks was intrusion detection systems (IDSs) being deployed in greater numbers. Though prevention is ideal, detection is the absolute first and necessary step in the process of defense. A natural transition to automating defense was the mass implementation of intrusion prevention systems (IPSs), which detect and block based on predefined understandings of past attacks and in some cases block based on attack behavior.

Making a choice to be diverse as a means to improve security alone might not be beneficial. If you choose two different backup methodologies or split offices between two different operating systems just for the sake of security, and not with business as the driver, then cost in theory nearly doubles without gaining much. Ideally diversity is coupled with other concepts, such as security.

5. AUTOMATED NETWORK DEFENSE

Computer systems that transact information at great speed have similar properties to chemical reactions that, once started, are quite difficult or impossible to stop.
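The building block of automated network defense is a rule that watches traffic and reacts without a human in the loop. The Python sketch below is a minimal illustration; the window and threshold values are invented for this example, and real IPS engines match signatures and behavior rather than bare counts. It simply blocks a source address once that source exceeds an event count within a sliding window.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10   # how far back to look (illustrative value)
THRESHOLD = 100       # events per source per window before blocking (illustrative value)

events = defaultdict(deque)   # source IP -> timestamps of recent events
blocked = set()

def observe(src_ip, now=None):
    """Record one event from src_ip; return False once the source is blocked."""
    now = time.monotonic() if now is None else now
    q = events[src_ip]
    q.append(now)
    # Drop timestamps that have fallen out of the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) > THRESHOLD:
        blocked.add(src_ip)   # in practice: push a firewall rule or reset the session
    return src_ip not in blocked

Even in this toy form the central tension is visible: a low threshold reacts to attacks sooner but also trips on legitimate bursts of traffic.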
Though an IPS will trade speed for security, it ultimately tries not to impede the traffic overall and has a default threshold for when to allow traffic unabated. This seems to be a default objective for IPSs — trying to keep out most of the bad things while allowing everything else to traverse without interference, reflecting customer demand.

If an IPS took time to inspect every anomaly to the point where overall traffic performance were degraded, it would be advantageous for an attacker to exploit the heavy inspection process to create a performance issue up to the point of denying service. This very consideration causes both the default policies on IPSs and IPS customers to acquiesce and allow a percentage of bad traffic as a trade for performance or connectivity in general.

At first you might be surprised at the position taken by IPS vendors and customers. Denying all anomalous and malicious traffic might be the sales pitch from IPS vendors, but the dichotomy of having a device designed for automated protection being used for automated DoS by an attacker against the very business it was supposed to protect would scare off anyone. IDS as an industry does nothing more than strike a balance between preventing common attacks and allowing everything else — perhaps a worthy goal on some fronts.

FIGURE 40.3 The Nachi worm requires continual infection to achieve network saturation (average duration of infection in days, where detectable, August 11 through 28, 2003).

6. DIVERSITY AND THE BROWSER

An evolutionary adaptation to security has driven attackers to exploit the Web browser. Because everyone has one, the Web browser represents the most promising avenue to information exploitation. Straightforward attacks involving simply spraying an attack across the Internet are not how one exploits browsers. Attacks usually take advantage of vulnerabilities only after a browser retrieves code from a malicious Web site. The remote attack is not a sustainable attack. Most popular browsers that host the means for automatic patching allow the browsers to be secured against known vulnerabilities. Still, not everyone takes advantage of new browser releases that include patches for remote exploits and serious issues. Attacking the user behind the browser along with attacking the browser seems to be a winning combination, yielding a higher percentage of compromised hosts for attackers.

Conceptually, could the browser be the weapon of choice that collapses an entire enterprise? It is possible but highly unlikely. The two most common browsers are Microsoft's Internet Explorer and Mozilla's Firefox, making them the most targeted. Most browsers are not monolithic; adding code to view pages and perform animation is common practice. Browser extensibility allows anyone to integrate software components. Serious vulnerabilities and poor implementation in some of these extensions have led to exploitable code but usually demand a visit from a vulnerable browser.
Since many of us don’t use \nthe same software package added to our browsers, this \nsort of threat, though risky to any enterprise from the \nstandpoint of information disclosure, does not represent \nthe sort of survivability issues security diversity seeks to \nremedy. \n The application of diversity to browsers could be \nremedied by the selection of an uncommon browser or \nby choosing extensibility options offered by vendors \nother than the most common ones. The tradeoff is gain-\ning a host of compatibility issues for the ability to thwart \nbrowser-specific attacks. \n Operating systems (OSs) are the Holy Grail of attacked \nplatforms. All attacks in some form or fashion seek to \ngain authority within the OS or to dominate the OS alto-\ngether. Rootkits seek to hide the behavior of code and \nallow code (usually malicious) to operate with impu-\nnity on a given system. Even when the browser or other \nservices are attacked, it would probably be more advan-\ntageous for the attacker to seek a way into the OS with-\nout detection. \n The Windows OS, with the lion’s share of sys-\ntems, is the ubiquitous platform of choice for attackers. \nMicrosoft has made considerable efforts in the past few \nyears to combat the threats and has introduced technolo-\ngies such as stack cookies, safe handler and chain valida-\ntion, heap protection, data execution prevention (DEP), \nand address space layout randomization (ASLR). Each \nis designed to thwart or deter attacks and protect code \nexecution from exploitation through any vulnerability. \n Of each of these low-level defenses implemented in \ncode today, ASLR seeks to increase security through \ndiversity in the Vista OS. Theoretically, on a given sys-\ntem on which an attack is possible, it will not be possible \nagain on another system or even the same system after a \nlayout change that occurs after a reboot. Certainly alter-\ning the behavior between systems reduces risk. \n The current implementation of ASLR on Vista \nrequires complete randomized process address spacing \nto offer complete security from the next wave of attacks. 5 \nCertainly the balance of security falters and favors the \nattacker when promiscuous code meets any static state or \neven limited entropy of sorts. \n 7. SANDBOXING AND VIRTUALIZATION \n The technique of sandboxing is sometimes used to con-\ntain code or fault isolation. Java, Flash, and other lan-\nguages rely heavily on containing code as a security \nmeasure. Though sandboxing can be effective, it has \nthe same issues as anything else that is ubiquitous — one \nflaw that can be exploited for a single sandbox can be \nexploited for all sandboxes. \n Expanding the concept in a different way is the vir-\ntual hosting of many systems on a single system through \nthe use of a hypervisor . Each instance of an OS con-\nnects to the physical host through the hypervisor, which \nacts as the hardware gateway and as a kernel of sorts. \nFrequently the concept of virtualization is coupled to \nsecurity as the layer of abstraction and offers quite a bit \nof protection between environments in the absence of \nvulnerabilities. \n The concept of virtualization has considerable long-\nterm benefits by offering diversity within a single host \nbut requires the same diligence as any physical system \n 5 M. Dowd and A. Sotirov, 2008 Black Hat paper, “ Bypassing browser \nmemory protections, ” http://taossa.com/archive/bh08sotirovdowd.pdf . 
\n" }, { "page_number": 732, "text": "Chapter | 40 Security Through Diversity\n699\ncompounded by the number of virtual systems hosted. \nIt is fair to say that each host that contains vulnerabili-\nties may therefore put other hosts or the entire core of \nthe hosting physical system at risk. The risk is no dif-\nferent than an entire room full of interconnected sys-\ntems that have emergent properties. Frequently, security \nprofessionals and attackers alike use virtualization as a \nplatform for testing code and the ability to suspend and \nrecord activity, similar to a VCR. \n In many cases the push to virtualize systems and serv-\nices is driven by sharing unused resources as a business \ndecision. However, with increased frequency, security is \nconsidered a benefit of virtualization. This is true only in \nthe context of virtual environments achieving isolation \nbetween guests or host and guest. This is a clear goal \nof all the hypervisors on the market, from the VMware \nproduct line to Windows Virtualization products. \n For quite some time it was possible for security \nresearchers to work with malicious samples in a virtu-\nalized state. This allowed researchers to essentially use \nthe context of a computer running on a computer with \nfeatures similar to a digital video recorder, where time \n(for the malicious sample) can be recorded and played \nback at a speed of their choosing. That was true until the \nadvent of malicious code with the ability to test whether \nit was hosted in a virtualized environment or not. Recent \nresearch has shown that it is possible for malicious code \nto escape the context of the virtual world and attack the \nhost system or at least glean information from it. \n Two competing factors nullify using virtualized envi-\nronments as a means of archiving simple security through \ndiversity. It will continue to be possible to detect hosting \nin a virtual environment, and it is possible to find the \nmeans to exploit virtual environments, even if the diffi-\nculty increases. However, this just means that you can’t \nrely on the hypervisor alone. Vulnerabilities are at the \nheart of all software, and the evolutionary state of attack-\ning the virtualized environments and the hypervisor will \ncontinue to progress. \n The decline of virtualized environments as a security \ntool was natural as so many in the security field became \ndependent on hypervisors. In response, the security field \nwill essentially adapt new features and functions to off-\nset vulnerabilities and detect attacks. Examples include \nimproved forensics and hosting virtualized environments \nin ways to avoid detection. \n In nature, colorful insects represent a warning to oth-\ners of toxicity or poison if eaten. Similarly, some in the \nsecurity field have taken to setting virtualization flags on \nreal, physical machines simply to foil malicious code. \nThe more hostile malicious code will shut itself down \nand delete itself when it determines it is in a virtual host, \nthus preventing some infections. \n 8. DNS EXAMPLE OF DIVERSITY \nTHROUGH SECURITY \n It is fair to say that the Internet requires a means to resolve \nIP addresses to names and names back to IP addresses. \nThe resolving capabilities are solved by Domain Name \nServices with a well-defined explanation on how any \nDNS is supposed to function being published and publicly \navailable. 
In most cases you may choose to use a provid-\ner’s implementation of DNS as a resolver or deploy your \nown to manage internal names and perform a lookup from \nother domains. \n If prior to 2008 you had selected the less popular \nDJBDNS 6 over the more popular BIND, you would have \nbeen inoculated (for the most part 7 ) against the DNS \ncache-poisoning attack made famous by Dan Kaminski. \nDepending on your understanding of the attack against \nwhat you are trying to protect, theoretically all information \ntransacted over the Internet was at the complete mercy of \nan attacker. Nothing could be trusted. This is actually not \nthe limit of the capabilities, but it is the most fundamen-\ntal. The threat to survivability was real and caused many \nto consider secure DNS alternatives after the attack was \nmade public. \n Rapid and automated response is the most common \ndefense technique. Making the decision to act late is still \nbeneficial but risky because the number of attackers capa-\nble of performing the attack increases with time. Reaction \nis both a good procedural defense and offers immunity and \nlessons learned. As part of the reaction, you could assume \nthat the DNS is not trusted and continue to operate with \nthe idea that some information not be from the intended \nsources. You could also make a decision to disconnect \nfrom the Internet until the threat is mitigated to a satisfac-\ntory level. Neither would seem reasonable, but it is impor-\ntant to know what threat would constitute such a reaction. \n 9. RECOVERY FROM DISASTER IS \nSURVIVAL \n Disaster recovery is often thought of as being able to \nrecover from partial data loss or complete data loss by \n 6 D. J. Bernstein, http://cr.yp.to/djbdns.html . \n 7 Though DNS queries might be verifi ed where DJBDNS was \ndeployed, the upstream DNS server could still be vulnerable, making it \nimportant to know from where you get your DNS names. \n" }, { "page_number": 733, "text": "PART | VII Advanced Security\n700\nrestoring data from tape. With the considerably lower cost \nof dense media, it is now possible to continuously stream \na copy of the data and recover at any point in time rather \nthan the scheduled time associated with evening backup \nevents. In many cases the backup procedure is tested fre-\nquently, whereas the recovery procedure is not. \n Unfortunately, many assume that simply having \nbackups means that recovery is inevitable or a foregone \nconclusion. For others, the risk associated with recovery \nhas led to many organizations never testing the backups \nfor fear of disruption or failure. Perhaps in the interest \nof diversity from the norm it is beneficial to frequently \nand procedurally test restore operations. Though secu-\nrity diversity is a survival technique, recovery is the par-\namount survival tool in everyone’s arsenal. \n 10. CONCLUSION \n Disaster recovery seems the ideal area in which to insti-\ntute a separate and diverse architecture from that of a \nproduction environment. Segregation between envi-\nronments offers a tangible boundary that accounts for \nattacks, unintended consequences, and disasters, but it \nis only part of a solution. As Dan Greer indicated in his \nessay on the evolution of security, 8 we already have an \nevolutionary approach to systems by centralizing enter-\nprises into safe, climate-controlled environments. We \nprotect systems with IDS all while making copies of \ncritical data and systems, just in case. 
Making changes \nto systems as we acquire them is rarely undertaken to \nthe level necessary to ensure survival; most systems have \nvery few changes from the “ out-of-box ” state or factory \ndefault because we fear that the changes will make the \nsystem unstable or ineffective. \n The best ways to make the leap to complete diversity \nwill inevitably be performed on as many fronts as possi-\nble and at the lowest levels — differential hardware, plat-\nforms, and interfaces. At the higher levels, consideration \nfor a process to apply hygiene to each protocol and every \npiece of data, to change states from the original for-\nmat into anything other than the original, to ensure that \nany native attack code becomes inoculated. Adaptation \nand rapid response might just be enough diversity for \nsurvival. \n In selecting diversity and all the investment and \nissues that go along with it, an instant beneficial byprod-\nuct is a rich set of choices in many areas, not just secu-\nrity. Decision makers have options not available to \nmonocultural networks and systems. For example, if \nyou have deployed in production at least two different \nmanufacturers, routers or firewalls, you will have people \ntrained specifically for each or both. Instead of a com-\npetitive nature of driving out competition, you have the \nability to match the appropriate models to various parts \nof a given network and not depend on the product cat-\nalog of a single vendor. In the immediate situation of \nthreats to an entire product line, a diverse decision proc-\ness such as exchanging routers is available. Someone \nwith a single affected vendor has a limited choice \nbracket of solutions. Additionally, anyone trained or cer-\ntified in more than one company’s equipment portfolio \ncan more easily adapt to any additional needs increasing \nchoices and options. \n Of all the security diversity solutions available, per-\nhaps having a skilled and adaptable workforce trained in \nall the fundamental aspects of computer security offers \nthe best solution. The simplistic statement of “ the best \ndefense is a good offense ” means that security profes-\nsionals should be able to defend from attacks, under-\nstand attacks, and be prepared to perform the forensic \nanalysis and reverse-engineering needed to understand \nattacks. \n Having both an archive of skills and knowledge \nalong with just-in-time knowledge is the true application \nof diversity in a security situation. Execution of a diverse \nskill set during a given threat to survival makes all the \ndifference in the world. \n \n 8 D. Greer, “ The evolution of security, ” ACM Queue , April 2007 \n" }, { "page_number": 734, "text": "701\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Reputation Management \n Dr. Jean-Marc Seigneur \n University of Geneva \n Chapter 41 \n Reputation management is a rather new field in the \nrealm of computer security, mainly due to the fact that \ncomputers and software are more and more involved in \nopen networks. In these open networks, any encountered \nusers may become part of the user base and associated \ninteractions, even if there is no a priori information \nabout those new users. The greater openness of these \nnetworks implies greater uncertainty regarding the out-\ncomes of these interactions. 
Traditional security mecha-\nnisms, for example, access control lists, working under \nthe assumption that there is a priori information about \nwho is allowed to do what, fail in these open situations. \n Reputation management has been used for ages in \nthe real world to mitigate the risk of negative interaction \noutcomes among an open number of potentially highly \nprofitable interaction opportunities. Reputation manage-\nment is now being applied to the open computing world \nof online interactions and transactions. After the intro-\nduction, this chapter discusses the general understanding \nof the notion of reputation. Next, we explain where this \nconcept of reputation fits into computer security. Then \nwe present the state of the art of robust computational \nreputation. In addition, we give an overview of the cur-\nrent market for online reputation services. The conclu-\nsion underlines the need to standardize online reputation \nfor increased adoption and robustness. \n During the past three decades, the computing environ-\nment has changed from centralized stationary computers \nto distributed and mobile computing. This evolution has \nprofound implications for the security models, policies, \nand mechanisms needed to protect users ’ information \nand resources in an increasingly globally interconnected, \nopen computing infrastructure. In centralized station-\nary computer systems, security is typically based on the \nauthenticated identity of other parties. Strong authenti-\ncation mechanisms, such as Public Key Infrastructures \n(PKIs), 1 , 2 have allowed this model to be extended to dis-\ntributed systems within a single administrative domain \nor within a few closely collaborating domains. However, \nsmall mobile devices are increasingly being equipped \nwith wireless network capabilities that allow ubiquitous \naccess to corporate resources and allow users with simi-\nlar devices to collaborate while on the move. Traditional, \nidentity-based security mechanisms cannot authorize \nan operation without authenticating the claiming entity. \nThis means that no interaction can take place unless both \nparties are known to each others ’ authentication frame-\nwork. Spontaneous interactions would therefore require \nthat a single or a few trusted Certificate Authorities \n(CAs) emerge, which, based on the inability of a PKI \nto emerge over the past decade, seems highly unlikely \nfor the foreseeable future. In the current environment, \na user who wants to partake in spontaneous collabora-\ntion with another party has the choice between enabling \nsecurity and thereby disabling spontaneous collaboration \nor disabling security and thereby enabling spontaneous \ncollaboration. \n The state of the art is clearly unsatisfactory; instead, \nmobile users and devices need the ability to autonomously \nauthenticate and authorize other parties that they encounter \non their way, without relying on a common authentication \ninfrastructure. The user’s mobility implies that resources \nleft in the home environment must be accessed via inter-\nconnected third parties. When the user moves to a for-\neign place for the first time, it is highly probable that the \nthird parties of this place are a priori unknown strangers. \nHowever, it is still necessary for a user to interact with \n 1 C. Ellison, and B. Schneier, “ Ten risks of PKI: What you’re not \nbeing told about Public Key Infrastructure ” , Computer Security \nJournal , Winter (2000). \n 2 R. Housley, and T. 
Polk, Planning for PKI: Best Practices Guide for \nDeploying Public Key Infrastructure , Wiley (2001). \n" }, { "page_number": 735, "text": "PART | VII Advanced Security\n702\nthese strangers to, for example, access her remote home \nenvironment. It is a reality that users can move to poten-\ntially harmful places; for example, by lack of information \nor due to uncertainty, there is a probability that previously \nunknown computing third parties used to provide mobile \ncomputing in foreign places are malicious. The assump-\ntion of a known and closed computing environment held \nfor fixed, centralized, and distributed computers until the \nadvent of the Internet and more recently mobile comput-\ning. Legacy security models and mechanisms rely on the \nassumption of closed computing environments, where it \nis possible to identify and fortify a security perimeter that \nprotects against potentially malicious entities. However, in \nthese models, there is no room for “ anytime, anywhere ” \nmobility. Moreover, it is supposed that inside the secu-\nrity perimeter there is a common security infrastructure, \na common security policy, or a common jurisdiction in \nwhich the notion of identity is globally meaningful. It \ndoes not work in the absence of this assumption. \n A fundamental requirement for Internet and mobile \ncomputing environments is to allow for potential interac-\ntion and collaboration with unknown entities. Due to the \npotentially large number of previously unknown entities \nand for simple economic reasons, it makes no sense to \nassume the presence of a human administrator who con-\nfigures and maintains the security framework for all users \nin the Internet, for example, in an online auction situa-\ntion or even in proximity, as when a user moves within \na city from home to workplace. This means that either \nthe individuals or their computing devices must decide \nabout each of these potential interactions themselves. This \napplies to security decisions, too, such as those concern-\ning the enrollment of a large number of unknown entities. \nThere is an inherent element of risk whenever a comput-\ning entity ventures into collaboration with a previously \nunknown party. One way to manage that risk is to develop \nmodels, policies, and mechanisms that allow the local \nentity to assess the risk of the proposed collaboration and \nto explicitly reason about the trustworthiness of the other \nparty, to determine whether the other party is trustworthy \nenough to mitigate the risk of collaboration. Formation of \ntrust may be based on previous experience, recommen-\ndations from reachable peers, or the perceived reputa-\ntion of the other party. Reputation, for example, could be \nobtained through a reputation system such as the one used \non eBay. 3 This chapter focuses on this new approach to \ncomputer security — namely, reputation management. \n 1. THE HUMAN NOTION OF \nREPUTATION \n Reputation is an old human notion; the Romans called it \n reputatio , as in “ reputatio est vulgaris opinio ubi non est \nveritas . ” 4 Reputation may be considered a social con-\ntrol mechanism, 5 where it is better to tell the truth than \nto have the reputation of being a liar. That social control \nmechanism may have been challenged in the past by the \nfact that people could change their physical locale to clear \ntheir reputation. 
However, as we move toward an informa-\ntion society world, changing locales should have less and \nless impact in this regard because reputation information \nis no longer bound to a specific location, which is also \ngood news for reputable people who have to move to other \nregions for other reasons such as job relocation. For exam-\nple, someone might want to know the reputation of a per-\nson he does not know, especially when this person is being \nconsidered to carry out a risky task among a set of poten-\ntial new collaborators. Another case may be that the repu-\ntation of a person is simply gossiped about. The reputation \ninformation may be based on real, biased, or faked “ facts, ” \nperhaps faked by a malicious recommender who wants to \nharm the target person or positively biased by a recom-\nmender who is a close friend of the person to be recom-\nmended. The above Latin quote translates to “ reputation \nis a vulgar opinion where there is no truth. ” 6 The target \nof the reputation may also be an organization, a product, \na brand, or a location. The source of the reputation infor-\nmation may not be very clear; it could come from gossip \nor rumors, the source of which is not exactly known, or \nit may come from a known group of people. When the \nsource is known, the term recommendation can be used. \nReputation is different from recommendation, which is \nmade by a known specific entity. Figure 41.1 gives an \noverview of the reputation primitives. \n As La Rochefoucauld wrote 7 a long time ago, rec-\nommending is also a trusting behavior. It has not only \nan impact on the recommender’s overall trustworthiness \n(meaning it goes beyond recommending trustworthiness) \n 3 P. Resnick, R. Zeckhauser, J. Swanson, and K. Lockwood, The \nValue of Reputation on eBay: A Controlled Experiment , Division of \nResearch, Harvard Business School (2003). \n 4 M. Bouvier, “ Maxims of law, ” Law Dictionary (1856). \n 5 K. Kuwabara, “ Reputation: Signals or incentives? ” In The Annual \nMeeting of the American Sociological Association (2003). \n 6 M. Bouvier, “ Maxims of law, ” Law Dictionary (1856). \n 7 Original quotation in French: La confi ance ne nous laisse pas tant de \nlibert é , ses r è gles sont plus é troites, elle demande plus de prudence et \nde retenue, et nous ne sommes pas toujours libres d’en disposer: il ne \ns’agit pas de nous uniquement, et nos int é r ê ts sont m ê l é s d’ordinaire \navec les int é r ê ts des autres. Elle a besoin d’une grande justesse pour \nne livrer pas nos amis en nous livrant nous-m ê mes, et pour ne faire \npas des pr é sents de leur bien dans la vue d’augmenter le prix de ce \nque nous donnons. \n" }, { "page_number": 736, "text": "Chapter | 41 Reputation Management\n703\nbut also on the overall level of trust in the network of \nthe involved parties. La Rochefoucauld highlighted that \nwhen one recommends another, they should be aware \nthat the outcome of their recommendation will reflect on \ntheir own trustworthiness and reputation, since they are \npartly responsible for this outcome. Benjamin Franklin \nnoted that each time he made a recommendation, his \nrecommending trustworthiness was impacted: “ In conse-\nquence of my crediting such recommendations, my own \nare out of credit. ” 8 However, his letter underlines that still \nhe had to make recommendations about not very well-\nknown parties because they made the request and not \nmaking recommendations could have upset them. 
This \nis in line with Covey’s “ Emotional Bank Account, ” 9 , 10 \nwhere any interaction modifies the amount of trust \nbetween the interacting parties and can be seen as favor \nor disfavor — a deposit or withdrawal. As Romano under-\nlined in her thesis, there are many definitions of trust in a \nwide range of domains, 11 for example, psychology, eco-\nnomics, or sociology. In this chapter, we use Romano’s \ndefinition of trust, which is supposed to integrate many \naspects of previous work on trust research: \n Trust is a subjective assessment of another’s influence in \nterms of the extent of one’s perceptions about the quality \nand significance of another’s impact over one’s outcomes in \na given situation, such that one’s expectation of, openness \nto, and inclination toward such influence provide a sense of \ncontrol over the potential outcomes of the situation. 12 \n In social research, there are three main types of trust: \ninterpersonal trust, based on the outcomes of past interac-\ntions with the trustee; dispositional trust, provided by the \ntrustor’s general disposition toward trust, independent of \nthe trustee; and system trust, provided by external means \nsuch as insurance or laws. 13 Depending on the situation, \na high level of trust in one of these types can become suf-\nficient for the trustor to make the decision to trust. When \nthere is insurance against a negative outcome or when the \nlegal system acts as a credible deterrent against undesir-\nable behavior, it means that the level of system trust is \nhigh and the level of risk is negligible; therefore the levels \nof interpersonal and dispositional trust are less important. \nIt is usually assumed that by knowing the link to the real-\nworld identity, there is insurance against harm that may \nbe done by this entity. In essence, this is security based \non authenticated identity and legal recourse. In this case, \nthe level of system trust seems to be high, but one may \nargue that in practice the legal system does not provide a \ncredible deterrent against undesirable behavior, that is, it \nmakes no sense to sue someone for a single spam email, \nsince the effort expended to gain redress outweighs the \nbenefit. \n The information on the outcomes of past interactions \nwith the trustee that are used for trust can come from dif-\nferent sources. First, the information on the outcomes \nmay be based on direct observations — when the trus-\ntor has directly interacted with the requesting trustee and \npersonally experienced the observation. Another type \nof observation is when a third party himself observes an \ninteraction between two parties and infers the type of out-\ncome. Another source of information may be specific \nrecommenders who report to the trustor the outcomes of \n 8 B. Franklin, The Life and Letters of Benjamin Franklin. , G. M. Hale & \nCo., 1940. \n 9 S. R. Covey, The seven habits of highly effective people (1989). \n 10 J. Seigneur, J. Abendroth, and C. D. Jensen, “ Bank accounting \nand ubiquitous brokering of trustos ” (2002) 7th Cabernet Radicals \nWorkshop . \n 11 D. M. Romano, The Nature of Trust: Conceptual and Operational \nClarifi cation (2003). \n 12 D. M. Romano, The Nature of Trust: Conceptual and Operational \nClarifi cation (2003). 
FIGURE 41.1 Overview of the reputation primitives: (1) hear or request the reputation of a person, or the ranking of people according to their reputation; (2) gossip or answer about the reputation of a person, or rank people according to their reputation; (3) someone is chosen to carry out a risky task.

13 D. H. McKnight, and N. L. Chervany, “What is trust? A conceptual analysis and an interdisciplinary model,” In The Americas conference on information systems (2000).

interactions that have not been directly observed by the trustor but by themselves or other recommenders. In this case, care must be taken not to count twice or many more times the same outcomes reported by different recommenders.

Finally, reputation is another source of trust information, but it is more difficult to analyze because generally it is not exactly known who the recommenders are, and the chance of counting the same outcomes of interactions many times is higher. As said in the introduction, reputation may be biased by faked evidence or other controversial influencing means. Reputation evidence is the riskiest type of evidence to process. When the evidence recommender is known, it is possible to take into account the recommender's trustworthiness. Since some recommenders are more or less likely to produce good recommendations, even malicious ones, the notion of recommending trustworthiness mitigates the risk of bad or malicious recommendations. Intuitively, recommendations must only be accepted from senders that the local entity trusts to make judgments close to those that it would have made about others. We call the trust in a given situation the trust context. For example, recommending trustworthiness happens in the context of trusting the recommendation of a recommender. For our discussion in the remainder of this chapter, we define reputation as follows:

Reputation is the subjective aggregated value, as perceived by the requester, of the assessments by other people, who are not exactly identified, of some quality, character, characteristic or ability of a specific entity with whom the requester has never interacted previously.

To be able to perceive the reputation of an entity is only one aspect of reputation management. The other aspects of reputation management for an entity consist of the following:

● Monitoring the entity's reputation as broadly as possible in a proactive way

● Analyzing the sources spreading the entity's reputation

● Influencing the number and content of these sources to spread an improved reputation

Therefore, reputation management involves some marketing and public relations actions. Reputation management may be applied to different types of entities: personal reputation management, which is also called “personal branding,” 14 or business reputation management. It is now common for businesses to employ full-time staff to influence the company's reputation via the traditional media channels. Politicians and stars also make use of public relations services.
For individuals, in the past few years \nmedia have become available to easily retrieve informa-\ntion, but as more and more people use the Web and leave \ndigital traces, it now becomes possible to find information \nabout any Web user via Google. For example, in a recent \nsurvey of 100 executive recruiters, 15 77% of these execu-\ntive recruiters declared that they use search engines to \nlearn more about candidates. \n 2. REPUTATION APPLIED TO THE \nCOMPUTING WORLD \n Trust engines, based on computational models of the \nhuman notion of trust, have been proposed to make secu-\nrity decisions on behalf of their owners. For example, the \nEU-funded SECURE project 16 has built a generic and \nreusable trust engine that each computing entity would \nrun. These trust engines allow the entities to compute \nlevels of trust based on sources of trust evidence, that \nis, knowledge about the interacting entities: local obser-\nvations of interaction outcomes or recommendations. \nBased on the computed trust value and given a trust pol-\nicy, the trust engine can decide to grant or deny access to \na requesting entity. Then, if access is given to an entity, \nthe actions of the granted entity are monitored and the \noutcomes, positive or negative, are used to refine the trust \nvalue. The computed trust value represents the interper-\nsonal trust part and is generally defined as follows: \n ● A trust value is an unenforceable estimate of the \nentity’s future behavior in a given context based on \npast evidence. \n ● A trust metric consists of the different computations \nand communications carried out by the trustor (and \nher network) to compute a trust value in the trustee. \n Figure 41.2 depicts the high-level view of a compu-\ntational trust engine called when: \n ● A requested entity has to decide what action should \nbe taken due to a request made by another entity, the \nrequesting entity \n ● The decision has been decided by the requested \nentity \n 14 T. Peters, “ The brand called you, ” Fast Company (1997). \n 15 Execunet, www.execunet.com . \n 16 J. M. Seigneur, Trust, Security and Privacy in Global Computing \n(2005). \n" }, { "page_number": 738, "text": "Chapter | 41 Reputation Management\n705\n ● Evidence about the actions and the outcomes is \nreported \n ● The trustor has to select a trustee among several \npotential trustees \n A number of subcomponents are used for the preced-\ning cases: \n ● A component that is able to recognize the con-\ntext, especially to recognize the involved entities. \nDepending on the confidence level in recognition of \nthe involved entities, for example, the face has only \nbeen recognized with 82% of confidence, which may \nimpact the overall trust decision. Context informa-\ntion may also consist of the time, the location, and \nthe activity of the user. 17 \n ● Another component that can dynamically compute \nthe trust value, that is, the trustworthiness of the \nrequesting entity based on pieces of evidence (for \nexample, direct observations, recommendations or \nreputation). \n ● A risk module that can dynamically evaluate the risk \ninvolved in the interaction based on the recognized \ncontext. Risk evidence is also needed. \n The chosen decision should maintain the appropriate \ncost/benefit ratio. In the background, another component \nis in charge of gathering and tracking evidence: recom-\nmendations and comparisons between expected outcomes \nof the chosen actions and real outcomes. 
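As a rough illustration of the grant/deny loop just described, the sketch below aggregates observed outcomes into a trust value, compares it with a policy threshold, and refines the value as new outcomes arrive. The proportion-of-positive-outcomes metric and the 0.5 "no information" default are arbitrary simplifications for the example, not the SECURE trust engine's actual metric; risk evaluation and context recognition are omitted.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    positive: int = 0   # observed good outcomes
    negative: int = 0   # observed bad outcomes

@dataclass
class TrustEngine:
    """Toy trust engine: evidence -> trust value -> policy decision -> update."""
    evidence: dict = field(default_factory=dict)

    def trust_value(self, entity: str) -> float:
        ev = self.evidence.get(entity, Evidence())
        total = ev.positive + ev.negative
        return 0.5 if total == 0 else ev.positive / total   # 0.5 = no information

    def decide(self, entity: str, required_trust: float) -> bool:
        """Grant the request only if the computed trust value meets the policy."""
        return self.trust_value(entity) >= required_trust

    def observe(self, entity: str, outcome_ok: bool) -> None:
        """Record the monitored outcome of a granted interaction."""
        ev = self.evidence.setdefault(entity, Evidence())
        if outcome_ok:
            ev.positive += 1
        else:
            ev.negative += 1

if __name__ == "__main__":
    engine = TrustEngine()
    engine.observe("seller42", True)
    engine.observe("seller42", True)
    engine.observe("seller42", False)
    print(round(engine.trust_value("seller42"), 2))        # 0.67
    print(engine.decide("seller42", required_trust=0.6))   # True: grant the request
```

A fuller engine would also weight this value against dispositional and system trust, which the following paragraphs discuss.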
This evidence is used to update risk and trust information. Thus, trust and risk follow a managed life cycle.

Depending on dispositional trust and system trust, the weight of the trust value in the final decision may be small. The level of dispositional trust may be set due to two main facts. First, the user manually sets a general level of trust, which is used in the application to get the level of trust in entities, independently of the entities. Second, the current balance of gains and losses is very positive and the risk policy allows any new interactions as long as the balance is kept positive. Marsh uses the term basic trust 18 for dispositional trust; it may also be called self-trust.

Generally, as introduced by Rahman and Hailes, 19 there are two main contexts for the trust values: direct, which is about the properties of the trustee, and recommend, which is the equivalent of recommending trustworthiness. In their case, recommending trustworthiness is based on the consistency of the “semantic distance” between the real outcomes and the recommendations that have been made. The default metric for consistency is the standard deviation based on the frequency of specific semantic distance values: the higher the consistency, the smaller the standard deviation and the higher the trust value in recommending trustworthiness.

As said previously, another source for trust in human networks consists of real-world recourse mechanisms such as insurance or legal actions. Traditionally, it is assumed that if the actions made by a computing entity are bound to a real-world identity, the owner of the faulty computing entity can be brought to court and reparations are possible. In an open environment with no unique authority, the feasibility of this approach is questionable. An example where prosecution is ineffective occurs when email spammers do not mind moving operations abroad, where antispam laws are less developed, to escape any risk of prosecution. It is a fact that worldwide there are multiple jurisdictions. Therefore, security based on authenticated identity may be superfluous. Furthermore, there is the question of which authority is in charge of certifying the binding with the real-world identity, since there are no unique global authorities. “Who, after all, can authenticate U.S. citizens abroad? The UN? Or thousands of pairwise national cross-certifications?” 20

More important, is authentication of the real-world identity necessary to be able to use the human notion of trust?

17 A. K. Dey, “Understanding and using context,” Personal and Ubiquitous Computing Journal (2001).

18 S. Marsh, Formalizing Trust as a Computational Concept (1994).

19 A. Rahman, and S. Hailes, Using Recommendations for Managing Trust in Distributed Systems (1997).

FIGURE 41.2 High-level view of a computational trust engine (within the trust engine security perimeter: context recognition, trust value computation, risk computation, decision-making, and evidence tracking, exchanging requests, evidence, and decisions).
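Looking back at the consistency idea above, a recommender whose ratings always deviate from reality by the same amount is easier to calibrate than one whose deviations scatter. The toy function below renders that as a weight derived from the standard deviation of the semantic distances; the 0-to-4 rating scale and the linear mapping to a 0..1 weight are invented for the example and are not Rahman and Hailes' exact formulation.

```python
import statistics

def recommending_trustworthiness(recommended: list, observed: list) -> float:
    """Map the spread of semantic distances to a 0..1 recommender weight.

    Ratings are assumed to lie on a 0..4 scale, so the distance between a
    recommendation and the observed outcome lies in [-4, 4]; a perfectly
    consistent recommender (constant distance) gets weight 1.0.
    """
    distances = [r - o for r, o in zip(recommended, observed)]
    if len(distances) < 2:
        return 0.5                       # not enough history: neutral weight
    spread = statistics.pstdev(distances)
    max_spread = 4.0                     # worst case on a 0..4 rating scale
    return max(0.0, 1.0 - spread / max_spread)

# A recommender whose ratings always sit one notch above reality is more
# usable than one whose ratings scatter, even if the latter is sometimes exact.
print(round(recommending_trustworthiness([4, 3, 4, 2], [3, 2, 3, 1]), 2))  # consistent: 1.0
print(round(recommending_trustworthiness([4, 0, 3, 1], [1, 3, 0, 3]), 2))  # erratic: about 0.31
```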
Indeed, a critical element for the use of trust is \nto retrieve trust evidence on the interacting entities, but \ntrust evidence does not necessarily consist of informa-\ntion about the real-world identity of the owner; it may \nsimply be the count of positive interactions with a pseu-\ndonym, as defended in 21 . As long as the interacting \ncomputing entities can be recognized, direct observa-\ntions and recommendations can be exchanged to build \ntrust, interaction after interaction. This level of trust can \nbe used for trusting decisions. Thus, trust engines can \nprovide dynamic protection without the assumption that \nreal-world recourse mechanisms, such as legal recourse, \nare available in case of harm. \n The terms trust/trusted/trustworthy , which appear \nin the traditional computer science literature, are not \ngrounded on social science and often correspond to an \nimplicit element of trust. For example, we have already \nmentioned the use of trusted third parties, called CAs, \nwhich are common in PKIs. Another example is trusted \ncomputing , 22 the goal of which is to create enhanced hard-\nware by using cost-effective security hardware (more or \nless comparable to a smart-card chip) that acts as the “ root \nof trust. ” They are trusted means that they are assumed \nto make use of some (strong) security protection mecha-\nnisms. Therefore they can/must implicitly be blindly \ntrusted and cannot fail. This cannot address security when \nit is not known who or whether or not to blindly trust. The \nterm trust management has been introduced in computer \nsecurity by Blaze et al., 23 but others have argued that their \nmodel still relies on an implicit notion of trust because it \nonly describes “ a way of exploiting established trust rela-\ntionships for distributed security policy management with-\nout determining how these relationships are formed. ” 24 \nThere is a need for trust formation mechanisms from \nscratch between two strangers. Trust engines build trust \nexplicitly based on evidence either personal, reported by \nknown recommenders or through reputation mechanisms. \n As said previously, reputation is different than a rec-\nommendation made by a known specific entity. However, \nin the digital world, it is still less easy to exactly certify \nthe identity of the recommender and in many cases the \nrecommender can only be recognized to some extent. \nThe entity recognition occurs with the help of a con-\ntext recognition module, and the level of confidence in \nrecognition may be taken into account in the final trust \ncomputation — for example, as done in advanced compu-\ntational trust engines. 25 In the remainder of this chapter, \nfor simplicity’s sake (since this chapter focuses on reputa-\ntion rather than trust), we assume that each entity can be \nrecognized with a perfect confidence level in recognition. \nThus, we obtain the two layers depicted in Figure 41.3 : \nthe identity management layer and the reputation man-\nagement layer. In these layers, although we mention the \nworld identity , we do not mean that the real-world identity \nbehind each entity is supposed to be certified; we assume \nthat it is sufficient to recognize the entity at a perfect level \nof confidence in recognition — for example, if a recom-\nmendation is received from an eBay account, it is sure \nthat it comes from this account and that it is not spoofed. 
\nThere are different round-edged rectangles at the top of \nthe reputation layer that represent a few of the different \nreputation services detailed later in the chapter. There are \nalso a number of round-edged rectangles below the iden-\ntity layer that represent the various types of authentication \nschemes that can be used to recognize an entity. Although \npassword-based or OpenID 26 -based 27 authentication may \nbe less secure than multimodal authentication combining \nbiometrics, smart cards, and crypto-certificates, 28 since \nwe assume that the level of confidence in recognition is \nperfect, as stated previously, the different identity man-\nagement technologies are abstracted to a unique identity \nmanagement layer for the remainder of the chapter. \n As presented previously, reputation management goes \nbeyond mere reputation assessment and encompasses \nmonitoring, analysis, and influence of reputation sources. \nIt is the reason that we introduce the following categories, \ndepicted in Figure 41.4 , for online reputation services: \n ● Reputation calculation . Based on evidence gathered \nby the service, the service either computes a value \n 25 J. M. Seigneur, Trust, Security and Privacy in Global Computing \n(2005). \n 26 Refer to Chapter 17 of this book on identity management to learn \nmore about OpenID. \n 27 OpenID, http://openid.net/ . \n 28 J. M. Seigneur, Trust, Security and Privacy in Global Computing \n(2005). \n 20 R. Khare, What’s in a Name? Trust (1999). \n 21 J. M. Seigneur, Trust, Security and Privacy in Global Computing \n(2005). \n 22 Trusted Computing Group, https://www.trustedcomputinggroup.org . \n 23 M. Blaze, J. Feigenbaum, and J. Lacy, “ Decentralized trust \nmanagement, ” In The 17th IEEE Symposium on Security and Privacy \n(1996). \n 24 S. Terzis, W. Wagealla, C. English, A. McGettrick, and P. Nixon, \n The SECURE Collaboration Model (2004). \n" }, { "page_number": 740, "text": "Chapter | 41 Reputation Management\n707\nrepresenting the reputation of a specific entity or \nsimply presents the reputation information without \nranking. \n ● Reputation monitoring, analysis, and warnings. The \nservice monitors Web-based media (Web sites, blogs, \nsocial networks, digitalized archived of paper-based \npress and trademarks) to detect any information \nimpacting the entity reputation and warns the user in \ncase of important changes. \n ● Reputation influencing, promotion, and rewards. \nThe service takes actions to influence the perceived \nreputation of the entity. The service actively \npromotes the entity reputation, for example, by \npublishing Web pages carefully designed to reach \na high rank in major search engines or paid online \nadvertisements, such as, Google AdWords. Users \nreaching a higher reputation may gain other rewards \nthan promotion, such as discounts. Based on the \nmonitoring services analysis, the service may be \nable to list the most important reputation sources \nand allow the users to influence these sources. For \nexample, in a 2006 blog bribe case, it was reported \nthat free laptops preloaded with a new commercial \noperating system were shipped for free to the most \nimportant bloggers in the field of consumer-oriented \nsoftware, to improve the reputation of the new \noperating software. \n ● Interaction facilitation and follow-up. The service \nprovides an environment to facilitate the interaction \nand its outcome between the trustor and the trustee. 
For example, eBay provides an online auction system to sellers and buyers as well as monitors the follow-up of the commercial transaction between the buyer and the seller.

● Reputation certification and assurance. That type of service is closer to the notion of system trust than the human notion of reputation because it relies on external means to avoid ending up in a harmful situation. For example, an insurance is paid as part of a commercial transaction. These services might need the certification of the link between the entity and its real-world identity in case of prosecutions. Our assumption does not hold for the services that require that kind of link, but that category of services had to be covered because a few services we surveyed make use of them.

● Fraud protection, mediation, cleaning, and recovery. These promotion services aim at improving the ranking of reputation information provided by the user rather than external information provided by third parties. However, even if external information is hidden behind more controlled information, it can still be found. It is the reason that some services try to force the owners of the external sites hosting damaging reputation information to delete the damaging information. Depending on where the server is located, this goal is more or less difficult to achieve. It may be as simple as filling an online form on the site hosting the defaming information to contact the technical support employee who will check to see whether the information is really problematic. In the case of a reluctant administrator, lawyers or mediators specialized in online defamation laws have to be commissioned, which is more or less easy depending on the legislation in the country hosting the server. Generally, in countries with clear defamation laws, the administrators prefer deleting the information rather than going into a lengthy and costly legal process. Depending on the mediation and the degree of defamation, the host may have to add an apology in place of the defaming information, pay a fine, or more. Fraud protection is also needed against reputation calculation attacks. There are different types of attacks that can be carried out to flaw reputation calculation results. 29 The art of attack-resistant reputation computation is covered later in the chapter.

FIGURE 41.3 Identity management and reputation management layers (reputation services such as Google, eBay, Naymz, LinkedIn, and Venyo sit above the reputation layer; authentication schemes such as passwords, OpenID, smart cards, biometrics, and crypto-certificates sit below the identity layer).

FIGURE 41.4 Online reputation management services categories: reputation calculation; monitoring, analysis, and warnings; influencing, promotion, and rewards; interaction facilitation and follow-up; reputation certification and assurance; fraud protection, mediation, cleaning, and recovery.

3. STATE OF THE ART OF ATTACK-RESISTANT REPUTATION COMPUTATION

In most commercial reputation services surveyed in this chapter, the reputation calculation does not take into account the attack resistance of its algorithm. This is a pity because there are many different types of attacks that can be carried out, especially at the identity level.
In \naddition, most of these reputation algorithms correspond \nmore to a trust metric algorithm rather than reputation \nas we have defined it earlier in the chapter, because they \naggregate ratings submitted by recommenders or the \nrater itself rather than rely on evidence for which rec-\nommenders are unknown. Based on these ratings that \nwe can consider as either direct observations or recom-\nmendations, the services compute a reputation score \nthat we can consider a trust value, generally represented \non a scale from 0% to 100% or from 0 to 5 stars. The \nexact reputation computation algorithm is not publicly \ndisclosed by all services providers, and it is difficult to \nestimate the attack resistance of each of these algorithms \nwithout their full specification. However, it is clear that \nmany of these algorithms do not provide a high level of \nattack-resistance for the following reasons: \n ● Besides eBay, where each transaction corresponds \nto a very clear trust context with quite well-\nauthenticated users and a real transaction that is con-\nfirmed by real money transfers, most services occur \nin a decentralized environment and allow for the \nrating of unconfirmed transactions (without any real \nevidence that the transaction really happened, and \neven worse, by anonymous users). \n ● Still, eBay experiences difficulties with its reputa-\ntion calculation algorithm. In fact, eBay has recently \nchanged its reputation calculation algorithm: the \nsellers on eBay are no longer allowed to leave \nunfavorable or neutral messages about buyers, to \ndiminish the risk that buyers fear leaving negative \nfeedback due to retaliatory negative feedback from \nsellers. Finally, accounts on eBay that are protected \nby passwords may be usurped. According to Twigg \nand Dimmock, 30 a trust metric is γ -resistant if more \nthan γ nodes must be compromised for the attacker \nto successfully drive the trust value. For example, the \nRahman and Hailes ’ 31 trust metric is not γ -resistant \nfor γ \u0005 1 (a successful attack needs only one victim). \n In contrast to the centralized environment of eBay, \nin decentralized settings there are a number of specific \nattacks. First, real-world identities may form an alliance \nand use their recommendation to undermine the reputation \nof entities. On one hand, this may be seen as collusion. \nOn the other hand, one may argue that real-world iden-\ntities are free to vote as they wish. However, the impact \nis greater online. Even if more and more transactions \nand interactions are traced online, the majority of trans-\nactions and interactions that happen in the real world are \nnot reported online. Due to the limited number of traced \ntransactions and interactions, a few faked transactions and \ninteractions can have a high impact on the computed repu-\ntation, which is not fair. \n Second, we focus here on attacks based on vulner-\nabilities in the identity approach and subsequent use of \nthese vulnerabilities. The vulnerabilities may have dif-\nferent origins, for example, technical weaknesses in the \nauthentication mechanism. These attacks commonly \nrely on the possibility of identity multiplicity, meaning \nthat a real-world identity uses many digital pseudonyms. \nA very well-known identity multiplicity attack in the field \nof computational trust is Douceur’s Sybil attack. 
32 Douceur \nargues that in large-scale networks where a centralized \nidentity authority cannot be used to control the creation of \npseudonyms, a powerful real-world entity may create as \nmany digital pseudonyms as it likes and recommend one \n 30 A. Twigg, and N. Dimmock, “ Attack-resistance of computational \ntrust models ” (2003) Proceedings of the Twelfth International \nWorkshop on Enabling Technologies: Infrastructure for Collaborative \nEnterprises. \n 31 A. Rahman, and S. Hailes, Using Recommendations for Managing \nTrust in Distributed Systems (1997). \n 32 J. R. Douceur, “ The sybil attack ” , Proceedings of the 1st International \nWorkshop on Peer-to-Peer Systems (2002). \n 29 J. M. Seigneur, Trust, Security and Privacy in Global Computing \n(2005). \n" }, { "page_number": 742, "text": "Chapter | 41 Reputation Management\n709\nof these pseudonyms to fool the reputation calculation \nalgorithm. This is especially important in scenarios where \nthe possibility to use many pseudonyms is facilitated — for \nexample, in scenarios where pseudonym creation is pro-\nvided for better privacy protection. \n In his Ph. D. thesis, Levien 33 says that a trust met-\nric is attack resistant if the number of faked pseudonyms, \nowned by the same real-world identity and that can be \nintroduced, is bounded. Levien argues that to mitigate \nthe problem of Sybil-like attacks, it is required to com-\npute “ a trust value for all the nodes in the graph at once, \nrather than calculating independently the trust value inde-\npendently for each node. ” Another approach proposed \nto protect against the Sybil attack is the use of manda-\ntory “ entry fees ” 34 associated with the creation of each \npseudonym. This approach raises some issues about its \nfeasibility in a fully decentralized way and the choice of \nthe minimal fee that guarantees protection. Also, “ more \ngenerally, the optimal fee will often exclude some play-\ners yet still be insufficient to deter the wealthiest players \nfrom defecting. ” 35 \n An alternative to entry fees may be the use of once-\nin-a-lifetime (1L 36 ) pseudonyms, whereby an elected \nparty per “ arena ” of application is responsible to certify \nonly 1L to any real-world entity that possesses a key pair \nbound to this entity’s real-world identity. The technique \nof blind signature 37 is used to keep the link between the \nreal-world identity and its chosen pseudonym in the arena \nunknown to the elected party. However, there are still \ntwo unresolved questions about this approach: how the \nelected party is chosen and how much the users would \nagree to pay for this approach. More important, a Sybil \nattack is possible during the voting phase, so the concept \nof electing a trusted entity to stop Sybil attacks does not \nseem practical. However, relying on real money turns the \ntrust mechanism into a type of system trust where the \nuse of reputation becomes almost superfluous. In the real \nworld, tax authorities are likely to require traceability of \nmoney transfers, which would completely break privacy. \nThus, when using pseudonyms, another means must be \npresent to prevent users from taking advantage of the fact \nthat they can create as many pseudonyms as they want. \n Trust transfer 38 has been introduced to encourage \nself-recommendations without attacks based on the crea-\ntion and use of a large number of pseudonyms owned by \nthe same real-world identity. 
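A small numeric illustration of why Douceur's observation matters for reputation calculation: if every pseudonym gets an equal vote and pseudonyms are free, an attacker simply mints enough of them. The scoring function and the numbers below are invented for the example.

```python
def naive_reputation(ratings: dict) -> float:
    """Average all submitted ratings, one vote per pseudonym (0..5 scale)."""
    return sum(ratings.values()) / len(ratings)

# Honest history: a seller rated poorly by six real buyers.
ratings = {f"buyer{i}": 1 for i in range(6)}
print(round(naive_reputation(ratings), 2))        # 1.0

# Sybil attack: the seller mints 30 throwaway pseudonyms that rate it 5.
ratings.update({f"sock{i}": 5 for i in range(30)})
print(round(naive_reputation(ratings), 2))        # 4.33, the fakes drown out real buyers
```

Entry fees, once-in-a-lifetime pseudonyms, and the trust transfer approach described next are all attempts to make those extra votes either expensive or worthless.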
In a system where there are pseudonyms that can potentially belong to the same real-world entity, a transitive trust process is open to abuse. Even if there is a high recommendation discounting factor due to recommending trustworthiness, the real-world entity can diminish the impact of this discounting factor by sending a huge number of recommendations from his army of pseudonyms in a Sybil attack. When someone recommends another person, she has influence over the potential outcome of the interaction between this person and the trustor. The inclination of the trustor with regard to this influence “provides a goal-oriented sense of control to attain desirable outcomes.” 39 So the trustor should also be able to increase or decrease the influence of the recommenders according to his goals.

Moreover, according to Romano, trust is not multiple constructs that vary in meaning across contexts but a single construct that varies in level across contexts. The overall trustworthiness depends on the complete set of different domains of trustworthiness. This overall trustworthiness must be put in context: it is not sufficient to strictly limit the domain of trustworthiness to the current trust context and the trustee; if recommenders are involved, the decision and the outcome should impact their overall trustworthiness according to the influence they had. Kinateder et al. 40 also take the position that there is a dependence between different trust contexts. For example, a chef known to have both won cooking awards and murdered people may not be a trustworthy chef after all. Trust transfer introduces the possibility of a dependence between trustworthiness and recommending trustworthiness. Trust transfer relies on the following assumptions:

● The trust value is based on direct observations or recommendations of the count of event outcomes from recognized entities (for example, the outcome of an eBay auction transaction with a specific seller from a specific buyer recognized by their eBay account pseudos).

● A pseudonym can be neither compromised nor spoofed; an attacker can neither take control of a pseudonym nor send spoofed recommendations; however, everyone is free to introduce as many pseudonyms as they wish.

● All messages are assumed to be signed and timestamped.

Trust transfer implies that recommendations cause trust on the trustor (T) side to be transferred from the recommender (R) to the subject (S) of the recommendation. A second effect is that the trust on the recommender side for the subject is reduced by the amount of transferred trustworthiness. If it is a self-recommendation, that is, recommendations from pseudonyms belonging to the same real-world identity, then the second effect is moot, since it does not make sense for a real-world entity to reduce trust in his own pseudonyms. Even if there are different trust contexts (such as trustworthiness in delivering on time or recommending trustworthiness), each trust context has its impact on the single construct trust value: they cannot be taken separately for the calculation of the single construct trust value. A transfer of trust is carried out if the exchange of communications depicted is successful. A local entity's Recommender Search Policy (RSP) dictates which contacts can be used as potential recommenders. Its Recommendation Policy (RP) decides which of its contacts it is willing to recommend to other entities and how much trust it is willing to transfer to an entity. Trust transfer (in its simplest form) can be decomposed into five steps:

1. The subject requests an action, requiring a total amount of trustworthiness TA in the subject in order for the request to be accepted by the trustor; the actual value of TA is contingent upon the risk acceptable to the user, as well as dispositional trust and the context of the request, so the risk module of the trust engine plays a role in the calculation of TA.

2. The trustor queries its contacts, which pass the RSP, in order to find recommenders willing to transfer some of their positive event outcomes count to the subject. Recall that trustworthiness is based on the event outcomes count in trust transfer.

3. If the contact has directly interacted with the subject and the contact's RP allows it to permit the trustor to transfer an amount (A ≤ TA) of the recommender's trustworthiness to the subject, the contact agrees to recommend the subject. It queries the subject whether it agrees to lose A of trustworthiness on the recommender side.

4. The subject returns a signed statement, indicating whether it agrees or not.

5. The recommender sends back a signed recommendation to the trustor, indicating the trust value it is prepared to transfer to the subject. This message includes the signed agreement of the subject.

Both the RSP and RP can be as simple or complex as the application environment demands. The trust transfer process is illustrated in Figure 41.5, where the subject requests an action that requires 10 positive outcomes. We represent the trust value as a tree of (s,i,c)-triples, corresponding to a mathematical event structure 41 : an event outcome count is represented as an (s,i,c)-triple, where s is the number of events that support the outcome, i is the number of events that have no information or are inconclusive about the outcome, and c is the number of events that contradict the expected outcome. This format takes into account the element of uncertainty via i.

33 R. Levien, Attack Resistant Trust Metrics (2004).

34 E. Friedman, and P. Resnick, The Social Cost of Cheap Pseudonyms (2001): pp. 173–199.

35 E. Friedman, and P. Resnick, The Social Cost of Cheap Pseudonyms (2001): pp. 173–199.

36 E. Friedman, and P. Resnick, The Social Cost of Cheap Pseudonyms (2001): pp. 173–199.

37 D. Chaum, “Achieving Electronic Privacy,” Scientific American (1992): pp. 96–100.

38 J. M. Seigneur, Trust, Security and Privacy in Global Computing (2005).

39 D. M. Romano, The Nature of Trust: Conceptual and Operational Clarification (2003).

40 M. Kinateder, and K. Rothermel, “Architecture and Algorithms for a Distributed Reputation System,” Proceedings of the First Conference on Trust Management (2003).

41 M. Nielsen, G. Plotkin, and G. Winskel, “Petri nets, event structures and domains,” Theoretical Computer Science (1981): pp. 85–108.

Note: In Figure 41.5a, the circles represent the different involved entities: S corresponds to the sender, which is the subject of the recommendation and the requester; T is the trustor, which is also the target; and R is the recommender. The directed black arrows indicate a message sent from one entity to another.
The arrows are chronologically ordered by their number.
 In Figure 41.5b, an entity E associated with a SECURE triple (s-i-c) is indicated by E(s-i-c).
 FIGURE 41.5 (a) Trust transfer process; (b) trust transfer process example.
 The RSP of the trustor is to query a contact to propose to transfer trust if the balance (s-i-c) is strictly greater than 2TA. This is because it is sensible to require that the recommender remains more trustworthy than the subject after the recommendation. The contact, having a balance passing the RSP (s-i-c = 32-0-2 = 30), is asked by the trustor whether she wants to recommend 10 good outcomes. The contact's RP is to agree to the transfer if the subject has a trust value greater than TA. The balance of the subject on the recommender's side is greater than 10 (s-i-c = 22-2-2 = 18). The subject is asked by the recommender whether she agrees to 10 good outcomes being transferred. Trustor T reduces its trust in recommender R by 10 and increases its trust in subject S by 10. Finally, the recommender reduces her trust in the subject by 10.
 The trustor could make requests to a number of recommenders until the total amount of trust value is reached (the search requests to find the recommenders are not represented in the figures). For instance, in the previous example, two different recommenders could be contacted, with one recommending three good outcomes and the other one seven.
 A recommender chain in trust transfer is not explicitly known to the trustor. The trustor only needs to know his contacts who agree to transfer some of their trustworthiness. This is useful from a privacy point of view since the full chain of recommenders is not disclosed. This is in contrast to other recommender chains such as a public key web of trust. 42 Because we assume that the entities cannot be compromised, we leave the issue surrounding the independence of recommender chains, which would increase the attack resistance of the trust metric, for future work. The reason for searching more than one path is that it decreases the chance of a faulty path (whether due to malicious intermediaries or unreliable ones). If the full list of recommenders must be detailed to be able to check the independence of recommender chains, the privacy protection is lost. This can be an application-specific design decision.
 Thanks to trust transfer, although a real-world identity has many pseudonyms, the Sybil attack cannot happen, because the number of direct observations (and hence the total amount of trust) remains the same on the trustor side. One may argue that it is unfair for the recommender to lose the same amount of trustworthiness as specified in his or her recommendation, especially if the outcome is ultimately good. It is envisaged that a more complex sequence of messages could be put in place to revise the decrease of trustworthiness after a successful outcome. This has been left for future work because it can lead to vulnerabilities (for example, based on Sybil attacks with careful cost/benefit analysis).
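 To make this bookkeeping concrete, the following minimal Python sketch mirrors the worked example above. It is an illustration only, not code from the trust-transfer work cited in this chapter; the Outcomes class, the two store dictionaries, and the function names are invented for the example. It applies the RSP threshold of 2TA, the RP threshold of TA, and the three counter updates described in the text.

 from dataclasses import dataclass

 @dataclass
 class Outcomes:
     s: int = 0  # events supporting the expected outcome
     i: int = 0  # inconclusive / no-information events
     c: int = 0  # events contradicting the expected outcome

     @property
     def balance(self) -> int:
         # Balance used by the example policies in the text: s - i - c.
         return self.s - self.i - self.c

 def trust_transfer(trustor_view: dict, recommender_view: dict,
                    recommender: str, subject: str, ta: int) -> bool:
     """Apply one trust transfer of `ta` positive outcomes if both policies pass.

     trustor_view / recommender_view map entity names to Outcomes and stand in
     for each party's local evidence store (hypothetical structures).
     """
     r_at_trustor = trustor_view.setdefault(recommender, Outcomes())
     s_at_trustor = trustor_view.setdefault(subject, Outcomes())
     s_at_recommender = recommender_view.setdefault(subject, Outcomes())

     # RSP: only use a contact whose balance stays above the subject's afterwards.
     if r_at_trustor.balance <= 2 * ta:
         return False
     # RP: the contact only vouches for a subject it trusts by more than TA.
     if s_at_recommender.balance <= ta:
         return False

     # The trustor shifts ta supporting outcomes from the recommender to the
     # subject; the recommender also reduces its own trust in the subject by ta.
     r_at_trustor.s -= ta
     s_at_trustor.s += ta
     s_at_recommender.s -= ta
     return True

 # Values from the worked example: R is (32,0,2) on the trustor side, S is
 # (22,2,2) on the recommender side, and the request needs 10 positive outcomes.
 trustor = {"R": Outcomes(32, 0, 2)}
 recommender = {"S": Outcomes(22, 2, 2)}
 if trust_transfer(trustor, recommender, "R", "S", 10):
     print(trustor["R"], trustor["S"], recommender["S"])
     # -> Outcomes(s=22, i=0, c=2) Outcomes(s=10, i=0, c=0) Outcomes(s=12, i=2, c=2)

 Because a self-recommendation merely moves supporting outcomes between pseudonyms the attacker already controls, the total trust the trustor holds across those pseudonyms never grows, which is the anti-Sybil property described above.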
The \ncurrent trust transfer approach is still limited to scenar-\nios where there are many interactions between the rec-\nommenders and where the overall trustworthiness in the \nnetwork (that is, the global number of good outcomes) \nis large enough that there is no major impact to entities \nwhen they agree to transfer some of their trust (such as \nin the email application domain 43 ). Ultimately, without \nsacrificing the flexibility and privacy enhancing poten-\ntial of limitless pseudonym creation, Sybil attacks are \nguaranteed to be avoided. \n 4. OVERVIEW OF CURRENT ONLINE \nREPUTATION SERVICE \n As explained previously, most of the current online repu-\ntation services surveyed in this part of the chapter do not \nreally compute reputation as we defined it earlier in the \nchapter. Their reputation algorithms correspond more to \na trust metric because they aggregate direct observations \nand recommendations of different users rather than base \ntheir assessment on evidence from a group of an unknown \nnumber of unknown users. However, one may consider \nthat these services present reputations to their users if we \nassume that their users do not take the time to understand \nhow it was computed and who made the recommendations. \n The remainder of this part of the chapter starts by sur-\nveying the current online reputation services and finishes \nwith a recapitulating table. If not mentioned otherwise, the \nreputation services do not require the users to pay a fee. \n eBay \n Founded in 1995, eBay has been a very successful online \nauction marketplace where buyers can search for prod-\nucts offered by sellers and buy them either directly or \nafter an auction. After each transaction, the buyers can \nrate the transaction with the seller as positive, negative, or \nneutral. Since May 2008, sellers have only the choice to \nrate the buyer experience as positive, nothing else. Short \ncomments of a maximum of 80 characters can be left \nwith the rating. User reputation is based on the number \nof positive and negative ratings that are aggregated in \nthe Feedback Score as well as the comments. Buyers or \nsellers can affect each other’s Feedback Scores by only \none point per week. Each positive rating counts for 1 \n 42 P. R. Zimmermann, The Offi cial PGP User’s Guide (1995). \n 43 J. M. Seigneur, Trust, Security and Privacy in Global Computing \n(2005). \n" }, { "page_number": 745, "text": "PART | VII Advanced Security\n712\npoint and each negative counts for -1 point. The balance \nof points is calculated at the end of the week and the \nFeedback Score is increased by 1 if the balance is posi-\ntive or decreased by 1 if the balance is negative. Buyers \ncan also leave anonymous Detailed Seller Ratings. com-\nposed of various criteria such as “ Item as described, ” \n “ Communication, ” and “ Shipping Time, ” displayed as a \nnumber of stars from 0 to 5. Different image icons are \nalso displayed to quickly estimate the reputation of the \nuser — for example, a star whose color depends on the \nFeedback Score, as depicted in Figure 41.6 . After 90 \ndays, detailed item information is removed. \n From a privacy point of view, on one hand it is pos-\nsible to use a pseudonym; on the other hand, a pretty \nexhaustive list of what has been bought is available, \nwhich is quite a privacy concern. There are various auc-\ntion Insertion and Final Value fees depending on the item \ntype. eBay addresses the reputation service categories as \nfollows: \n ● Reputation calculation . 
As detailed, reputation is \ncomputed based on transactions that are quite well \ntracked, which is important to avoid faked evidence. \nHowever, eBay’s reputation calculation still has some \nproblems. For example, as explained, the algorithm \nhad to be changed recently; the value of the transac-\ntion is not taken into account at time of Feedback \nScore update (a good transaction of 10 Euros should \ncount less than a good transaction of 10 kEuros); it is \nlimited to the ecommerce application domain. \n ● Monitoring, analysis, and warnings. eBay does not \nmonitor the reputation of its users outside of its \nservice. \n ● Influencing, promotion, and rewards . eBay rewards \nits users through their public Feedback Scores \nand their associated icon images. However, eBay \ndoes not promote the user reputation outside its \nsystem and does not facilitate this promotion due \nto a strict access to its full evidence pool, although \nsome Feedback Score data can be accessed through \neBay software developer Application Programming \nInterface (API). \n ● Interaction facilitation and follow-up. eBay provides \na comprehensive Web-based site to facilitate online \nauctions between buyers and sellers, including a \ndedicated messaging service and advanced tools \nto manage the auction. The follow-up based on the \nFeedback Score is pretty detailed. \n ● Reputation certification and assurance. eBay \ndoes not certify a user reputation per se but, given \nits leading position, eBay Feedback Score can \nbe considered, to some extent, as some certified \nreputation evidence. \n ● Fraud protection, mediation, cleaning, and recovery. \neBay facilitates communication between the buyer \nand the seller as well as a dispute console with eBay \ncustomer support employees. A rating and comment \ncannot be deleted since the Mutual Feedback \nWithdrawal has been removed. In extreme cases, if \nthe buyer had paid through PayPal, which is now \n FIGURE 41.6 eBay’s visual reputation representation. \n" }, { "page_number": 746, "text": "Chapter | 41 Reputation Management\n713\npart of eBay, the item might be refunded after some \ntime if the item is covered and depending on the \nitem price. Finally, eBay works with a number of \nescrow services that act as third parties and that do \nnot deliver the product until the payment is made. \nAgain, if such third-party services are used, the \nuse of reputation is less useful because these third-\nparty services decrease a lot the risk of a negative \noutcome. eBay does not offer to clean a reputation \noutside its own Web site. \n Opinity \n Founded in 2004, Opinity 44 has been one of the first \ncommercial efforts to build a decentralized online repu-\ntation for users in all contexts beyond eBay’s limited \necommerce context (see Figure 41.7 ). However, at time \nof writing (March 2009), Opinity has been inactive for \nquite a while. After creating an account, the users had the \npossibility to specify their login and passwords of other \nWeb sites, especially eBay, to retrieve and consolidate all \nevidence in the user’s Opinity account. Of course, ask-\ning users to provide their passwords was risky and seems \nnot a good security practice. Another, safer option was \nfor the users to put hidden text in the HTML pages of \ntheir external services, such as eBay. Opinity was quite \nadvanced at the identity layer, since it supported OpenID \nand Microsoft Cardspace. 
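 The "hidden text" option mentioned above is essentially an ownership proof: the reputation service issues a token and later checks that the token appears on a page that only the legitimate account holder could edit. A minimal sketch of that idea follows; the token format and function names are hypothetical and this is not Opinity's actual implementation.

 import secrets
 import urllib.request

 def issue_verification_token(username: str) -> str:
     """Create a one-off token the user must paste into a page they control,
     for example an eBay 'About Me' page. The format is illustrative only."""
     return f"reputation-verify:{username}:{secrets.token_hex(8)}"

 def page_proves_ownership(profile_url: str, token: str) -> bool:
     """Fetch the claimed external profile page and check that the token is present."""
     with urllib.request.urlopen(profile_url, timeout=10) as response:
         html = response.read().decode("utf-8", errors="replace")
     return token in html

 Unlike collecting external passwords, this approach proves control of the external account without ever exposing its credentials to the reputation service.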
In addition, Opinity could \nretrieve professional or education background and ver-\nify it to some extent using public listings or for a fee. \nOpinity users could rate other users in different contexts, \nsuch as plumbing or humor. Opinity addressed the differ-\nent reputation service categories as follows: \n ● Reputation calculation. The reputation was calcu-\nlated based on all the evidence sources and could be \naccessed by other Opinity partner sites. The reputa-\ntion could be focused to a specific context called a \n reputation category . \n ● Monitoring, analysis, and warnings. Opinity did not \nreally cover this category of services; most evidence \nwas pointed out by the users as they added the \nexternal accounts that they owned. \n ● Influencing, promotion, and rewards. Opinity had a \nbase Opinity Reputation Score, and it was possible to \n FIGURE 41.7 Opinity OpenID support. \n 44 Opinity, www.opinity.com . \n" }, { "page_number": 747, "text": "PART | VII Advanced Security\n714\ninclude a Web badge representation of that reputation \non external Web sites. \n ● Interaction facilitation and follow-up. Opinity did \nnot really cover this category of services besides \nthe fact that users could mutually decide to disclose \nmore details of their profiles via the Exchange \nProfile feature. \n ● Reputation certification and assurance. Opinity \ncertified educational, personal, or professional \ninformation to some extent via public listings or for a \nfee to check the information provided by the users. \n ● Fraud protection, mediation, cleaning, and recovery. \nOne of Opinity’s relevant features in this category \nis its reputation algorithm. However, it is not known \nhow strongly this algorithm was resistant to attacks, \nfor example, against a user who creates many \nOpinity accounts and uses them to give high ratings \nto a main account. Another relevant feature was that \nusers could appeal bad reviews via a formal dispute \nprocess. Opinity did not offer to clean the reputation \noutside its own Web site. \n Rapleaf \n Founded in 2006, Rapleaf 45 builds reputations around \nemail addresses. Any Rapleaf user is able to rate any \nother email address, which may be open to defamation or \nother privacy issues because the users behind the email \naddresses may not have given their consent. If the email \naddress to be rated has never been rated before, Rapleaf \ninforms the potential rater that it has started crawling the \nWeb to search for information about that email address \nand that once the crawling is finished, it will invite the \nrater to add a rating. As depicted in Figure 41.8 , differ-\nent contexts are possible: Buyers, Sellers, Swappers, and \nFriends. Once a rating is entered, it cannot be removed. \nHowever, new comments are possible and users can rate \nan email address several times. \n Online social networks are also crawled, and any \nexternal profile linked to the search email address are \nadded to the Rapleaf profile. The email address owners \nmay also add the other email addresses that they own to \ntheir profile to provide a unified view of their reputation. 
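 As a purely illustrative sketch of what such a unified view could compute (Rapleaf's real algorithm is not public, and the rule of keeping only each rater's most recent rating across the whole profile is an assumption extrapolated from the per-address rule discussed in the reputation calculation description that follows), consider:

 from typing import Dict, List, Tuple

 # (rater, score, timestamp) ratings recorded per email address.
 Rating = Tuple[str, int, int]

 def unified_reputation(owned_addresses: List[str],
                        ratings_by_address: Dict[str, List[Rating]]) -> float:
     """Merge ratings left for all addresses a user owns into one average score.

     Each rater is counted once across the whole profile (their most recent
     rating wins). Purely illustrative; not Rapleaf's scoring rule.
     """
     latest: Dict[str, Tuple[int, int]] = {}  # rater -> (timestamp, score)
     for address in owned_addresses:
         for rater, score, timestamp in ratings_by_address.get(address, []):
             if rater not in latest or timestamp > latest[rater][0]:
                 latest[rater] = (timestamp, score)
     if not latest:
         return 0.0
     return sum(score for _, score in latest.values()) / len(latest)

 ratings = {
     "alice@example.org": [("bob@x.org", 5, 1), ("carol@y.org", 4, 2)],
     "alice.second@example.org": [("bob@x.org", 3, 3)],  # bob's newer rating wins
 }
 print(unified_reputation(["alice@example.org", "alice.second@example.org"], ratings))
 # -> (3 + 4) / 2 = 3.5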
\nRapleaf’s investors are also involved in two other related \nservices: Upscoop.com, which allows users to import their \nlist of social network friends after disclosing their online \nsocial networks passwords (a risky practice, as already \nmentioned) and see in which other social networks their \nfriends are subscribed (at time of writing Upscoop already \nhas information about over 400 million profiles), and \nTrustFuse.com, a business that retrieves profile information \nfor marketing businesses that submit their lists of email \naddresses to TrustFuse. Officially, Rapleaf will not sell its \nbase of email addresses. However, according to its August \n2007 policy, “ information captured via Rapleaf may be \nused to assist TrustFuse services. Additionally, informa-\ntion collected by TrustFuse during the course of its busi-\nness may also be displayed on Rapleaf for given profiles \nsearched by email address, ” which is quite worrisome from \na privacy point of view. Rapleaf has addressed the different \nreputation service categories as follows: \n ● Reputation calculation. The Rapleaf Score takes \ninto account ratings evidence in all contexts as well \n FIGURE 41.8 Rapleaf rating interface. \n 45 Rapleaf, www.rapleaf.com . \n" }, { "page_number": 748, "text": "Chapter | 41 Reputation Management\n715\nas how the users have rated others and their social \nnetwork connections. Unfortunately, the algorithm is \nnot public and thus its attack resistance is unknown. \nIn contrast to eBay, the commercial transactions \nreported in Rapleaf are not substantiated by other \nfacts than the rater rating information. Thus, the \nchance of faked transactions is higher. Apparently, \na user may rate an email address several times. \nHowever, a user rating counts only once in the over-\nall reputation of the target email address. \n ● Monitoring, analysis, and warnings. Rapleaf warns \nthe target email address when a new rating is added. \n ● Influencing, promotion, and rewards. The Rapleaf \nScore can be embedded in a Web badge and \ndisplayed on external Web pages. \n ● Interaction facilitation and follow-up. At least it is \npossible for the target email address to be warned of \na rating and to rate the rater back. \n ● Reputation certification and assurance. There is no \nreal feature in this category. \n ● Fraud protection, mediation, cleaning, and recovery. \nThere is a form that allows the owner of a particular \nemail address to remove that email address from \nRapleaf. For more important issues, such as \ndefamation, a support email address is provided. \nRapleaf does not offer to clean the reputation outside \nits own Web site. \n Venyo \n Founded in 2006, Venyo 46 provides a worldwide peo-\nple reputation index, called the Vindex, based on either \ndirect ratings through the user profile on Venyo Web \nsite or indirect ratings through contributions or profiles \non partner Web sites (see Figure 41.9 ). Venyo is privacy \nfriendly because it does not ask users for their external \npasswords and it does not crawl the Web to present a \nuser reputation without a user’s initial consent. Venyo \nhas addressed the different reputation service categories \nas follows: \n ● Reputation calculation. Venyo’s reputation algo-\nrithm is not public and therefore its attack resistance \nis unknown. At time of rating, the rater specifies a \nvalue between 1 and 5 as well as keywords corre-\nsponding to the tags contextualizing the rating. 
The \nrating is also contextualized according to where the \nrating has been done. For example, if the rating is \ndone from a GaultMillau restaurant blog article, the \ntag “ restaurant recommendation ” is automatically \nadded to the list of tags. \n ● Monitoring, analysis, and warnings. Venyo provides \na reputation history chart, as depicted, to help users \n FIGURE 41.9 Venyo reputation user interface. \n 46 Venyo, www.venyo.org . \n" }, { "page_number": 749, "text": "PART | VII Advanced Security\n716\nmonitor the evolution of their reputation on Venyo’s \nand partner’s Web sites. Venyo does not monitor \nexternal Web pages or information. \n ● Influencing, promotion, and rewards. The Vindex \nallows users to search for the most reputable users \nin different domains specified by tags and it can be \ntailored to specific locations. In addition, the Venyo \nWeb badge can be embedded in external Web sites. \nThere is also a Facebook plug-in to port Venyo \nreputation into a Facebook profile. \n ● Interaction facilitation and follow-up. The Vindex \nfacilitates finding the most reputable user for the \nrequest context. \n ● Reputation certification and assurance. There is no \nVenyo feature in this category yet. \n ● Fraud protection, mediation, cleaning, and recovery. \nAs mentioned, Venyo’s reputation algorithm attack \nresistance cannot be assessed because Venyo’s \nalgorithm is not public. The cleaning feature is less \nrelevant because the users do not know who has \nrated them. An account may be closed if the user \nrequests it. As noted, Venyo is more privacy friendly \nthan other services that request passwords or display \nreputation without their consent. \n TrustPlus \u0002 XING \u0002 ZoomInfo \u0002 SageFire \n Founded in 1999, ZoomInfo is more a people (and com-\npany) search directory than a reputation service. However, \nZoomInfo, with its 42 million-plus users, 3.8 million \ncompanies, and partnership with Xing.com (a business \nsocial network similar to LinkedIn.com), has recently \nformed an alliance with Trustplus, 47 an online reputa-\ntion service founded in 2006. The main initial feature of \nTrustPlus is a Web browser plug-in that allows users to \nsee the TrustPlus reputation of an online profile appear-\ning on Web pages on different sites, such as, craiglist.\norg. At the identity layer, TrustPlus asks users to type \ntheir external accounts passwords, for example, eBay’s \nor Facebook’s, to validate that they own these external \naccounts as well as to create their list of contacts. This list \nof contacts can be used to specify who among the con-\ntacts can see the detail of which transactions or ratings. \n As depicted in Figure 41.10 , the TrustPlus rating user \ninterface is pretty complex. There are different contexts: \na commercial transaction, a relationship, and an interac-\ntion, for example, a chat or a date. \n The TrustPlus score is also pretty complex, as depicted \nin Figure 41.11 . Thanks to its partnership with SageFire, \nwhich is a trusted eBay Certified Solution Provider that \nhas access to historical archives of eBay reputation data, \nTrustPlus is able to display and use eBay’s reputation evi-\ndence when users agree to link their TrustPlus accounts \nwith their eBay accounts. TrustPlus has addressed the dif-\nferent reputation service categories as follows: \n ● Reputation calculation. The TrustPlus reputation \nalgorithm combines the different sources of reputation \n FIGURE 41.10 TrustPlus Rating User Interface. 
\n 47 Trustplus, www.trustplus.com . \n" }, { "page_number": 750, "text": "Chapter | 41 Reputation Management\n717\nevidence reported to TrustPlus by its users and partner \nsites. However, the TrustPlus reputation algorithm \nis not public and thus again it is difficult to assess \nits attack resistance. At time of rating a commer-\ncial transaction, it is possible to specify the amount \ninvolved in the transaction, which is interesting from \na computational trust point of view. Unfortunately, \nthe risk that this transaction is faked is higher than in \neBay because there are no other real facts that \ncorroborate the information given by the rating user. \n ● Monitoring, analysis, and warnings. TrustPlus warns \nthe user when a new rating has been entered or a new \nrequest for rating has been added. However, there \nis no broader monitoring of the global reputation of \nthe user. \n ● Influencing, promotion, and rewards. TrustPlus \nprovides different tools to propagate the user \nreputation: a Web badge that may include the eBay \nreputation, visibility once the TrustPlus Web browser \nplug-in viewer has been installed, and link with the \nZoomInfo directory. \n ● Interaction facilitation and follow-up. TrustPlus \nprovides an internal messaging service that increases \nthe tracking quality of the interactions between the \nrated users and the raters. \n ● Reputation certification and assurance. No \nreputation certification is done by TrustPlus per \nse, but the certification is a bit more formal when \nthe users have chosen the option to link their eBay \nreputation evidence. \n ● Fraud protection, mediation, cleaning, and \nrecovery. As noted, the attack resistance of the \nTrustPlus reputation algorithm is unknown. In case \nof defamation or rating disputes, a Dispute Rating \nbutton is available in the Explore Reputation part of \nthe TrustPlus site. When a dispute is initiated, the \nusers have to provide substantiating evidence to the \nTrustPlus support employees via email. TrustPlus \ndoes not offer to clean the reputation outside its own \nWeb site. \n Naymz \u0002 Trufina \n Founded in 2006, Naymz 48 has formed an alliance with \nTrufina.com, which is in charge of certifying identity \nand background user information. Premium features cost \n $ 9.95 per month at time of writing. The users are asked \nfor their passwords on external sites such as LinkedIn to \ninvite their list of contacts on these external sites, which is \na bad security practice, as we’ve mentioned several times. \nUnfortunately, it seems that Naymz has been too aggres-\nsive concerning its emails policy and a number of users \nhave complained about receiving unsolicited emails from \nNaymz — for example, “ I have been spammed several times \nover the past several weeks by a service called Naymz. ” 49 \n FIGURE 41.11 The TrustPlus score. \n 48 Naymz, www.naymz.com . \n 49 www.igotspam.com/50226711/naymz_sending_spamas_you.php , \naccessed 16 July 2008. \n" }, { "page_number": 751, "text": "PART | VII Advanced Security\n718\nNaymz has addressed the different reputation service \ncategories as follows: \n ● Reputation calculation. Naymz RepScore combines \na surprising set of information — not only the \nratings given by other users but also points for the \nuser profile completeness and identity verifications \nfrom Trufina. Each rating is qualitative and \nfocused on the professional contexts of the target \nuser ( “ Would you like to work on a team with the \nuser? ” ), as depicted in Figure 41.12 . 
The answers \ncan be changed at any time. Only users who are \npart of the target user list of contacts are allowed \nto rate the user. There is no specific transaction-\nbased rating, for example, for an ecommerce \ntransaction. \n ● Monitoring, analysis, and warnings. Naymz has \nmany monitoring features for both inside and \noutside Naymz reputation evidence. There is a free \nlist of Web sources (Web sites, forum, blogs) that \nmention the user’s name. A premium monitoring \ntool allows the user to see on a worldwide map who \nhas accessed the user Naymz profile, including the \nvisitor’s IP address. It is possible to subscribe to \nother profiles, recent Web activities if they are part of \nthe user’s confirmed contacts. \n ● Influencing, promotion, and rewards. In addition \nto the RepScore and Web badges, the users can get \nranking in Web search engines for a fee or for free if \nthey maintain a RepScore higher than 9. The users \ncan also get other features for free if they maintain \na certain level of RepScore — for example, free \ndetailed monitoring above 10. For a fee of $ 1995 at \nthis writing, they also propose to shoot and produce \nhigh-quality, professional videos, to improve the user \n “ personal brand. ” \n ● Interaction facilitation and follow-up. It is \npossible to search for users based on keywords but \nthe search options and index are not advanced at this \nwriting. \n ● Reputation certification and assurance. As said \nabove, Trufina is in charge of certifying identity and \nbackground user information. \n ● Fraud protection, mediation, cleaning, and \nrecovery. There are links to report offensive or \ndefaming information to the support employees. \nThe attack resistance of the RepScore cannot be \nassessed because the algorithm detail is not public. \nThey have also launched a new service called \nNaymz Reputation Repair whereby a user can \nindicate the location of external Web pages that \ncontain embarrassing information as well as some \ninformation regarding the issues, and after analysis \nNaymz may offer to take action to remove the \nembarrassing information for a fee. \n FIGURE 41.12 Reputation calculation on Naymz. \n" }, { "page_number": 752, "text": "Chapter | 41 Reputation Management\n719\n The GORB \n Founded in 2006, The GORB 50 allows anybody to \nrate any email address anonymously. Users can create \nan account to be allowed to display their GORB score \nwith a Web badge and be notified of their and others ’ \nreputation evolution. The GORB is very strict regarding \ntheir rule of anonymity: Users are not allowed to know \nwho has rated them (see Figure 41.13 ). The GORB has \naddressed the different reputation service categories as \nfollows: \n ● Reputation calculation. The GORB argues that, \nalthough they only use anonymous ratings, their \nreputation algorithm is attack resistant, but it \nis impossible to assess because it is not public. \nApparently, they allow a user to rate an email \naddress several times, but they warn that multiple \nratings may decrease the GORB score of the rater. \nThe rating has two contexts on a 0 to 10 scale, as \ndepicted in personal and professional. Keywords \ncalled tags can be added to the rating as well as a \ntextual comment. \n ● Monitoring, analysis, and warnings. The users can \nbe notified by email when their reputation evolves or \nwhen the reputation of user-defined email addresses \nevolve. However, it does not monitor evidence \noutside The GORB. \n ● Influencing, promotion, and rewards. 
A Web browser \nplug-in can be installed to visualize the reputation of \nemail addresses appearing on Web pages in the Web \nbrowser. There is a ranking of the users based on the \nGORB score. \n ● Interaction facilitation and follow-up. There is no \nfollow-up because the ratings are anonymous. \n ● Reputation certification and assurance. There is no \nfeature in this category. \n ● Fraud protection, mediation, cleaning, and recovery. \nThe GORB does not allow users to remove their \nemail address from their list of emails. The GORB \nasks for user passwords to import email addresses \nfrom the list of contacts of other Web sites. Thus, \none may argue that The GORB, even if the ratings \nare anonymous, is not very privacy friendly. \n FIGURE 41.13 The GORB rating user interface. \n 50 The GORB, www.thegorb.com . \n" }, { "page_number": 753, "text": "PART | VII Advanced Security\n720\n ReputationDefender \n Founded in 2006, ReputationDefender 51 has the follow-\ning products at this writing: \n ● MyReputation and MyPrivacy, which crawl the Web \nto find reputation information about users for $ 9.95 \na month and allow them to ask for the deletion of \nembarrassing information for $ 39.95 per item. \n ● MyChild, which does the same but for $ 9.95 a month \nand per child. \n ● MyEdge, starting from $ 99 to $ 499, allows the \nuser with the help of automated and professional \ncopywriters to improve her online presence — for \nexample, in search engines such as Google and with \nthird-person biographies written by professional \ncopywriters. \n ReputationDefender has addressed the different repu-\ntation service categories as follows: \n ● Reputation calculation. There is no feature in this \ncategory. \n ● Monitoring, analysis, and warnings. As noted, the \nwhole Web is crawled and synthetic online reports \nare provided. \n ● Influencing, promotion, and rewards. The user \nreputation may be improved based on expert advice \nand better positioned in Web search engines. \n ● Interaction facilitation and follow-up. There is no \nfeature in this category. \n ● Reputation certification and assurance. There is no \nfeature in this category. \n ● Fraud protection, mediation, cleaning, and \nrecovery. There is no reputation algorithm. Cleaning \nmay involve automated software or real people \nspecialized in legal reputation issues. \n Summarizing Table \n Table 41.1 summarizes how well each current online \nreputation service surveyed in this chapter addresses \neach reputation service category. \n 5. CONCLUSION \n Online reputation management is an emerging comple-\nmentary field of computer security whose traditional \nsecurity mechanisms are challenged by the openness of \nthe World Wide Web, where there is no a priori informa-\ntion of who is allowed to do what. Technical issues remain \nto be overcome: the attack resistance of the reputation \nalgorithm is not mature yet; it is difficult to represent in \na consistent way reputation built from different contexts \n(ecommerce, friendship). 
Sustainable business models \nare still to be found: Opinity seems to have gone out of \nbusiness; Rapleaf had to move from a reputation service \nto a privacy-risky business of email address profiling; \n TABLE 41.1 Table Summarization \n \n eBay \n Opinity \n Rapleaf \n TrustPlus \n Venyo \n The GORB \n Naymz \n Reputation \nDefender \n Founding Year \n 1995 \n 2004 \n 2006 \n 2006 \n 2006 \n 2006 \n 2006 \n 2006 \n Fee \n % \n \n N \n P \n N \n N \n P \n Y \n Reputation Calculation \n ** \n * \n * \n * \n * \n * \n * \n \n Monitoring, Analysis and \nWarnings \n \n \n * \n * \n * \n * \n ** \n ** \n Influencing, Promotion and \nRewards \n ** \n * \n * \n ** \n ** \n ** \n *** \n ** \n Interaction Facilitation and \nFollow-up \n *** \n * \n * \n * \n ** \n \n \n \n Reputation Certification and \nAssurance \n * \n ** \n \n \n \n \n * \n \n Fraud Protection, Mediation, \nCleaning and Recovery \n ** \n * \n * \n * \n ** \n \n ** \n *** \n %: transaction percentage fee, P: premium services fee, N: No fee, Y: paid service \n 51 ReputationDefender, www.reputationdefender.com . \n" }, { "page_number": 754, "text": "Chapter | 41 Reputation Management\n721\nTrustPlus had to form an alliance with ZoomInfo, XING, \nand SageFire; Naymz decreased its own reputation by \nspamming its base of users in hope of increasing its traffic. \n It seems that both current technical and commercial \nissues may be improved by a standardization effort of \nonline reputation management. The attack resistance of \nreputation algorithms cannot be certified to the degree it \ndeserves if the reputation algorithms remain private. It \nhas been proven in other security domains that security \nthrough obscurity gives lower results — for example, con-\ncerning cryptographic algorithms that are now open to \nreview by the whole security research community. Open \nreputation algorithms will also improve the credibility \nof the reputation results because it will be possible to \nclearly explain to users how the reputation has been cal-\nculated. Standardizing the representation of reputation \nwill also diminish confusion in the eyes of the users. The \nclearer understanding of which reputation evidence is \ntaken into account in reputation calculation will improve \nthe situation regarding privacy and will open the door to \nstronger regulation on the way in which reputation infor-\nmation flows. \n" }, { "page_number": 755, "text": "This page intentionally left blank\n" }, { "page_number": 756, "text": "723\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Content Filtering \n Peter Nicoletti \n Secure Information Systems \n Chapter 42 \n Content filtering is a powerful tool that, properly \ndeployed, can offer parents, companies, and local, state, \nand federal governments protection from Internet-based \ncontent. It is disparaged as Orwellian and simultaneously \nembraced as a short-term positive ROI project, depending \non who you are and how it affects your online behavior. In \nthis chapter we examine the many benefits and justifica-\ntions of Web-based content filtering, such as legal liability \nrisk reduction, productivity gains, and bandwidth usage. \nWe’ll explore the downside and unintended consequences \nand risks that improperly deployed or misconfigured sys-\ntems create. We’ll also look into methods to subvert and \nbypass these systems and the reasons behind them. 
\n It is important for people who are considering content \nfiltering to be aware of all the legal implications, and we’ll \nalso review these. Content filtering is straightforward to \ndeploy, and license costs are so reasonable they can offer \nextremely fast return on investment while providing a very \neffective risk reduction strategy. We’ll make sure that your \nproject turns out successfully, since we’ll look at all the \nangles: Executives will be happy with the project results \nand employees won’t key your car in the parking lot! \n 1. THE PROBLEM WITH CONTENT \nFILTERING \n Access to information and supplications on the World \nWide Web (WWW) has become a critical and integral part \nof modern personal and business life as well as a national \npastime for billions of users. With millions of applications \nand worldwide access to information, email, video, music, \ninstant messaging (IM), Voice over IP (VoIP), and more, \nthe way we conduct business, communicate, shop, and \nentertain is evolving rapidly. With all the advancements of \nincreased communications and productivity comes a bad \nside with a tangle of security risks. Productivity, acces-\nsibility, and conveniences of the World Wide Web have \nalso brought us spam, viruses, worms, Trojans, keystroke \nloggers, relentless joke forwarding, identity theft, Internet \nscams, and the dreaded chain letter. These are among the \nincreasing risks associated with simple WWW access that \nwas once taken for granted only a few years ago. \n Surfing the Web is the most common of all Internet \napplications. It offers global access to all types of infor-\nmation, banking, buying and selling goods and services \nfrom the comfort of our computer, and online bill paying, \nand it’s so entertaining in many different ways. For busi-\nness, access to Web apps is mission critical, and other \nsites are productivity tools. Accessing the Internet from \nthe office is constantly presenting new challenges to man-\nage. Some of the negative impacts of doing the wrong \nthing and going to the “ wrong places ” include: \n ● Lost productivity due to nonbusiness-related Internet \nuse \n ● Higher costs as additional bandwidth is purchased \nto support legitimate and illegitimate business \napplications \n ● Network congestion; valuable bandwidth is being \nused for nonbusiness purposes, and legitimate \nbusiness applications suffer \n ● Loss or exposure of confidential information through \nchat sites, nonapproved email systems, IM, peer-to-\npeer file sharing, etc. \n ● Infection and destruction of corporate information \nand computing resources due to increased exposure to \nWeb-based threats (viruses, worms, Trojans, spyware, \netc.) as employees surf nonbusiness-related Web sites. \n ● Legal liability when employees access/download \ninappropriate and offensive material (pornography, \nracism, etc.) \n ● Copyright infringement caused by employees \ndownloading and/or distributing copyrighted material \nsuch as music, movies, etc. \n ● Negative publicity due to exposure of critical \ncompany information, legal action, and the like \n" }, { "page_number": 757, "text": "PART | VII Advanced Security\n724\n Casual nonbusiness-related Web surfing has caused \nmany businesses countless hours of legal litigation as hos-\ntile work environments have been created by employees \nwho view and download and show off or share offensive \ncontent. 
RIAA takedown notices and copyright infringe-\nment threats, fines, and lawsuits are increasing as employ-\nees use file-sharing programs to download, serve up, and \nshare their favorite music and movie files. Government reg-\nulations and legal requirements are getting teeth with fines \nand consequences as company executives are accountable \nfor their employees ’ actions. Corporate executives and IT \nprofessionals alike are now becoming more concerned \nabout what their employees are viewing and downloading \nfrom the Internet. \n Government regulations on Internet access and infor-\nmation security are being enforced by many countries \nand individual states: the Children’s Internet Protection \nAct (CIPA) for schools and libraries, Japan’s Internet \nAssociation’s SafetyOnline2 to promote Internet filter-\ning, HIPAA, Sarbanes-Oxley, Gramm-Leach-Bliley, and \n “ duty of care ” legal obligation legislation. Many inde-\npendent reports and government agencies (such as the \nFBI) are now reporting that employees are the single \nhighest risk and are the most common cause of network \nabuse, data loss, and legal action. Because employers \ncan be ultimately held responsible for their employees ’ \nactions, many businesses are now working aggressively \nwith their Human Resources departments to define \nacceptable Internet usage. Reports of Internet abuse \ninclude: \n ● Global consulting firm IDC reports that 30 – 40% \nof Internet access is being used for nonbusiness \npurposes. \n ● The American Management Association reports that \n27% of Fortune 500 companies have been involved \nin sexual harassment lawsuits over their employees ’ \ninappropriate use of email and Internet. \n ● The Center of Internet Studies have reported that \nmore than 60% of companies have disciplined \nemployees over Internet and email use, with more \nthan 30% terminating employees. \n Numerous stories of employee dismissal, sexual har-\nassment, and discipline with regard to Internet use can \nbe found on the Internet 1 : \n ● The Recording Industry Association of America \n(RIAA) and the Motion Picture Association of \nAmerica (MPAA) have relentlessly pursued legal \naction against schools, corporations, and individu-\nals (even printers!) over the illegal downloading \nof music and movies from the Internet. The RIAA \nrecently won a case against an Arizona company for \n$1 million. \n ● An oil and gas company recently paid $2.2 million \nto settle a lawsuit for tolerating a hostile work envi-\nronment created by the downloading and sharing of \nInternet pornography. \n To address these issues, companies are creating or \nupdating their computer usage polices to include Web, \nemail, surfing, and general Internet usage security, to \nprotect themselves from legal action and financial losses. \nTo become compliant with many of these new policies, \ncorporations need to start monitoring and controlling \nWeb access. Web content filtering is gaining popular-\nity, and the industry providing the tools and technology \nin this area has been growing rapidly; IDC predicted a \ncompound annual growth rate of 27% year over year \nuntil 2007. \n 2. 
USER CATEGORIES, MOTIVATIONS, \nAND JUSTIFICATIONS \n Let’s look at the entities that should consider content \nfiltering: \n ● Schools \n ● Commercial businesses \n ● Libraries \n ● Local, state, and federal government \n ● Parents \n Each of these faces different risks, but they are all \ntrying to solve one or more of the following challenges: \n ● Maintain compliance \n ● Protect company and client sensitive data \n ● Maximize employee productivity \n ● Avoid costly legal liabilities due to sexual \nharassment and hostile work environment lawsuits \n ● Preserve and claw back network bandwidth \n ● Enforce company acceptable use policies (also \nknown as Internet access policies) \n ● Control access to customer records and private \ndata \n ● Monitor communications going into and out of a \ncompany \n ● Protect children \n 1 Stories of workers being dismissed for porn surfi ng: “ IT manager \nfi red for lunchtime Web surfi ng, ” www.theregister.co.uk/1999/06/16/it_\nmanager_fi red_for_lunchtime ; “ Xerox fi res 40 in porn site clampdown, ” \n www.theregister.co.uk/2000/07/15/xerox_fi res_40_in_porn ; “ 41 District \nworkers have been fi red/suspended for visiting pornographic Web sites, ” \n www.wtopnews.com/ ?sid \u0003 1331641 & nid \u0003 25. \n" }, { "page_number": 758, "text": "Chapter | 42 Content Filtering\n725\n Let’s look at the specific risks driving these entities \nto review their motivation to filter content. \n Schools \n When “ Little Johnny ” shows the latest starlet’s sex tape to \nhis buddies in high school media class, he is not following \na school board-approved agenda. His parents could argue \nthat the school computers “ allowed ” the access simply by \nbeing a click away from pornographic content. All school \ncomputer use policies have been updated to forbid visits \nto offensive sites, most with serious consequences. About \n95% of schools have deployed a content-filtering solution. \n Commercial Business \n Do you have abundant bandwidth? Are your employees \nproductive? Then your content-filtering project may be as \nsimple as blocking porn and logging all other behavior. \nContent filtering projects have to be well thought out and \ndeployed as an enforcement of an appropriate IT security \npolicy. Put in content filtering over the weekend and sur-\nprise your employees with a “ This page is blocked ” mes-\nsage when they check their fantasy sports team results \nand you’ll have a mutiny around the water cooler! \n Financial Organizations \n Financial organizations traffic in sensitive data and have \nspecific risks. Protecting credit card numbers, Social \nSecurity numbers, and other critical PII means they have \nno room for error. Employees can inadvertently or inten-\ntionally disclose sensitive information through Web-based \nemail. This increases risk and puts organizations in finan-\ncial and legal jeopardy. \n Healthcare Organizations \n In today’s fast-paced technological world, healthcare \norganizations must keep their costs and risks down. When \nemployees inadvertently disclose sensitive information \nthrough bad online behavior, they put health organizations \nin financial and legal jeopardy. With the Health Insurance \nPortability and Accountability Act (HIPAA) pushing to \nprotect sensitive client data, healthcare organizations can-\nnot afford to have less than the best in Internet security \nsolutions. \n Internet Service Providers \n ISPs have three motivations in regard to content filtering. \nThe first is bandwidth. 
If the ISP can throttle, constrain, \nor limit the streaming Web content, the ISP may be able \nto lower costs. Second, ISPs have to comply and produce \nlogs in response to lawful requests. Finally, ISPs must \nassist the government with the USA PATRIOT Act and \nCIPA and must have technology in place to monitor. \n U.S. Government \n In the United States, the military is attempting to reduce \nthe number of Internet connection points from thousands \nto hundreds, to reduce the risks we’ve described. National \nsecrets, military plans, and information about soldiers and \ncitizens cannot be exposed to our enemies by inadvert-\nent surfing to the wrong places and downloading Trojans, \nkeystroke loggers, or zombie code. \n Other Governments \n Other countries only allow their government employees \naccess to whitelisted sites, to control access to news and \nother sites that censors determine to be inappropriate. In \nRussia the ruling party routinely shuts off access to politi-\ncal rivals ’ Web sites. China has deployed and maintains a \nnew and virtual “ Great Firewall of China ” (also know as \nthe Golden Shield Project). Ministry of Public Security \nAuthorities determine sites that they believe represent an \nideological threat to the Chinese Communist Party and \nthen prevent their citizenry surfing, blogging, and email-\ning to blocked sites with sophisticated content-filtering \nmethods, including IP address blocking and even DNS \ncache poisoning. Russia, Tibet, North Korea, Australia, \nChina, Iran, Cuba, Thailand, Saudi Arabia, and many other \nrepressive governments use similar technology. The ironic \npart of these massive content-filtering efforts is that many \nU.S.-based technology companies have been involved in \ntheir construction, sometimes as a contingency for doing \nbusiness in China or other censoring countries. 2 \n Libraries \n The use of Internet filters or content-control software var-\nies widely in public libraries in the United States, since \nInternet use policies are established by local library boards. \nMany libraries adopted Internet filters after Congress \nconditioned the receipt of universal service discounts on \nthe use of Internet filters through the Children’s Internet \n 2 U.S. companies ’ involvement in the “ Golden Shield ” Chinese \ncontent-fi ltering project, www.forbes.com/forbes/2006/0227/090.html , \n www.businessweek.com/magazine/content/06_08/b3972061.htm . \n" }, { "page_number": 759, "text": "PART | VII Advanced Security\n726\nProtection Act (CIPA). Other libraries do not install con-\ntent control software, believing that acceptable use poli-\ncies and educational efforts address the issue of children \naccessing age-inappropriate content while preserving adult \nusers ’ right to freely access information. Some libraries \nuse Internet filters on computers used by children only. \nSome libraries that employ content-control software allow \nthe software to be deactivated on a case-by-case basis on \napplication to a librarian; libraries that are subject to CIPA \nare required to have a policy that allows adults to request \nthat the filter be disabled without having to explain the rea-\nson for their request. Libraries have other legal challenges \nas well. In 1998, a U.S. federal district court in Virginia \nruled that the imposition of mandatory filtering in a pub-\nlic library violates the First Amendment of the U.S. Bill \nof Rights. 3 About 50% of libraries have deployed content-\nfiltering solutions. 
\n Parents \n Responsible parents don’t leave their adult magazines on \nthe coffee table, their XXX DVDs in the player ready to \nplay, or their porn channels available and not PIN pro-\ntected on cable or satellite. If you allow minor children \nunfettered access to the Internet, you will have an incident. \nGood practices include teaching your kids about what is \non the Internet, what to do if they inadvertently click on a \nlink or a pop up, make sure they have their own personal \nlogin (not as an admin!) and profile on shared computers, \nand review browser logs ( “ Trust but Verify! ” ). Younger \nchildren can be allowed to surf with their parents ’ assist-\nance and direct supervision in a common area. PC-based \nsoftware apps will give parents control, blocking, logging, \nand piece of mind. About 30% of parents with teenage \nchildren have deployed a content-filtering solution. \n 3. CONTENT BLOCKING METHODS \n There are many ways to block content. Most commercial \nproducts use a number of these techniques together to \noptimize their capability. \n Banned Word Lists \n This method allows the creation of a blacklist dictionary \nthat contains words or phrases. URLs and Web content \nare compared against the blacklist to block unauthorized \nWeb sites. In the beginning this technology was largely \na manual process, with vendors providing blacklists as \nstarting points, requiring customers to manually update/\ntune the lists by adding or excluding keywords. This \nmethod has improved over the years and vendor lists \nhave grown to include millions of keywords and phrases. \nUpdates are usually performed manually, and filter-\ning accuracy may be impacted with specific categories. \nFor example, medical research sites are often blocked \nbecause they are mistaken for offensive material. 4 \n URL Block \n The URL block method is a blacklist containing known \nbad or unauthorized Web site URLs. Entire URLs can \nbe added to the blacklist and exemptions can usually be \nmade to allow portions of the Web site through. Many \nvendors provide URL blacklists with their products to \nsimplify the technology, giving the user the ability to \nadd new sites and perform URL pattern matching. With \nboth banned word lists and URL block lists, a customer \nmust perform manual updates of the vendors ’ blacklists. \nDepending on the frequency of the updates, the black-\nlists may fall out of compliance with the corporate policy \nbetween updates. \n Category Block \n Category blocking is the latest Web content-filtering tech-\nnology that greatly simplifies the management process of \nWeb inspection and content filtering. Category blocking \nutilizes external services that help keep suspect Web sites \nup to date, relying on Web category servers that contain the \nlatest URL ratings to perform Web filtering. With category \nblocking devices, there are no manual lists to install or \nmaintain. Web traffic is inspected against rating databases \ninstalled on the category servers, and the results (good or \nbad sites) are cached to increase performance. The advan-\ntage is up-to-date Web URL and category information at \nall times, eliminating the need to manually manage and \nupdate local blacklists. This method ensures accuracy and \nreal-time compliance with the company’s Internet use pol-\nicy in terms of: \n ● Wildcard pattern matching \n ● Multilanguage pattern matching \n 3 “ Library content fi ltering is unconstitutional, ” Mainstream Loudon \nv. 
Board of Trustees of the Loudon County Library, 24 F. Supp. 2d 552 \n(E.D. Va. 1998). \n 4 Benjamin Edelman, “Empirical analysis of Google SafeSearch” \nhttp://cyber.law.harvard.edu/archived_content/people/edelman/google-\nsafesearch/. \n" }, { "page_number": 760, "text": "Chapter | 42 Content Filtering\n727\n ● URL block lists \n ● Web pattern lists \n ● URL exemption lists \n ● Web application block (Java Applet, Cookie, \nActiveX) \n ● Category block \n Bayesian Filters \n Particular words and phrases have probabilities of occur-\nring on Web sites. For example, most Web surfers users \nwill frequently encounter the “ word ” XXX on a porn Web \nsite but seldom see it on other Web pages. The filter doesn’t \nknow these probabilities in advance and must first be \ntrained so it can build them up. To train the filter, the user \nor an external “ grader ” must manually indicate whether a \nnew Web site is a XXX porn site or not. For all words on \neach page, the filter will adjust the probabilities that each \nword will appear in porn Web pages versus legitimate \nWeb sites in its database. For instance, Bayesian content \nfilters will typically have learned a very high probability \nas porn content for the words big breasts and Paris Hilton \nsex tape but a very low probability for words seen only \non legitimate Web sites, such as the names of companies \nand commercial products. \n Safe Search Integration to Search Engines \nwith Content Labeling \n Content labeling is considered another form of content-\ncontrol software. The Internet Content Rating Association \n(ICRA), now part of the Family Online Safety Institute, \ndeveloped a content rating system to self-regulate online \ncontent providers. Using an online questionnaire, a Web-\nmaster describes the nature of his Web content. A small \nfile is generated that contains a condensed, computer-\nreadable digest of this description that can then be used by \ncontent-filtering software to block or allow that site. \n ICRA labels are deployed in a couple of formats. These \ninclude the World Wide Web Consortium’s Resource \nDescription Framework (RDF) as well as Platform \nfor Internet Content Selection (PICS) labels used by \nMicrosoft’s Internet Explorer Content Advisor. \n ICRA labels are an example of self-policing and \nself-labeling. Similarly, in 2006 the Association of Sites \nAdvocating Child Protection (ASACP) initiated the \nRestricted to Adults (RTA) self-labeling initiative. The RTA \nlabel, unlike ICRA labels, does not require a Webmaster \nto fill out a questionnaire or sign up to use. Like ICRA, \nthe RTA label is free. Both labels are recognized by a wide \nvariety of content-control software. \n Content-Based Image Filtering (CBIF) \n The latest in content-filtering technology is CBIF. All the \ntext-based content-filtering methods use knowledge of a \nsite and text matching to impose blocks. This technique \nmakes it impossible to filter visual and audio media. \nContent-based image filtering may resolve this issue, as \nshown in Figure 42.1 . The method consists of examining \nthe image itself for flesh tone patterns, detecting objec-\ntionable material and then blocking the offending site. \n Step 1: Skin Tone Filter \n First the images are filtered for skin tones. The color \nof human skin is created by a combination of blood \n(red) and melanin (yellow, brown). These combinations \nrestrict the range of hues that skin can possess (except \nfor people from the planet Rigel 7). 
In addition, skin has \nvery little texture. These facts allow us to ignore regions \nwith high-amplitude variations and design a skin tone \nfilter to separate images before they are analyzed. \n Step 2: Analyze \n Since we have already filtered the images for skin tones, \nany images that have very little skin tones will be accepted \ninto the repository. The remaining images are then automat-\nically segmented and their visual signatures are computed. \n Step 3: Compare \n The visual signatures of the potentially objectionable \nimages are then compared to a predetermined reference \ndata set. If the new image matches any of the images in \nthe reference set with over 70% similarity, the image is \nrejected. If the similarity falls in the range of 40 – 70%, \nthat image is set aside for manual intervention. An opera-\ntor can look at these images and decide to accept or reject \nthem. Images that fall below 40% are accepted and added \nto the repository. These threshold values are arbitrary and \nare completely adjustable. 5 \n Tip : Determine if the image contains large areas of \nskin color pixels. \n Tip : Automatically segment and compute a visual \nsignature for the image. \n 5 Envision Search Technologies (permission to use pending), www.\nevisionglobal.com/index.html . \n" }, { "page_number": 761, "text": "PART | VII Advanced Security\n728\n 4. TECHNOLOGY AND TECHNIQUES \nFOR CONTENT-FILTERING CONTROL \n There are several different technologies that help facili-\ntate Web monitoring, logging, and filtering of HTTP, FTP \nsites, and other Web-related traffic. The methods avail-\nable for monitoring and controlling Internet access range \nfrom manual and educational methods to fully automated \nsystems designed to scan, inspect, rate, and control Web \nactivity. \n Many solutions are software based and run on Intel-\nbased servers that are attached to the network through a \n “ mirrored ” network port. Other solutions are dedicated \nappliances that are installed inline with the network, allow-\ning it to see all the Internet traffic and allowing it to take \nfast, responsive action against unauthorized and malicious \ncontent. Common Web access control mechanisms include: \n ● Establishing a well-written usage policy \n ● Communicating the usage policy to every person in \nthe organization \n ● Educating everyone on proper Internet, email, and \ncomputer conduct \n ● Installing monitoring tools that record and report on \nInternet usage \n ● Implementing policy-based tools that capture, rate, \nand block URLs \n ● Taking appropriate action when complaints are regis-\ntered, policies violated, etc. \n Now let’s look at each enforcement point and the \nmethods used for content filtering (see Figure 42.2 ). \n This may be one of the most cost-effective and expe-\nditious methods: Teach your employees about the risks \nand train them on WWW surfing policies. Then trust \nthem, or not. Then select and use one of the solutions \ndiscussed next, to augment this important first step. \n Internet Gateway-Based Products/Unified \nThreat Appliances \n The fastest-growing area during the past several years has \nbeen the unified threat appliance market, as shown in Figure \n42.3 . These hardware platforms combine multiple security \ntechnologies working in concert to minimize threats. \n Fortinet \n Fortinet’s unique approach to protecting networks \nagainst the latest vulnerabilities involves several key \nsecurity components. 
FIGURE 42.1 The process of content-based image filtering using eVe consists of three specific steps: skin tone filter, analyze, and compare.

Tip: Match the new image against a reference set of objectionable images and object regions.

By combining seven key security functions into one application-specific integrated circuit (ASIC)-accelerated security hardware and software platform, Fortinet has developed the world's first Dynamic Threat Prevention System. By sharing information between each component and relying on powerful firewall and IPS capabilities, threats are identified quickly and proactively blocked at the network level before they reach the endpoints to cause damage. Customers can create custom protection policies by turning on any of the seven security functions (Stateful Firewall, IPSec and SSL VPN, Antivirus, IDS & IPS, Web Content Filtering, Anti-Spam, and Bandwidth Shaping) in any combination and applying them to the interfaces on the FortiGate multifunction security platforms. Fortinet's Web Content Filtering technology allows customers to take a wide variety of actions to inspect, rate, and control Web traffic. Included with each FortiGate security platform is the ability to control Web traffic via:

● Banned word lists
● Wildcard pattern matching
● Multilanguage pattern matching
● URL block lists
● Web pattern lists
● URL exemption lists
● Web application block (Java Applet, Cookie, ActiveX)
● Category block (via FortiGuard Service)

Customers can mix and match any of the Web content-filtering methods to achieve the correct balance for enforcement, or select Fortinet's Automated Category Blocking option or FortiGuard Web Filtering Solution, by combining Fortinet's Web Content Filtering system with its Antivirus, IDS & IPS, and Anti-Spam functions. Increasingly, companies are looking to consolidate their security services into one multifunction appliance for licensing, management, and complexity reasons. 6

FIGURE 42.2 Deploy a policy for Internet use.

Websense and Blue Coat

Websense is probably the most widely deployed system in commercial enterprises. Known for great flexibility in deployment and perhaps the most granular category list, the product requires a third-party firewall to provide a complete solution. Blue Coat, on the other hand, is a well-regarded leader in the secure Web gateway space.

Surf Control

Surf Control is a company purchased by Websense in 2008 and currently has an unclear product direction. Always well regarded, it has a large installed base and market share. Others include Secure Computing, Aladdin Knowledge Systems, Finjan, Marshal, FaceTime Communications, Webroot Software, Clearswift, CP Secure, IronPort Systems, ISS/IBM Proventia, Trend Micro, McAfee, MessageLabs, Barracuda Networks, ContentKeeper Technologies, Computer Associates, Cymphonix, Pearl Software, and St. Bernard.
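To make the mechanics of these list-based methods concrete, the following minimal Python sketch combines a banned word list, a URL block list, wildcard pattern matching, and an exemption list of the kind enumerated above. The list contents, host names, and the is_blocked function are invented for illustration only; no vendor implements filtering this simply.

    import fnmatch
    from urllib.parse import urlparse

    BANNED_WORDS = {"xxx", "casino", "warez"}              # banned word list
    URL_BLACKLIST = {"badsite.example", "porn.example"}    # exact-host block list
    URL_PATTERNS = ["*.gambling.example", "ads.*"]         # wildcard pattern matching
    URL_EXEMPTIONS = {"medical-research.example"}          # exemption list

    def is_blocked(url, page_text=""):
        host = urlparse(url).hostname or ""
        if host in URL_EXEMPTIONS:
            return False                                   # exemptions override every block
        if host in URL_BLACKLIST:
            return True                                    # URL block list
        if any(fnmatch.fnmatch(host, pat) for pat in URL_PATTERNS):
            return True                                    # wildcard pattern matching
        # Banned word list: compare the URL and page content against the dictionary
        haystack = (url + " " + page_text).lower()
        return any(word in haystack for word in BANNED_WORDS)

    print(is_blocked("http://ads.tracking.example/banner"))                  # True (wildcard)
    print(is_blocked("http://medical-research.example/breast-cancer"))       # False (exempt)

Note that the exemption list is checked first; this is the same mechanism a real product would use to keep a medical research site reachable even though its pages contain words from the banned list.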
\n PC Based \n PC software such as Norton Internet Security includes \nparental controls. Operating Systems such as Mac OS \nX v10.4 offer parental controls for several applications. \nWWW Content Server\nInternet\nISP\nCarrier\nMobile User\nSmart Phone\nUnified Threat Appliance: Firewall, CF, AV, AS VPN\nContent Filtering Server\nProxy Gateway\nNetwork User\nMail Content Filtering\n FIGURE 42.3 The unified threat appliance market. \n 6 Fortinet Multi-Threat Security Solution (permission to use pending), \n www.fortinet.com/doc/whitepaper/Webfi lter_applicationNote.pdf . \n" }, { "page_number": 764, "text": "Chapter | 42 Content Filtering\n731\nMicrosoft’s Windows Vista operating system also includes \ncontent-control software. Other PC-based content-filtering \nsoftware products are CyberPatrol, Cybersitter, EnoLogic \nNetFilter, iProtectYou Pro Web Filter, Net Nanny, Norton \nInternet Security, Safe Eyes Platinum, SentryPC, Bess, \nCrayon Crawler, Cyber Snoop, Covenant Eyes, K9 Web \nProtection, Naomi, Scieno Sitter, Sentry Parental Controls, \nWebsense,Windows Live Family Safety, Windows Vista \nParental Control, WinGate, X3Watch, and PlanetView. On \nthe other hand, Mac software is available from Covenant \nEyes, DansGuardian, Intego, and Mac OS X Leopard \nParental Controls. \n Remote corporate PCs and now the ever-increasing \nsophistication and capabilities of smart phones make \nthem challenges for content-filtering deployments (see \n Figure 42.4 ). Usually a content-monitoring and con-\ntrol client is installed on a mobile PC or all surfing is \nonly allowed after establishing a VPN connection back \nto HQ, and then HTTP requests can be served up and \nmonitored. \n ISP-Based Solutions \n Many ISPs offer parental control browser-based options, \namong them Charter Communications, EarthLink, Yahoo!, \nand AOL (see Figure 42.5 ). Cleanfeed is offered by \nBritish Telecom in the U.K. and is an ISP administered \ncontent-filtering system that targets child sexual abuse \ncontent using offensive image lists from the Internet \nWatch Foundation. \n Internet Cloud-Based Solutions \n In 2008 the Google search engine adapted a software \nprogram to faster track child pornography accessible \nWWW Content Server\nInternet\nISP\nCarrier\nMobile User\nSmart Phone\nUnified Threat Appliance: Firewall, CF, AV, AS VPN\nContent Filtering Server\nProxy Gateway\nNetwork User\nMail Content Filtering\n FIGURE 42.4 Content-filtering deployments. \n" }, { "page_number": 765, "text": "PART | VII Advanced Security\n732\nthrough its site. The software is based on a pattern rec-\nognition engine. 7 Content distribution networks that are \nattached to the Internet, such as Akamai, Limelight, \nPanther Express, EdgeCast, CDNetworks, Level 3, and \nInternap, manage the content in their networks and will \nnot distribute offensive images (see Figure 42.6 ). Many \nwould also find it unacceptable that an ISP, whether \nby law or by the ISP’s own choice, should deploy such \nsoftware without allowing users to disable the filtering \nfor their own connections. In addition, some argue that \nusing content-control software may violate Sections 13 \nand 17 of the Convention on the Rights of the Child. 8 \n Proxy Gateway-Based Content Control \n Companies that look to implement content-control solu-\ntions (see Figure 42.7 ) are also implementing security \nappliances at specialized network gateways, such as \nthose from proprietary vendors like St. 
Bernard Software, \nBloxx, CensorNet, Kerio, Untangle, and Winroute \nFirewall and content-filtering Web proxies like SafeSquid. \nGateway-based content control solutions may be more \ndifficult to bypass than desktop software solutions since \nthey are less easily removed or disabled by the local user. \n 5. CATEGORIES \n By various counts there were approximately 150 million \nWeb sites in 2008. Most content-filtering companies have \nrated between 14 and 50 million Web sites. Facebook and \nWWW Content Server\nInternet\nISP\nCarrier\nMobile User\nSmart Phone\nUnified Threat Appliance: Firewall, CF, AV, AS VPN\nContent Filtering Server\nProxy Gateway\nNetwork User\nMail Content Filtering\n FIGURE 42.5 ISP-based solutions. \n 7 Google Block Child Porn Content in Searches, http://en.wikipedia.\norg/wiki/Child_pornography#cite_note-44#cite_note-44 . \n 8 Convention on the Rights of the Child, www.unhchr.ch/html/menu3/\nb/k2crc.htm . \n" }, { "page_number": 766, "text": "Chapter | 42 Content Filtering\n733\nother social networking sites are counted in the aggre-\ngate count but are generally not rated by content-filtering \ncompanies. \n These rankings are rated by content-filtering compa-\nnies into between 10 and 90 various categories. Typical \nsubjects of content-control software include: \n ● Illegal content with reference to the legal domain \nbeing served by that company. \n ● Promote, enable, or discuss system cracking, software \npiracy, criminal skills, or other potentially illegal acts. \n ● Sexually explicit content, such as pornography, \nerotica, nudity, and nonerotic discussions of sexual \ntopics such as sexuality or sex. Promote, enable, \nor discuss promiscuity, lesbian, gay, bisexual, \ntranssexual, sexual activity outside of marriage, or \nother lifestyles seen to be immoral or alternative. \n ● Contain violence or other forms of graphic or \n “ extreme ” content. \n ● Promote, enable, or discuss bigotry or hate speech. \n ● Promote, enable, or discuss gambling, recreational \ndrug use, alcohol, or other activities frequently \nconsidered to be vice. \n ● Are unlikely to be related to a student’s studies, an \nemployee’s job function, or other tasks for which the \ncomputer in question may be intended, especially if they \nare likely to involve heavy bandwidth consumption. \n ● Are contrary to the interests of the authority in \nquestion, such as Web sites promoting organized \nlabor or criticizing a particular company or industry. \n ● Promote or discuss politics, religion, health or other \ntopics. \n ● Prevent people who are hypochondriacs from \nviewing Web sites related to health concerns. \nWWW Content Server\nInternet\nISP\nCarrier\nMobile User\nSmart Phone\nUnified Threat Appliance: Firewall, CF, AV, AS VPN\nContent Filtering Server\nProxy Gateway\nNetwork User\nMail Content Filtering\n FIGURE 42.6 Internet cloud-based solutions. \n" }, { "page_number": 767, "text": "PART | VII Advanced Security\n734\n ● Include social networking opportunities that might \nexpose children to predators. \n ● Potentially liable: drug abuse, folklore, hacking, \nillegal or unethical, marijuana, occult, phishing, \nplagiarism, proxy avoidance, racism and hate, \nviolence, Web translation. \n ● Controversial: abortion, adult materials, advocacy \ngroups/organizations, alcohol, extremist groups, \ngambling, lingerie and swimwear, nudity, \npornography, sex education, sport hunting and war \ngames, tasteless, tobacco, weapons. 
\n ● Potentially nonproductive: Advertising, brokerage \nand trading, digital postcards, freeware, downloads, \ngames, instant messaging, newsgroups and message \nboards, Web chat, Web-based email. \n ● Potentially bandwidth consuming: Internet radio \nand TV, Internet telephony, multimedia \ndownload, peer-to-peer file sharing, personal \nstorage. \n ● Potential security risks: Malware, spyware. \n ● General interest: Arts and entertainment, child \neducation, culture, education, finance and banking, \ngeneral organizations, health and wellness, \nhomosexuality, job search, medicine, news and \nmedia, personal relationships, personal vehicles, \npersonal Web sites, political organizations, real \nestate, reference, religion, restaurants and dining, \nsearch engines, shopping and auction, society and \nlifestyles, sports, travel. \n ● Business oriented: Armed forces, business, \ngovernment and legal organizations, information \ntechnology, information/computer security. \n ● Others: Content servers, dynamic content, \nmiscellaneous, secure Web sites, Web hosting. \nWWW Content Server\nInternet\nISP\nCarrier\nMobile User\nSmart Phone\nUnified Threat Appliance: Firewall, CF, AV, AS VPN\nContent Filtering Server\nProxy Gateway\nNetwork User\nMail Content Filtering\n FIGURE 42.7 Proxy gateway-based content control. \n" }, { "page_number": 768, "text": "Chapter | 42 Content Filtering\n735\n 6. LEGAL ISSUES \n There are many legal issues to consider in content filter-\ning. And once you think you have a handle on your par-\nticular organizational requirements and have ensured that \nthey are legal, a court will make a ruling that changes \nthe game. A number of Internet technology issues and \nrelated challenges have not yet been fully addressed by \nlegislatures or courts and are subject to a wide range of \ninterpretation. For example, virtual child pornography, \npornographic images and text delivered by SMS mes-\nsages, sexual age-play in virtual game worlds, the soft \nporn Manga genre of Lolicon and Rorikon are all chal-\nlenges to current laws and issues that will need to be \naddressed as our society comes to grips with the Internet \nand what is “ out there. ” The following discussion centers \non the most relevant laws in the content-filtering space. \n Federal Law: ECPA \n The Electronic Communications Privacy Act (ECPA) 9 \nallows companies to monitor employees ’ communications \nwhen one of three provisions are met: one of the parties \nhas given consent, there is a legitimate business reason, or \nthe company needs to protect itself. \n If your company has no content access policy in \nplace, an employee could argue that he or she had a rea-\nsonable expectation of privacy. However, if the company \nhas implemented a written policy whereby employees are \ninformed about the possibility of Web site monitoring and \nwarned that they should not have an expectation of privacy, \nthe company is protected from this type of privacy claim. \n CIPA: The Children’s Internet \nProtection Act \n CIPA provisions have both the “ carrot and the stick. ” \nThe U.S. government will pay you for equipment to \naccess the Internet, but you have to play by its rules to \nget the money! Having been rebuffed by the courts in its \nprevious efforts to protect children by regulating speech \non the Internet, Congress took a new approach with \nthe Children’s Internet Protection Act (CIPA). See, for \nexample Reno v. ACLU , 521 U.S. 
844 (1997) (overturn-\ning the Communications Decency Act of 1996 on First \nAmendment grounds). With CIPA, Congress sought to \ncondition federal funding for schools and libraries on the \ninstallation of filtering software on Internet-ready com-\nputers to block objectionable content. \n CIPA is a federal law enacted by Congress in \nDecember 2000 to address concerns about access to offen-\nsive content over the Internet on school and library com-\nputers. CIPA imposes certain types of requirements on any \nschool or library that receives funding for Internet access \nor internal connections from the E-rate program, which \nmakes certain communications technology more affordable \nfor eligible schools and libraries. In early 2001, the FCC \nissued rules implementing CIPA. CIPA made amendments \nto three federal funding programs: (1) the Elementary and \nSecondary Education Act of 1965, which provides aid to \nelementary and secondary schools; (2) the Library Services \nTechnology Act, which provides grants to states for sup-\nport of libraries; and (3) the E-Rate Program, under the \nCommunications Act of 1934, which provides Internet and \ntelecommunications subsidies to schools and libraries. The \nfollowing are what CIPA requires 10 : \n ● Schools and libraries subject to CIPA may not \nreceive the discounts offered by the E-Rate Program \nunless they certify that they have an Internet safety \npolicy and technology protection measures in place. \nAn Internet safety policy must include technology \nprotection measures to block or filter Internet access \nto pictures that (a) are obscene, (b) are child pornog-\nraphy, or (c) are harmful to minors (for computers \nthat are accessed by minors). \n ● Schools subject to CIPA are required to adopt and \nenforce a policy to monitor online activities of minors. \n ● Schools and libraries subject to CIPA are required to \nadopt and implement a policy addressing: (a) access \nby minors to inappropriate matter on the Internet; \n(b) the safety and security of minors when using \nelectronic mail, chat rooms, and other forms of direct \nelectronic communications; (c) unauthorized access, \nincluding so-called “ hacking, ” and other unlawful \nactivities by minors online; (d) unauthorized disclo-\nsure, use, and dissemination of personal information \nregarding minors; and (e) restricting minors ’ access \nto materials harmful to them. \n Schools and libraries are required to certify that they \nhave their safety policies and technology in place before \nreceiving E-Rate funding, as follows: \n ● CIPA does not affect E-Rate funding for schools and \nlibraries receiving discounts only for telecommuni-\ncations, such as telephone service. \n 9 U.S. Code: Wire and Electronic Communications Interception and \nInterception of Oral Communications, www.law.cornell.edu/uscode/18/\nusc_sup_01_18_10_I_20_119.html . \n 10 What CIPA Requires, www.fcc.gov/cgb/consumerfacts/cipa.html . \n" }, { "page_number": 769, "text": "PART | VII Advanced Security\n736\n ● An authorized person may disable the blocking \nor filtering measure during any use by an adult to \nenable access for bona fide research or other lawful \npurposes. \n ● CIPA does not require the tracking of Internet use by \nminors or adults. 
\n “ Harmful to minors ” is defined under the Act as: \n any picture, image, graphic image file, or other visual depic-\ntion that (i) taken as a whole and with respect to minors, \nappeals to a prurient interest in nudity, sex, or excretion; \n(ii) depicts, describes, or represents, in a patently offensive \nway with respect to what is suitable for minors, an actual or \nsimulated sexual act or sexual contact, actual or simulated \nnormal or perverted sexual acts, or a lewd exhibition of the \ngenitals; and (iii) taken as a whole, lacks serious literary, \nartistic, political, or scientific value as to minors. \n Court Rulings: CIPA From Internet Law \nTreatise \n On June 23, 2003, the U.S. Supreme Court reversed a \nDistrict Court’s holding in United States v. American \nLibrary Ass’n , 539 U.S. 194 (2003). 11 It held that the use of \nInternet filtering software does not violate library patrons ’ \nFirst Amendment rights. Therefore, CIPA is constitutional \nand a valid exercise of Congress’s spending power. \n The Court held, in a plurality opinion, that libraries ’ \nfiltering of Internet material should be subject to a rational \nbasis review, not strict scrutiny. It explained that, because \ncollective decisions regarding printed material have gen-\nerally only been subject to a rational basis review, deci-\nsions regarding which Web sites to block should likewise \nbe subject to the same test. It reasoned that libraries are no \nless entitled to make content-based judgments about their \ncollections when they collect material from the Internet \nthan when they collect material from any other source. \n Further, it reasoned that heightened judicial scrutiny \nis also inappropriate because “ Internet access in public \nlibraries is neither a ‘ traditional ’ nor a ‘ designated ’ public \nforum ” ( Id . at 2304). Therefore, although filtering soft-\nware may overblock constitutionally-protected speech \nand a less restrictive alternative may exist, because the \ngovernment is not required to use the least restrictive \nmeans under a rational basis review, CIPA is nonetheless \nconstitutional. \n Moreover, the Court held that Congress did not exceed \nits spending power by enacting CIPA because, when the \ngovernment uses public funds to establish a program, it is \nentitled to define its limits. By denying federal funding, \nthe government is not penalizing libraries that refuse to \nfilter the Internet, or denying their rights to provide their \npatrons with unfiltered Internet access. Rather, it “ simply \nreflects Congress ’ decision not to subsidize their doing \nso ” ( Id . at 2308). 12 \n The Trump Card of Content Filtering: The \n “ National Security Letter ” \n The FBI, CIA, or DoD can issue an administrative sub-\npoena to ISPs for Web site access logs, records, and con-\nnection logs for various individuals. Along with a gag \norder, this letter comes with no judicial oversight and does \nnot require probable cause. In 2001, Section 505 of the \nPATRIOT Act powers were expanded for the use of the \nNSL. There are many contentious issues with these laws, \nand the Electronic Frontier Foundation and the American \nCivil Liberties Union (ACLU) are battling our govern-\nment to prevent their expansion and open interpretation. 13 \n ISP Content Filtering Might Be a “ Five-Year \nFelony ” \n There are issues looming for ISPs that monitor their net-\nworks for copyright infringement or bandwidth hogs, \nsince they may be committing felonies by breaking federal \nwiretapping laws. 
University of Colorado law professor \nPaul Ohm, a former federal computer crimes prosecutor, \nargues that ISPs such as Comcast, AT & T, and Charter \nCommunications that are or are contemplating ways to \nthrottle bandwidth, police for copyright violations, and \nserve targeted ads by examining their customers ’ Internet \npackets are putting themselves in criminal and civil \njeopardy. 14 \n State of Texas: An Example of an Enhanced \nContent-Filtering Law \n Texas state law requires all Texas ISPs to link to block-\ning and filtering software sites. In 1997, during the 75th \nRegular Session of the Texas Legislature, House Bill \n1300 was passed. HB 1300 requires ISPs to make a link \n 13 ACLU Sues Over Internet Privacy, Challenges ISPs Being Forced \nto Secretly Turn Over Customer Data, www.cbsnews.com/stories/\n2004/04/29/terror/main614638.shtml . \n 14 Forer Prosecutor: ISP Content Filtering Might be a ‘ Five Year \nFelony, ’ http://blog.wired.com/27bstroke6/2008/05/isp-content-f-1.html. \n 11 CIPA \nand \nE-Rate \nRuling, \n www.cdt.org/speech/cipa/030623\ndecision.pdf . \n 12 IETF Fights CIPA, http://ilt.eff.org/index.php/Speech :_CIPA. \n" }, { "page_number": 770, "text": "Chapter | 42 Content Filtering\n737\navailable on their first Web page that leads to Internet \n “ censorware ” software, also known as “ automatic ” block-\ning and screening software. The two most important por-\ntions of the law are shown here: \n Sec. 35.102. SOFTWARE OR SERVICES THAT RESTRICT \nACCESS TO CERTAIN MATERIAL ON INTERNET. \n (a) A person who provides an interactive computer service \nto another person for a fee shall provide free of charge to \neach subscriber of the service in this state a link leading \nto fully functional shareware, freeware, or demonstration \nversions of software or to a service that, for at least one \noperating system, enables the subscriber to automatically \nblock or screen material on the Internet. \n (b) A provider is considered to be in compliance with this \nsection if the provider places, on the provider’s first page of \nworld wide Web text information accessible to a subscriber, \na link leading to the software or a service described by \nSubsection (a). The identity of the link or other on-screen \ndepiction of the link must appear set out from surrounding \nwritten or graphical material so as to be conspicuous. \n Sec. 35.103. CIVIL PENALTY. \n (a) A person is liable to the state for a civil penalty of \n$2,000 for each day on which the person provides an \ninteractive computer service for a fee but fails to provide a \nlink to software or a service as required by Section 35.102. \nThe aggregate civil penalty may not exceed $60,000 . 15 \n (b) The attorney general may institute a suit to recover the \ncivil penalty. Before filing suit, the attorney general shall \ngive the person notice of the person’s noncompliance and \nliability for a civil penalty. If the person complies with \nthe requirements of Section 35.102 not later than the 30th \nday after the date of the notice, the violation is considered \ncured and the person is not liable for the civil penalty . \n The following are international laws involving con-\ntent filtering: \n ● UK: Data Protection Act \n ● EU: Safer Internet Action Plan \n ● Many other countries have also enacted legislation \n Additionally, the United Kingdom and some other \nEuropean countries have data retention policies. Under \nthese policies ISPs and carriers are obliged to retain a \nrecord of all their clients ’ Web browsing. 
The data reten-\ntion period varies from six months to three years. In the \nU.K. this retained data is available to a very wide range of \npublic bodies, including the police and security services. \nAnyone who operates a proxy service of any kind in one \nof these countries needs to be aware that a record is kept \nof all Web browsing through their computers. On March \n15, 2006, the European Union adopted Directive 2006/24/\nEC, which requires all member states to introduce statu-\ntory data retention. The United States does not have a stat-\nutory data retention specifically targeting information in \nthis area, though such provisions are under consideration. \n 7. ISSUES AND PROBLEMS WITH \nCONTENT FILTERING \n By far the biggest challenge to content-filtering deploy-\nments comes from those users who intend to bypass \nand gain access to their intended Web content. For con-\ntent-filtering technologies to work well, developers have \nto understand all the methods that users will devise and \nemploy to circumvent the content-filtering systems, inad-\nvertently or otherwise. \n Bypass and Circumvention \n Some controls may be bypassed successfully using alterna-\ntive protocols such as FTP, telnet, or HTTPS, by conduct-\ning searches in a different language, or by using a proxy \nserver or a circumventor such as Psiphon or UltraSurf. \nCached Web pages returned by Google or other searches \nmay bypass some controls as well. Web syndication serv-\nices may provide alternate paths to access content. \n The primary circumvention method surfers use is \nproxies. There are basically five types of proxies: \n ● Client-based proxies \n ● Open proxies \n ● HTTP Web-based proxies \n ● Secure public and private Web-based proxies \n ● Secure anonymous Web-based proxies \n Client-Based Proxies \n These are programs that users download and run on their \ncomputers. Many of these programs are run as “ portable \napplications, ” which means they don’t require any installa-\ntion or elevated privileges, so they can be run from a USB \nthumb drive by a user with limited privileges. The three \nmost widely used include TorPark, which uses Firefox \nand the XeroBank network, Google Web Accelerator, and \nMcAfee’s Anonymizer. \n These programs create a local proxy server using a \nnonstandard port. Then they configure the browser to use \nthe local proxy by changing its proxy server settings to \nthe form localhost:port \u0003 127.0.0.1:9777, for example. \n 15 Texas ISP Laws can be found here: www.tlc.state.tx.us/legal/\nb & ccode/b & c_title10/80C258(3).HTML . \n" }, { "page_number": 771, "text": "PART | VII Advanced Security\n738\nWeb content requests are then tunneled through the proxy \nprogram to an appropriate proxy server using a custom \nprotocol, which is typically encrypted. The content-filter-\ning gateway doesn’t see the browser-to-local proxy traffic, \nbecause it flies under its content inspection radar. All the \ngateway may see is the custom protocol that encapsulates \nthe user’s Web request. \n The network of proxy servers is either static, as is \nthe case with commercial programs such as McAfee’s \nAnonymizer and Google Web Accelerator, or it’s private \nand dynamic, as is the case with Psiphon and XeroBank. \nIn both cases, the proxy server network is typically built \nby individuals who volunteer their home computers for \nuse by installing the corresponding proxy server software. 
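The local listener half of such a client-based proxy is simple. What follows is a stripped-down Python sketch, with invented addresses, of a forwarder that accepts browser connections on 127.0.0.1:9777 (the example port mentioned above) and relays the bytes to an upstream proxy node. Real tools add their own encrypted protocol on the upstream hop, which is exactly why the filtering gateway sees only opaque traffic to the proxy network rather than the user's Web requests.

    import socket
    import threading

    LOCAL_ADDR = ("127.0.0.1", 9777)        # where the browser is pointed
    REMOTE_PROXY = ("203.0.113.10", 443)    # hypothetical upstream proxy node

    def pipe(src, dst):
        # Copy bytes one way until either side closes.
        try:
            while data := src.recv(4096):
                dst.sendall(data)
        finally:
            src.close()
            dst.close()

    def serve():
        listener = socket.socket()
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(LOCAL_ADDR)
        listener.listen()
        while True:
            browser, _ = listener.accept()
            upstream = socket.create_connection(REMOTE_PROXY)
            # A real circumvention client wraps this hop in its own encrypted
            # protocol; the gateway then sees only traffic to the proxy node.
            threading.Thread(target=pipe, args=(browser, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, browser), daemon=True).start()

    if __name__ == "__main__":
        serve()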
\n Currently the UltraSurf proxy client is the most \nadvanced tool available to circumvent gateway secu-\nrity Web content filters. UltraSurf was developed by an \norganization called UltraReach, 16 which was founded by \na group of Chinese political dissidents. UltraReach devel-\nopers continue to actively maintain and update UltraSurf. \nThey designed UltraSurf specifically to allow Chinese \ncitizens to circumvent the Chinese government’s efforts to \nrestrict Internet use in China. The UltraSurf application is \na very sophisticated piece of software. It uses a distributed \nnetwork of proxy servers, installed and maintained by \nvolunteers around the world, much like a peer-to-peer net-\nwork. It uses multiple schemes to locate the proxy servers \nin its network, spanning different protocols. It uses port \nand protocol tunneling to trick security devices into ignor-\ning it or mishandling it. It also uses encryption and misdi-\nrection to thwart efforts to investigate how it works. \n Ultrasurf is free and requires no registration, which \nmakes it widely distributable. It requires no installation \nand can be run by a user who doesn’t have administra-\ntive permissions to his computer, which makes it very \nportable. It can easily be carried around on a USB thumb \ndrive and run from there. \n Another formidable bypass application is Psiphon. 17 \nThis is a distributed “ personal trust ” style Web proxy \ndesigned to help Internet users affected by Internet censor-\nship securely bypass content-filtering systems typically set \nup by governments. Psiphon was developed by the Citizen \nLab at the University of Toronto, building on previous gen-\nerations of Web proxy software systems. Psiphon’s recom-\nmended use is among private, trusted relationships that \nspan censored and uncensored locations (such as those that \nexist among friends and family members, for example) \nrather than as an open public proxy. Traffic between clients \nand servers in the Psiphon system is encrypted using the \nHTTPS protocol. \n According to Nart Villeneuve, director of technical \nresearch at the Citizen Lab, “ The idea is to get them to \ninstall this on their computer, and then deliver the loca-\ntion of that circumventor, to people in filtered countries \nby the means they know to be the most secure. What \nwe’re trying to build is a network of trust among people \nwho know each other, rather than a large tech network \nthat people can just tap into. ” 18 \n Psiphon takes a different approach to censorship cir-\ncumvention than other tools used for such purposes, \nsuch as The Onion Router, aka Tor. Psiphon requires no \ndownload on the client side and thus offers ease of use \nfor the end user. But unlike Tor, Psiphon is not an ano-\nnymizer; the server logs all the client’s surfing history. \nPsiphon differs from previous approaches in that the users \nthemselves have access to server software. The develop-\ners of Psiphon have provided the user with a Microsoft \nWindows platform executable for the Psiphon server. If \nthe server software attains a high level of use, this would \nresult in a greater number of servers being online. A great \nnumber of servers online would make the task of attack-\ning the overall user base more difficult for those hostile to \nuse of the Psiphon proxy than attacking a few centralized \nservers, because each individual Web proxy would have \nto be disabled one by one. 
\n There are inherent security risks in approaches such \nas Psiphon, specifically those presented by logging by the \nservices themselves. The real-world risk of log keeping \nwas illustrated by the turnover of the emails of Li Zhi to \nthe Chinese government by Yahoo. Li was subsequently \narrested, convicted, and sent to jail for eight years. 19 Some \nhave raised concerns that the IP addresses and the Psiphon \nsoftware download logs of Psiphon users could fall into \nthe wrong hands if the Citizen Lab computers were to get \nhacked or otherwise compromised. \n These tools are a double-edged sword: They are incred-\nibly powerful tools for allowing political dissidents around \nthe world to evade oppression, but they also provide end \nusers on private, filtered networks with a way to access \nthe Internet that violates acceptable use policies and intro-\nduces liability to an organization. The best way to block \nthese types of circumventing technologies is by using \npacket inspection and blocking based on signatures of this \napplication setting up or in process. \n 19 Yahoo may have helped jail another Chinese user, www.infoworld.\ncom/article/06/02/09/75208_HNyahoohelpedjail_1.html . \n 16 UltraReach Information can be found at www.ultrareach.com/ . \n 17 Psiphon: http://psiphon.civisec.org/ . \n 18 Psiphon Web Proxy Information can be found here: http://en.\nwikipedia.org/wiki/Psiphon#cite_note-2#cite_note-2 . \n" }, { "page_number": 772, "text": "Chapter | 42 Content Filtering\n739\n Open Proxies \n Open proxies are accessed by changing the configuration \nof your browser. Your browser can be modified to send \nall traffic to a proxy at a specific IP and port. Usually \nthe IT department builds an OS image that makes this \nconfiguration as a way of facilitating corporate installed \nWeb filtering. To check it out on Internet Explorer, go to \nTools | Internet Options | Connections | LAN Settings. \nThis is where the IT department, school, or library nor-\nmally locks you into using their proxy, or if you have \nadministrative privileges you can configure your browser \nto use an open proxy. \n When your PC is configured to use an open proxy, the \nbrowser simply sends all its Web content requests to the \nproxy, as opposed to resolving the URL to an IP and send-\ning the request directly to the destination Web site. The open \nproxy then does the DNS name resolution, connects to the \ndestination Web site, and returns that content to the browser. \n Blocking this behavior is rather straightforward. When \nthe end user attempts to access a blocked site through an \nopen proxy, the name of the site is encoded right there in \nthe request, and the content filter works just as it would if \nthe browser wasn’t configured to use an open proxy. \n HTTP Web-Based Proxies (Public and \nPrivate) \n These are Web sites that are purpose-built to proxy Web traf-\nfic. To use them, the user goes to the Web page of the proxy \nWeb site, then types the desired URL into a text box on the \nHTML page that site serves. The browser makes requests to \nthe proxy site, and the proxy site returns the content of a dif-\nferent site, the one the user actually wants to see. The con-\ntent inspection gateway only sees traffic to the proxy site. \n PHProxy and CGIProxy are the most well-known \ndevelopment efforts used for this purpose. Peacefire’s \nCircumventor is an example of an application using these \ntools kits. 
The HTML code request and response from the proxy is specifically engineered to evade filtering. The more difficult it is to reverse engineer the URL of the proxied site from the HTTP traffic flowing between the browser and the proxy, the more successful this method of circumvention is.

The biggest benefit of these types of proxies for the circumventer is that they are simple to install on a home or office computer. A nontechnical user can do it in a matter of minutes. Once it is installed, the user has a Web-based proxy running on his home computer, which is presumably not filtered. He can then access his home computer from a filtered network (such as an office or school network) using just a browser and circumvent your carefully crafted Web filtering policy.

Secure Public Web-Based Proxies

These proxies are basically the same as the HTTP Web-based proxies except that they use the SSL-encrypted HTTPS protocol. There are two types of HTTPS proxies: public and anonymous. Public HTTPS proxies are built by organizations such as Proxy.org and Peacefire and are publicized via mailing lists and word of mouth. They intentionally look like completely legitimate sites, with properly constructed certificates that have been issued by trusted certificate authorities like VeriSign. These sites are blocked by IP address, since the content-filtering device cannot decrypt and look inside HTTPS packets.

The other type, the secure anonymous Web-based proxy, is more difficult to locate and block and makes the game much more interesting. In this case, the user takes the same proxy software package used by the public proxy sites and installs it on his home computer. The package generates a certificate and listens on HTTPS. Now the user has a secure Web-based proxy running on his home computer. Because these sites are anonymous, in the sense that nobody links to them, they cannot be found by classic site discovery techniques such as spidering links on known sites; building an IP list and blocking them is problematic and must be done with certificate examination and other heuristic techniques.

These proxy types combine the problems we've described, and because they are encrypted using HTTPS there is one more huge risk in using these proxy portals. There are nefarious individuals on the Internet who build proxy portals like these and publicize them through mailing lists and other sites. Often they are built with criminal intent, as a way to steal user credentials. The proxy operator can see, capture, and log everything you are sending and receiving, even HTTPS. Use caution if you dare to explore here.

Process Killing

Some of the more poorly designed PC-installed content-filtering applications and agents can be shut down by killing their processes: for example, in Microsoft Windows through the Windows Task Manager or in Mac OS using Activity Monitor.

Remote PC Control Applications

Windows RPC, VNC, Citrix GoToMyPC, BeAnywhere, WallCooler, I'm InTouch, eBLVD, BeamYourScreen, PCMobilizr, and Cisco's WebEx are examples. Some of these applications are business critical and will not be blocked by corporately deployed content-filtering systems, and their intentional misuse is a serious risk.
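Defenders typically counter the process-killing weakness described above by pairing the filtering agent with a watchdog that restarts it and raises an alert when its process disappears. The following is a minimal Python sketch; the agent path is hypothetical, and a production agent would also protect the watchdog itself, for example by running it as a service the local user cannot stop.

    import subprocess
    import time

    AGENT_CMD = ["/usr/local/bin/filter-agent"]   # hypothetical filtering agent binary

    def watchdog():
        proc = subprocess.Popen(AGENT_CMD)
        while True:
            if proc.poll() is not None:           # agent exited or was killed
                # Log the event and restart; a real agent would also alert the
                # management console about possible tampering.
                print("filter agent died, restarting")
                proc = subprocess.Popen(AGENT_CMD)
            time.sleep(5)

    if __name__ == "__main__":
        watchdog()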
Overblocking and Underblocking

Overblocking occurs when the content-filtering technology blocks legitimate Web sites because of tuning problems, a missing filter update, or another technology limitation. Underblocking occurs when a deployed content filter fails to block a Web site in a targeted category; the user sees the content and a policy violation occurs.

Blacklist and Whitelist Determination

There are countless conversations going on between HR and IT departments every day trying to appropriately tune the content filter's blacklist. Some strict companies, schools, and parents allow surfing only to sites that are on a whitelist.

Casual Surfing Mistake

A friend sends a link in email, a popup window offers up something interesting, or you mistype a Web site address and get a typo-squatter porn site. Any of these can land you on a Web site that is not approved by your content-filtering system. You had better have a way to deal with this reality.

Getting the List Updated

Most content-filtering companies send out very frequent updates. These must be accessed, downloaded, and incorporated. There has to be an automatic function for this, or your IT administrators will avoid the task and your lists will quickly become outdated.

Time-of-Day Policy Changing

Benevolent companies sometimes allow surfing before or after business hours. Setting this up and managing the policy enforcement is an HR and IT challenge.

Override Authorization Methods

Many content filters have an option that allows authorized people to bypass the content filter. This is especially useful in environments where the computer is being supervised and the content filter is aggressively blocking Web sites that need to be accessed. Usually the company owners and executives claim this privilege.

Hide Content in "Noise" or Use Steganography

The most devious and technical approach to getting information past filters is to hide it in pictures or video or within normal communication noise.

Nonrepudiation: Smart Cards, ID Cards for Access

One of the strongest content access controls is to know, really know, who is browsing. If users do not have anonymity and know that censors are controlling the content, the risks should be lower.

Warn and Allow Methods

This is a method of allowing the user to go to the desired site, but only after a warning that the site may not meet HR or IT policy. Usually this type of warning is enough to make the surfer stop in her tracks.

Integration with Spam Filtering Tools

Most new content-filtering technology has a related component that inspects mail and coordinates policy with the content-filtering gateway.

Detect Spyware and Malware in the HTTP Payload

Most new content-filtering technology goes beyond just blocking offensive content. The same technology looks for and blocks malware and spyware in HTTP data. Don't select an enterprise product without this feature.

Integration with Directory Servers

The easiest way to manage content filtering that requires granular user-level control is to set up groups within directory servers. For example, Trusted User Groups, Executive User Groups, Owner User Groups, and Restricted User Groups can each have different browsing behavior allowed or disallowed.
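As an illustration of how directory-server groups and time-of-day windows can drive a per-request decision, here is a minimal Python sketch. The group names, category labels, and business-hours window are assumptions made up for the example; in practice group membership would come from LDAP or Active Directory and the category from the rating database.

    from datetime import datetime

    # Hypothetical group-to-policy mapping; groups would normally come from LDAP/AD.
    GROUP_BLOCKED_CATEGORIES = {
        "Restricted User Group": {"adult", "gambling", "streaming", "webmail"},
        "Trusted User Group": {"adult", "gambling"},
        "Executive User Group": set(),              # override: nothing blocked
    }
    AFTER_HOURS_UNBLOCKED = {"streaming", "webmail"}  # relaxed outside 8:00-18:00

    def is_allowed(group, category, now=None):
        now = now or datetime.now()
        blocked = set(GROUP_BLOCKED_CATEGORIES.get(group, {"adult", "gambling"}))
        if not 8 <= now.hour < 18:                  # time-of-day policy changing
            blocked -= AFTER_HOURS_UNBLOCKED
        return category not in blocked

    print(is_allowed("Restricted User Group", "webmail",
                     datetime(2009, 1, 5, 20, 0)))     # True: allowed after hours
    print(is_allowed("Trusted User Group", "gambling"))  # False: always blocked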
\n" }, { "page_number": 774, "text": "Chapter | 42 Content Filtering\n741\n Language Support \n The content-filtering gateway must have support for \nmultiple languages or the surfer will just find Spanish \nporn sites, for example. Typically a global ratings data-\nbase will support multiple languages. \n Financial Considerations Are Important \n Don’t forget that a content filter project includes some \nof these items when calculating total cost of ownership \nfor ROI payback: \n ● Licensing costs: Per user or per gateway. \n ● Servers: How many do you need to buy; how about \nhigh availability? \n ● Appliances: How many do you need to buy, how \nabout high availability? \n ● Installation: Can you do it yourself or do you need a \nconsultant or the manufacturer to help? \n ● Maintenance: Support and updates are necessary to \nkeep your solution current. \n ● Ongoing administration from your IT staff. \n ● Patching, scanning, remediation by your IT staff. \n ● Some content filters need add-on server and license \ncosts, for example: \n – ISA Server \n – MS Server \n – Logging Server \n – Analyzer Server \n – AV Server \n – Firewall \n ● Some content-filtering systems require integration \ncosts with third-party enforcement points such as a \nfirewall. \n Scalability and Usability \n Critical design issues must be addressed. So, how are \nout-of-cache queries handled (see Figure 42.8 ). \n We’ll use the Fortinet Unified Threat Appliance as \nan example. When a match is not found in the FortiGuard \ncache, a request is sent to the Fortinet Distribution Network \n(FDN) in parallel with the request sent to the Web server to \nretrieve the Web pages. The time to query the FDN for URL \nrating is often negligible and far less than the time to retrieve \nthe Webpage because: \n ● FDN servers are strategically deployed close to \nthe major backbones and the roundtrip time from a \nFortiGate unit to the FDN and back is usually less \nthan the roundtrip time from the FortiGate unit to the \nWeb site and back. \n ● The latency of responding to a query is less than \n1ms, even when an FDN server is operating at its \nmaximum capacity. This compares to generally \nhundreds of milliseconds to even several seconds to \nretrieve a Web page because of normal network and \nWeb server latency. \nFortiGuard\nValidation Engine\nWeb Filtering\nPolicy\nWeb Filtering Engine\nWeb Rating\nCache\nFiltering\nPolicy\nRating\nValidate\nCustomer\nDatabase\nWeb Ratings\nDatabase\n FIGURE 42.8 Cache Rating and Filtering Policy Engine. 20 \n 20 Image Courtesy of Fortinet For a complete review of their wide \nrange of Fortinet UTM Appliance, see www.fortinet.com . \n" }, { "page_number": 775, "text": "PART | VII Advanced Security\n742\n ● The average payload of a FortiGuard URL query \npacket is less than 256 bytes, and one round trip \nis enough to retrieve the rating. The average size \nof a Web page is 10 Kbytes and usually requires a \nminimum of three round trips. See the following \nprocedural steps with regard to the FortiGuard URL \nquery packet, as shown in Figure 42.9 : \n 1. User requests a URL. \n 2. If the rating for the URL is already cached in \nthe FortiGate unit, it is immediately compared \nwith the policy for the user. If the site is allowed, \nthe page is requested (3a) and the response is \nretrieved (4b). \n 3. If the URL rating is not in the FortiGate cache, \nthe page is requested (3a) and a rating request is \nmade simultaneously to the FortiGuard Rating \nServer (3b). \n 4. 
When the rating response is received by the \nFortiGate unit (4a), it is compared with the \nrequestor’s policy (2). The response from the \nWeb site (4b) is queued by the FortiGate unit if \nnecessary until the rating is received. \n 5. If the policy is to allow the page, the Web site \nresponse (4b) is passed to the requestor (5). \nOtherwise, a user-definable “ blocked ” message \nis sent to the requestor and the event is logged in \nthe content-filtering log. \n Performance Issues \n Testing and measuring Web content-filtering perform-\nance is a challenge because constantly changing network \nconditions have the greatest effect on performance. Every \nmillisecond of delay for content inspection is multiplied \nby the entire user base. Poorly designed or constrained \nsystems will affect Web access business efficiency. \n Reporting Is a Critical Requirement \n It’s one thing to block Web content, but for businesses, \nschools, and parents it is critical to see results and issues \nthat reports will illustrate. Real-time visibility to Internet \nusage, historical trending of Web traffic, and detailed \nforensic reporting help gauge user intent, help in enforc-\ning Internet use policies, and enable retention of archived \nrecords to satisfy legal requirements and aid in regula-\ntory compliance. \n Typically Web filtering reports are generated and organ-\nized according to category and groups and users. There are \nper-group statistics in addition to the per-category statis-\ntics. Many reports should be supported including: \n ● Management reports . The most frequently used \nreports by customers or those of most interest to \nmanagement. These include category, destination, \ndisposition, group, risk, and user. \n ● Summary reports . Overview of usage with daily and \ngrand totals. Summary Reports are used to view \nInternet usage trends. \n ● User detail reports . The most complete picture \nof Internet usage. Detail into one user’s online \nactivities. \n Additionally this information is logged for reporting \npurposes: \n ● Source IP \n ● Destination IP \n ● URL \n ● Policy Action (allow, block, monitor) \n ● Content Category \n Bandwidth Usage \n Content-filtering systems should have the ability to display \ncurrent protocol usage and report on patterns. Bandwidth \nsavings give the most rapid ROI and need to be measur-\nable. Figure 42.10 shows a bandwidth monitoring report \nfrom Fortinet. \n Precision Percentage and Recall \n The accuracy and efficacy of content-filtering systems are \nmeasured by precision and recall. Precision is the percent-\nage of the number of relevant Web sites retrieved com-\npared to the total number of irrelevant and relevant Web \nsites retrieved. Recall is the percentage of the number of \nrelevant records retrieved compared to the total number \nof relevant records in the database. There is an inverse \nrelationship between these two metrics that cannot be \n4b\n3a\n5\n1\n3b\n4a\nFortiGuard Web Content Filtering Service\nEnd Customer\nRequested\nWeb Site\nPublic\nInternet\nFortiGuard\nRating\nServer\n FIGURE 42.9 FortiGuard URL query packet.*** \n" }, { "page_number": 776, "text": "Chapter | 42 Content Filtering\n743\navoided: Maximizing one minimizes the other, and vice \nversa. Precision and recall must be considered together. A \nsingle metric of adding the precision and recall together \nis a good overall indication of the accuracy and efficacy. 
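Reading "retrieved" as "blocked" and "relevant" as "should have been blocked," the two metrics and the combined score just described reduce to a few lines of arithmetic. The counts in the following Python sketch are invented purely for illustration:

    # Invented counts from a hypothetical filtering test run
    true_positives = 940    # pages blocked that should have been blocked
    false_positives = 25    # pages blocked that should have been allowed (overblocking)
    false_negatives = 60    # pages allowed that should have been blocked (underblocking)

    precision = true_positives / (true_positives + false_positives)  # 940/965, about 97.4%
    recall = true_positives / (true_positives + false_negatives)     # 940/1000, 94.0%
    combined = precision + recall        # single summary metric; a perfect system scores 200%

    print(f"precision {precision:.1%}, recall {recall:.1%}, combined {combined:.1%}")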
\n The categorization of Web sites is an information \nretrieval process whereby each URL or Web page can be \nconsidered a record. A correctly categorized URL is a rel-\nevant record retrieved, whereas an incorrectly categorized \nURL is an irrelevant record retrieved. The objective of \nWeb filtering is to block Web pages that are designated to \nbe blocked and allow Web pages that are permitted. Web \nfiltering precision is a measure of underblocking, or let-\nting pages through that should be blocked. Higher preci-\nsion leads to lower underblocking. \n Web filtering recall is a measure of overblocking. \nOverblocking results from false positives and means \nblocking pages that should not be blocked. High recall \nleads to fewer false positives and lower overblocking. A \nperfect Web filtering system would have 100% precision \nand 100% recall, or a score of 200% overall. \n A customer’s Internet access policies dictate the Web \nsites to block, and typically all Web sites that are poten-\ntially liable, objectionable, or controversial are blocked. \n 9. RELATED PRODUCTS \n Instant messaging, IRC, FTP, telnet, and especially email \nare all forms of communication that can be inspected with \ncontent-filtering technology. Also, more and more compa-\nnies are integrating DLP technologies to reduce the risk \nof confidential, HIPPA, PCI, and PII information from \nleaking out over Internet connections. DLP inspection and \nblocking enforce data leakage and encryption policies. \nExpect to see this functionality become standard as \ncontent-filtering products mature. \n On the other hand, Internet accountability software is a \ntype of computer software that provides detailed reports that \naccount for user behavior, surfing history, chat sessions, and \nactions on the Internet. Internet accountability software is \nused for various reasons, court-mandated sanctions, com-\npany policy obligations, and as a recovery step in porn addic-\ntion. Versions of accountability software monitor Internet use \non a personal computer,or Internet use by a specific user on a \ncomputer. These software applications then generate reports \nof Internet use, monitored by a third party, that account for \nand manage an individual’s Internet browsing. \n The first vendor to offer Internet accountability soft-\nware was Covenant Eyes. Available in March 2000, \nCovenant Eyes accountability software was developed \nto provide Internet users with a means of reporting their \nonline activity to one or more “ accountability partners. ” \nThe term “ accountability partner ” is a well-known concept \nin addiction-recovery circles and 12-step programs, such \nas Alcoholics Anonymous and Sexaholics Anonymous. \nAccountability partners have access to a user’s Internet \nbrowsing record, which eliminates the anonymity of Internet \nuse, thus providing incentive to not view Internet pornogra-\nphy or other explicit sexual images online. Today there are \nseveral accountability software providers: Covenant Eyes, \nPromise Keepers, K9 Web Protection, and X3watch. \n 10. CONCLUSION \n Content filtering is a fast-paced battle of new technolo-\ngies and the relentless trumping of these systems by sub-\nversion and evasion. Altruistic development efforts by \n FIGURE 42.10 Shows that surfing consumes the majority of the bandwidth of an Internet connection. 
\n" }, { "page_number": 777, "text": "PART | VII Advanced Security\n744\npassionate programmers on a mission to support citizens \nin countries that block access to content will win, then \nlose, and then win again in a never-ending cycle. Other \nchallenges include employees and kids who don’t under-\nstand all the risks and don’t think the abuse of a school- \nor company-provided computer and network is a big deal. \nAdd new technologies, Web 2.0 applications, YouTube, \nand streaming sites, and the challenges and arguments \nfor content filtering will not end anytime soon. \n As we have explored, content filtering and its three \nobjectives — accuracy, scalability, and maintainability —\n are at odds with each other. Accurate blocking makes it \nhard to scale and maintain, and easily scalable and main-\ntainable systems are not as accurate. Companies that \nmake content-filtering technology are attempting to make \nthese challenges easier to manage and maintain. \n Content filtering is controversial, and the law is fre-\nquently changing in the U.S. and internationally. IT poli-\ncies try to cope and are being updated every year to deal \nwith new legal issues. \n Content filtering is morphing and aggregating with \nother technologies to address multifaceted threats. In the \nfuture, the standalone content filter/proxy will be just \none important part of your security protection posture. \n Finally, in the high-stakes chess game of content fil-\ntering, the censors and policy enforcers are always per-\npetually destined to have the worst move in chess: the \nsecond to last one. \n" }, { "page_number": 778, "text": "745\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Data Loss Protection \n Ken Perkins \n Blazent Incorporated \n Stealing Trade Secrets from E. I. du Pont de Nemours \nand Company \n WILMINGTON, DE—Colm F. Connolly, United \nStates Attorney for the District of Delaware; William \nD. Chase, Special Agent in Charge of the Baltimore \nFederal Bureau of Investigation (FBI) Field Office; and \nDarryl W. Jackson, Assistant Secretary of Commerce \nfor Export Enforcement, announced today the unseal-\ning of a one-count Criminal Information charging \nGary Min, a.k.a. Yonggang Min, with stealing trade \nsecrets from E. I. du Pont de Nemours and Company \n( “ DuPont ” ). Min pleaded guilty to the charge on \nNovember 13, 2006. The offense carries a maximum \nprison sentence of 10 years, a fine of up to $250,000, \nand restitution. \n Pursuant to the terms of the plea agreement, Min \nadmitted that he misappropriated DuPont’s proprie-\ntary trade secrets without the company’s consent and \nagreed to cooperate with the government. \n According to facts recited by the government and \nacknowledged by Min at Min’s guilty plea hearing, \nMin began working for DuPont as a research chem-\nist in November 1995. Throughout his tenure at \nDuPont, Min’s research focused generally on polyim-\nides, a category of heat and chemical resistant poly-\nmers, and more specifically on high-performance \nfilms. Beginning in July 2005, Min began discus-\nsions with Victrex PLC about possible employment \nopportunities in Asia. Victrex manufactures PEEK, ™ \na polymer compound that is a functional competitor \nwith two DuPont products, Vespel® and Kapton®. \nOn October 18, 2005, Min signed an employment \nagreement with Victrex, with his employment set to \nbegin in January 2006. 
Min did not tell DuPont that \nhe had accepted a job with Victrex, however, until \nDecember 12, 2005. \n Between August 2005 and December 12, 2005, \nMin accessed an unusually high volume of abstracts \nand full-text .pdf documents off of DuPont’s Electronic \nData Library ( “ EDL ” ). The EDL server, which is located \nat DuPont’s experimental station in Wilmington, is \none of DuPont’s primary databases for storing confi-\ndential and proprietary information. Min downloaded \napproximately 22,000 abstracts from the EDL and \n Chapter 43 \n IT professionals are tasked with the some of the most \ncomplex and daunting tasks in any organization. Some of \nthe roles and responsibilities are paramount to the com-\npany’s livelihood and profitability and maybe even the \nultimate survival of the organization. Some of the most \nchallenging issues facing IT professionals today are \nsecuring communications and complying with the vast \nnumber of data privacy regulations. Secure communica-\ntions must protect the organization against spam, viruses, \nand worms; securing outbound traffic; guaranteeing the \navailability and continuity of the core business systems \n(such as corporate email, Internet connectivity, and phone \nsystems), all while facing an increasing workload with \nthe same workforce. In addition, many organizations face \nchallenges in meeting compliance goals, contingency \nplans for disasters, detecting and/or preventing data mis-\nappropriation, and dealing with hacking, both internally \nand externally. \n Almost every week, IT professionals can open the \nnewspaper or browse online news sites and read stories \nthat would keep most people up at night (see sidebar, \n “ Stealing Trade Secrets From E. I. du Pont de Nemours \nand Company ” ). The dollar amounts lost are stagger-\ning and growing each year (see sidebar, “ Stored Secure \nInformation Intrusions ” ). Pressures of compliance regu-\nlations, brand protection, and corporate intellectual prop-\nerty are all driving organizations to evaluate and/or adopt \ndata loss protection (DLP) solutions.\n" }, { "page_number": 779, "text": "PART | VII Advanced Security\n746\naccessed approximately 16,706 documents — fifteen \ntimes the number of abstracts and reports accessed by \nthe next highest user of the EDL for that period. The \nvast majority of Min’s EDL searches were unrelated \nto his research responsibilities and his work on high-\nperformance films. Rather, Min’s EDL searches cov-\nered most of DuPont’s major technologies and prod-\nuct lines, as well as new and emerging technologies \nin the research and development stage. The fair \nmarket value of the technology accessed by Min \nexceeded $400 million. \n After Min gave DuPont notice that he was resign-\ning to take a position at Victrex, DuPont uncovered \nMin’s unusually-high EDL usage. DuPont imme-\ndiately contacted the FBI in Wilmington, which \nlaunched a joint investigation with the United States \nAttorney’s Office and the United States Department \nof Commerce. Min began working at Victrex on \nJanuary 1, 2006. On or about February 2, 2006, Min \nuploaded approximately 180 DuPont documents —\n including documents containing confidential, trade \nsecret information — to his Victrex-assigned laptop \ncomputer. On February 3, 2006, DuPont officials told \nVictrex officials in London about Min’s EDL activities \nand explained that Min had accessed confidential \nand proprietary action. 
Victrex officials seized Min’s \nlaptop computer from him on February 8, 2006, and \nsubsequently turned it over to the FBI. ” 1 \n Stored Secure Information Intrusions \n Retailer TJX suffered an unauthorized intrusion or intrusions \ninto portions of its computer system that process and store \ninformation related to credit and debit card, check and \nunreceipted merchandise return transactions (the intrusion \nor intrusions, collectively, the “ Computer Intrusion ” ), which \nwas discovered during the fourth quarter of fiscal 2007. The \ntheft of customer data primarily related to portions of the \ntransactions at its stores (other than Bob’s Stores) during \nthe periods 2003 through June 2004 and mid-May 2006 \nthrough mid-December 2006. \n During the first six months of fiscal 2007 TJX incurred \npretax costs of $38 million for costs related to the \nComputer Intrusion. In addition, in the second quarter \nended July 28, 2007, TJX established a pretax reserve for \nits estimated exposure to potential losses related to the \nComputer Intrusion and recorded a pretax charge of $178 \nmillion. As of January 26, 2008, TJX reduced the reserve \nby $19 million, primarily due to insurance proceeds with \nrespect to the Computer Intrusion, which had not pre-\nviously been reflected in the reserve, as well as a reduc-\ntion in estimated legal and other fees as the Company has \ncontinued to resolve outstanding disputes, litigation, and \ninvestigations. This reserve reflects the Company’s current \nestimation of probable losses in accordance with gener-\nally accepted accounting principles with respect to the \nComputer Intrusion and includes a current estimation of \ntotal potential cash liabilities from pending litigation, pro-\nceedings, investigations and other claims, as well as legal \nand other costs and expenses, arising from the Computer \nIntrusion. This reduction in the reserve results in a credit to \nthe Provision for Computer Intrusion related costs of $19 \nmillion in the fiscal 2007 fourth quarter and a pretax charge \nof $197 million for the fiscal year ended January 26, 2008. \n The Provision for Computer Intrusion related costs \nincreased fiscal 2008 fourth quarter net income by $11 \nmillion, or $0.02 per share, and reduced net income from \ncontinuing operations for the full fiscal 2008 year by $119 \nmillion, or $0.25 per share. 2 \n Note: In the June 2007 General Accounting Office article, \n “ GAO-07-737 Personal Information: Data Breaches Are \nFrequent, But Evidence of Resulting Identity Theft Is Limited; \nHowever, the Full Extent Is Unknown, ” 31 companies that \nresponded to a 2006 survey said they incurred an average of \n$1.4 million per data breach. 3 \n The list of concerning stories of companies and \norganizations affected by data breaches grows every \nyear. The penalties are not limited to financial losses \nbut sometimes hurt people personally through invasion \nof privacy. The organizations harmed are not limited \nto Wall Street and have implications of influencing the \nnational security of countries worldwide. The pressures \nacross entire organizations are growing to keep data in \nits place and keep it a secure manner. It is no wonder \nthat DLP solutions are included in most IT organiza-\ntions ’ initiatives for the next few years. \n So, with this in mind, this chapter could be consid-\nered an introduction and a primer to the concepts of \nDLP. 
The terms, acronyms, and concepts discussed here \nwill give the reader a baseline understanding of how to \ninvestigate and evaluate DLP applications in the mar-\nket today. However, this chapter should not be consid-\nered the authoritative single source of information on the \ntopic. \n 1 “ Guilty plea in trade secrets case, ” Department of Justice Press \nRelease, February 15, 2007. \n 2 “ SEC EDGAR fi ling information form 8-K, ” TJX Companies, Inc., \nFebruary 20, 2008. \n 3 “ GAO-07-737 personal information: Data breaches are frequent, but \nevidence of resulting identity theft is limited; however, the full extent \nis unknown, ” General Accounting Offi ce, June 2007. \n" }, { "page_number": 780, "text": "Chapter | 43 Data Loss Protection\n747\n 1. PRECURSORS OF DLP \n Even before the Internet and all the wonderful benefits \nit brings to the world, organizations ’ data were exposed \nto the outside world. Modems, telex, and fax machines \nwere some of the first enablers of electronic communica-\ntions. Electronic methods of communications, by default, \nincrease the speed and ease of communication, but they \nalso create inherent security risks. Once IT organiza-\ntions noticed they were at risk, they immediately started \nfocusing on creating impenetrable moats to surround \nthe “ IT castle. ” As communication protocols standard-\nized and with the mainstream adoption of the Internet, \nTransmission Control Protocol/Internet Protocol (TCP/\nIP) became the generally accepted default language of \nthe Internet. This phenomenon brought to light external-\nfacing security technologies and consequently their quick \nadoption. Some common technologies that protect TCP/\nIP networks from external threats are: \n ● Firewalls. Inspect network traffic passing through it, \nand denies or permits passage based on a set of rules. \n ● Intrusion detection systems (IDSs). Sensors log \npotential suspicious activity and allow for the \nremediation of the issue. \n ● Intrusion prevention systems (IPSs). React to \nsuspicious activity by automatically performing a reset \nto the connection or by adjusting the firewall to block \nnetwork traffic from the suspected malicious source. \n ● Antivirus protection. Attempts to identify, neutralize, \nor eliminate malicious software. \n ● Antispam technology. Attempts to let in “ good ” \nemails and keep out “ bad ” emails. \n The common thread in these technologies: Keep the \n “ bad guys ” out while letting normal, efficient business \nprocesses occur. These technologies initially offered some \nvery high-level, nongranular features such as blocking a \nTCP/IP port, allowing communications to and from a cer-\ntain range of IP addresses, identifying keywords (without \ncontext or much flexibility), signatures of viruses, and \nblocking spam that used common techniques used by \nspammers. \n Once IT organizations had a good handle on external-\nfacing services, the next logical thought comes to mind: \nWhat happens if the “ bad guy, ” undertrained or undere-\nducated users, already have access to the information \ncontained in an organization? In some circles of IT, this \nanimal is simply known as an employee. Employees, by \ntheir default, “ inside ” nature, have permission to access \nthe company’s most sensitive information to accomplish \ntheir jobs. Even though the behavior of nonmalicious \nemployees might cause as much damage as an inten-\ntional act, the disgruntled employee or insider is a unique \nthreat that needs to be addressed. 
\n The disgruntled insider, working from within an \norganization, is a principal source of computer crimes. \nInsiders may not need a great deal of knowledge about \ncomputer hacking because their knowledge of a victim’s \nsystem often allows them to gain unrestricted access \nto cause damage to the system or to steal system data. \nWith the advent of technology outsourcing, even non-\nemployees have the rights to view/create/delete some \nof the most sensitive data assets within an organization. \nThe insider threat could also include contractor person-\nnel and even vendors working onsite. To make matters \nworse, the ease of finding information to help with hack-\ning systems is no harder than typing a search string into \npopular search engines. The following is an example of \nhow easy it is for non- “ black hats ” to perform compli-\ncated hacks without much technical knowledge: \n 1. Open a browser that is connected to the Internet. \n 2. Go to any popular Internet search engine site. \n 3. Search for the string “ cracking WEP How to. ” \n Note: Observe the number of articles, most with step-\nby-step instructions, on how to find the Wired Equivalent \nPrivacy (WEP) encryption key to “ hijack ” a Wi-Fi access \npoint. \n So, what happens if an inside worker puts the organi-\nzation at risk through his activity on the network or cor-\nporate assets? The next wave of technologies that IT \norganizations started to address dealt with the “ inside \nman ” issue. Some examples of these types of technolo-\ngies include: \n ● Web filtering . Can allow/deny content to a user, espe-\ncially when it is used to restrict material delivered \nover the Web. \n ● Proxy servers. Services the requests of its clients by \nforwarding requests to other servers and may block \nentire functionality such as Internet messaging/chat, \nWeb email, and peer-to-peer file sharing programs. \n ● Audit systems (both manual and automated). \nTechnology that records every packet of data that \nenters/leave the organization’s network. Can be thought \nof as a network “ VCR. ” Automated appliances feature \npost-event investigative reports. Manual systems might \njust use open-source packet-capture technologies \nwriting to a disk for a record of network events. \n" }, { "page_number": 781, "text": "PART | VII Advanced Security\n748\n ● Computer forensic systems. Is a branch of forensic \nscience pertaining to legal evidence found in \ncomputers and digital storage media. Computer \nforensics adheres to standards of evidence admissible \nin a court of law. Computer forensics experts \ninvestigate data storage devices (such as hard drives, \nUSB drives, CD-ROMs, floppy disks, tape drives, \netc.), identifying, preserving, and then analyzing \nsources of documentary or other digital evidence. \n ● Data stores for email governance. \n ● IM- and chat-monitoring services . The adoption of IM \nacross corporate networks outside the control of IT \norganizations creates risks and liabilities for companies \nwho do not effectively manage and support IM use. \nCompanies implement specialized IM archiving and \nsecurity products and services to mitigate these risks \nand provide safe, secure, productive instant-messaging \ncapabilities to their employees. \n ● Document management systems. A computer sys-\ntem (or set of computer programs) used to track and \nstore electronic documents and/or images of paper \ndocuments. 
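To make the audit-system bullet in the list above more concrete, the following is a minimal sketch, not taken from any particular product, of the "manual network VCR" idea: capture outbound packets seen on a mirrored port and write them to disk for later, post-event investigation. It assumes the open-source Scapy library is installed, that the monitoring interface is named eth1, and that the script runs with the elevated privileges packet capture requires.

# Hypothetical sketch of a "manual" audit system (a network VCR):
# capture a batch of packets from a SPAN/mirror port and write them
# to a timestamped pcap file for later investigation.
from datetime import datetime

from scapy.all import sniff, wrpcap  # requires root/administrator privileges


def record_egress_traffic(interface: str = "eth1", packet_count: int = 10_000) -> str:
    """Capture a batch of packets and write them to a timestamped pcap file."""
    packets = sniff(iface=interface, count=packet_count)
    outfile = f"egress-{datetime.now():%Y%m%d-%H%M%S}.pcap"
    wrpcap(outfile, packets)
    return outfile


if __name__ == "__main__":
    # Each call records one "tape"; a real deployment would loop,
    # rotate files, and index them for investigators.
    print("Wrote", record_egress_traffic())

Commercial audit appliances layer indexing, retention policies, and post-event investigative reporting on top of this kind of raw capture.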
\n Each of these technologies are necessary security \nmeasures implemented in [or “ by ” ] IT organizations to \naddress point or niche areas of vulnerabilities in corpo-\nrate networks and computer assets. \n Even before DLP became a concept, IT organizations \nhave been practicing the tenets of DLP for years. Firewalls \nat the edge of corporate networks can block access to IP \naddresses, subnets, and Internet sites. One could say this is \nthe first attempt to keep data where it should reside, within \nthe organization. DLP should be looked at nothing more \nthan the natural progression of the IT security life cycle. \n 2. WHAT IS DLP? \n Data loss protection is a term that has percolated up from \nthe alphabet soup of computer security concepts in the \npast few years. Known in the past as information leak \ndetection and prevention (ILDP), used by IDC; informa-\ntion protection and control (IPC); information leak pre-\nvention (ILP), coined by Forrester; content monitoring \nand filtering (CMF), suggested by Gartner; or extrusion \nprevention system (EPS), the opposite of intrusion preven-\ntion system (IPS), the acronym DLP seems to have won \nout. No matter what acronym of the day is used, DLP is \nan automated system to identify anything that leaves the \norganization that could harm the organization. \n DLP applications try to move away from the point \nor niche application and give a more holistic approach \nto coverage, remediation and reporting of data issues. \nOne way of evaluating an organization’s level of risk is \nto look around in an unbiased fashion. The most benign \ncommunication technologies could be used against the \norganization and cause harm. \n Before embarking on a DLP project, understanding \nsome example types of harm and/or the corresponding \nregulations can help with the evaluation. The following \nsidebar, “ Current Data Privacy Legislation and Standards, ” \naddresses only a fraction of current data privacy legislation \nand standards but should give the reader a good under-\nstanding of the complexities involved in protecting data.\n Examples of Harm \n Scenario \n An administrative assistant confirms a hotel reservation for \nan upcoming conference by emailing a spreadsheet with \nemployee’s credit card numbers with expiration dates; \nsometimes if they want to make it really easy for the “ bad \nguys, ” an admin will include the credit card’s “ secret ” PIN, \nalso known as card verification number (CVN). \n Problem \n Possible violation of GLBA and puts the organization’s \nemployees at risk for identity theft and credit card fraud. \n Legislation \n Gramm-Leach-Bliley Act \n GLBA compliance is mandatory; whether a financial insti-\ntution discloses nonpublic information or not, there must \nbe a policy in place to protect the information from fore-\nseeable threats in security and data integrity. \n Major components put into place to govern the collec-\ntion, disclosure, and protection of consumers ’ nonpublic \npersonal information; or personally identifiable information: \n ● Financial Privacy Rule \n ● Safeguards Rule \n ● Pretexting Protection \n Financial Privacy Rule \n (Subtitle A: Disclosure of Nonpublic Personal Information, \ncodified at 15 U.S.C. § 6801 – 6809) \n The Financial Privacy Rule requires financial institu-\ntions to provide each consumer with a privacy notice at the \ntime the consumer relationship is established and annually \nthereafter. 
The privacy notice must explain the information \n Current Data Privacy Legislation and Standards \n" }, { "page_number": 782, "text": "Chapter | 43 Data Loss Protection\n749\nc ollected about the consumer, where that information is \nshared, how that information is used, and how that infor-\nmation is protected. The notice must also identify the con-\nsumer’s right to opt out of the information being shared \nwith unaffiliated parties per the Fair Credit Reporting Act. \nShould the privacy policy change at any point in time, the \nconsumer must be notified again for acceptance. Each time \nthe privacy notice is reestablished, the consumer has the \nright to opt-out again. The unaffiliated parties receiving the \nnonpublic information are held to the acceptance terms of \nthe consumer under the original relationship agreement. In \nsummary, the financial privacy rule provides for a privacy \npolicy agreement between the company and the consumer \npertaining to the protection of the consumer’s personal non-\npublic information. \n Safeguards Rule \n (Subtitle A: Disclosure of Nonpublic Personal Information, \ncodified at 15 U.S.C. § 6801 – 6809) \n The Safeguards Rule requires financial institutions to \ndevelop a written information security plan that describes \nhow the company is prepared for and plans to continue \nto protect clients ’ nonpublic personal information. (The \nSafeguards Rule also applies to information of those no \nlonger consumers of the financial institution.) This plan \nmust include: \n ● Denoting at least one employee to manage the \nsafeguards \n ● Constructing a thorough risk management on each \ndepartment handling the nonpublic information \n ● Developing, monitoring, and testing a program to secure \nthe information \n ● Changing the safeguards as needed with the changes in \nhow information is collected, stored, and used \n This rule is intended to do what most businesses should \nalready be doing: protect their clients . The Safeguards Rule \nforces financial institutions to take a closer look at how \nthey manage private data and to do a risk analysis on their \ncurrent processes. No process is perfect, so this has meant \nthat every financial institution has had to make some effort \nto comply with the GLBA. \n Pretexting Protection \n (Subtitle B: Fraudulent Access to Financial Information, \ncodified at 15 U.S.C. § 6821 – 6827) \n Pretexting (sometimes referred to as social engineering ) \noccurs when someone tries to gain access to personal non-\npublic information without proper authority to do so. This \nmay entail requesting private information while impersonat-\ning the account holder, by phone, by mail, by email, or even \nby phishing (i.e., using a phony Web site or email to col-\nlect data). The GLBA encourages the organizations covered \nby the GLBA to implement safeguards against pretexting. \nFor example, a well-written plan to meet GLBA’s Safeguards \nRule ( “ develop, monitor, and test a program to secure the \ninformation ” ) ought to include a section on training employ-\nees to recognize and deflect inquiries made under pretext. \nIn the United States, pretexting by individuals is punishable \nas a common law crime of False Pretenses. \n Scenario \n An HR employee, whose main job function is to process claims, \nforwards via email an employee’s Explanation of Benefits that \ncontains a variety of Protected Health Information. The email is \nsent in the clear, unencrypted, to the organization’s healthcare \nprovider. 
\n Problem \n Could violate the Health Insurance Portability and \nAccountability Act (HIPAA), depending on the type of \norganization. \n Legislation \n The Privacy Rule \n The Privacy Rule took effect on April 14, 2003, with a \none-year extension for certain “ small plans. ” It establishes \nregulations for the use and disclosure of Protected Health \nInformation (PHI). PHI is any information about health sta-\ntus, provision of health care, or payment for health care \nthat can be linked to an individual. This is interpreted rather \nbroadly and includes any part of a patient’s medical record \nor payment history. \n Covered entities must disclose PHI to the individual \nwithin 30 days upon request. They also must disclose PHI \nwhen required to do so by law, such as reporting suspected \nchild abuse to state child welfare agencies. \n A covered entity may disclose PHI to facilitate treatment, \npayment, or healthcare operations or if the covered entity \nhas obtained authorization from the individual. However, \nwhen a covered entity discloses any PHI, it must make a \nreasonable effort to disclose only the minimum necessary \ninformation required to achieve its purpose. \n The Privacy Rule gives individuals the right to request \nthat a covered entity correct any inaccurate PHI. It also \nrequires covered entities to take reasonable steps to ensure \nthe confidentiality of communications with individuals. For \nexample, an individual can ask to be called at his or her \nwork number, instead of home or cell phone number. \n The Privacy Rule requires covered entities to notify indi-\nviduals of uses of their PHI. Covered entities must also keep \ntrack of disclosures of PHI and document privacy poli-\ncies and procedures. They must appoint a Privacy Official \nand a contact person responsible for receiving complaints \nand train all members of their workforce in procedures \nregarding PHI. \n An individual who believes that the Privacy Rule is not \nbeing upheld can file a complaint with the Department of \nHealth and Human Services Office for Civil Rights (OCR). \n" }, { "page_number": 783, "text": "PART | VII Advanced Security\n750\n Scenario \n An employee opens an email whose subject is “ 25 Reasons \nWhy Beer is Better than Women. ” The employee finds this \njoke amusing and forwards the email to other coworkers \nusing the corporate email system. \n Problem \n Puts the organization in an exposed position for claims of \nsexual harassment and a hostile workplace environment. \n Legislation \n In the U.S., the Civil Rights Act of 1964 Title VII prohib-\nits employment discrimination based on race, sex, color, \nnational origin, or religion. The prohibition of sex discrimi-\nnation covers both females and males. This discrimination \noccurs when the sex of the worker is made a condition of \nemployment (i.e., all female waitpersons or male carpen-\nters) or where this is a job requirement that does not men-\ntion sex but ends up barring many more persons of one \nsex than the other from the job (such as height and weight \nlimits). \n In 1998, Chevron settled, out of court, a lawsuit brought \nby several female employees after the “ 25 Reasons ” \nemail was widely circulated throughout the organization. \nUltimately, Chevron settled out of court for $2.2 million. \n Scenario \n A retail store server electronically transmits daily point-of-\nsale (POS) transactions to the main corporate billing server. 
\nThe POS system records the time, date, register number, \nemployee number, part number, quantity, and if paid for \nby credit card, the card number. This transaction occurs \nnightly as part of a batch job and is transmitted over the \nstore’s Wi-Fi network. \n Problem \n PCI DSS stands for Payment Card Industry Data Security \nStandard. It was developed by the major credit card com-\npanies as a guideline to help organizations that process \ncard payments prevent credit-card fraud, cracking, and \nvarious other security vulnerabilities and threats. A com-\npany processing, storing, or transmitting payment card \ndata must be PCI DSS compliant or risk losing its ability \nto process credit card payments and being audited and/or \nfined. Merchants and payment card service providers must \nvalidate their compliance periodically. This validation gets \nconducted by auditors (that is persons who are the PCI DSS \nQualified Security Assessors, or QSAs). Although individu-\nals receive QSA status, reports on compliance can only be \nsigned off by an individual QSA on behalf of a PCI coun-\ncil-approved consultancy. Smaller companies, processing \n fewer than about 80,000 transactions a year, are allowed to \nperform a self-assessment questionnaire. Penalties are often \naccessed and fines of $25,000 per month are possible for \nlarge merchants for noncompliance. \n PCI DSS requires 12 requirements to be in compliance: \n Requirement 1: Install and maintain a firewall configura-\ntion to protect cardholder data \n Firewalls are computer devices that control computer \ntraffic allowed into and out of a company’s network, as \nwell as traffic into more sensitive areas within a company’s \ninternal network. A firewall examines all network traffic \nand blocks those transmissions that do not meet the speci-\nfied security criteria. \n Requirement 2: Do not use vendor-supplied defaults for \nsystem passwords and other security parameters \n Hackers (external and internal to a company) often use \nvendor default passwords and other vendor default settings \nto compromise systems. These passwords and settings are \nwell known in hacker communities and easily determined \nvia public information. \n Requirement 3: Protect stored cardholder data \n Encryption is a critical component of cardholder data \nprotection. If an intruder circumvents other network secu-\nrity controls and gains access to encrypted data, without \nthe proper cryptographic keys, the data is unreadable and \nunusable to that person. Other effective methods of protect-\ning stored data should be considered as potential risk miti-\ngation opportunities. For example, methods for minimizing \nrisk include not storing cardholder data unless absolutely \nnecessary, truncating cardholder data if full PAN is not \nneeded and not sending PAN in unencrypted emails. \n Requirement 4: Encrypt transmission of cardholder data \nacross open, public networks \n Sensitive information must be encrypted during transmis-\nsion over networks that are easy and common for a hacker \nto intercept, modify, and divert data while in transit. \n Requirement 5: Use and regularly update anti-virus soft-\nware or programs \n Many vulnerabilities and malicious viruses enter the \nnetwork via employees ’ email activities. Antivirus software \nmust be used on all systems commonly affected by viruses \nto protect systems from malicious software. 
\n Requirement 6: Develop and maintain secure systems \nand applications \n Unscrupulous individuals use security vulnerabilities to \ngain privileged access to systems. Many of these vulner-\nabilities are fixed by vendor-provided security patches. All \nsystems must have the most recently released, appropriate \nsoftware patches to protect against exploitation by employ-\nees, external hackers, and viruses. Note: Appropriate soft-\nware patches are those patches that have been evaluated \nand tested sufficiently to determine that the patches do not \nconflict with existing security configurations. For in-house \ndeveloped applications, numerous vulnerabilities can be \navoided by using standard system development processes \nand secure coding techniques. \n Requirement 7: Restrict access to cardholder data by \nbusiness need-to-know \n" }, { "page_number": 784, "text": "Chapter | 43 Data Loss Protection\n751\n 4 “ Portions of this production are provided courtesy of PCI Security Standards Council, LLC ( “ PCI SSC ” ) and/or its licensors. © 2007 PCI \nSecurity Standards Council, LLC. All rights reserved. Neither PCI SSC nor its licensors endorses this product, its provider or the methods, pro-\ncedures, statements, views, opinions or advice contained herein. All references to documents, materials or portions thereof provided by PCI SSC \n(the “ PCI Materials ” ) should be read as qualifi ed by the actual PCI Materials. For questions regarding the PCI Materials, please contact PCI SSC \nthrough its Web site at https://www.pcisecuritystandards.org . ” \n This requirement ensures critical data can only be \naccessed by authorized personnel. \n Requirement 8: Assign a unique ID to each person with \ncomputer access \n Assigning a unique identification (ID) to each person \nwith access ensures that actions taken on critical data and \nsystems are performed by, and can be traced to, known and \nauthorized users. \n Requirement 9: Restrict physical access to cardholder \ndata \n Any physical access to data or systems that house card-\nholder data provides the opportunity for individuals to \naccess devices or data and to remove systems or hardcop-\nies, and should be appropriately restricted. \n Requirement 10: Track and monitor all access to net-\nwork resources and cardholder data \n Logging mechanisms and the ability to track user activi-\nties are critical. The presence of logs in all environments \nallows thorough tracking and analysis if something does go \nwrong. Determining the cause of a compromise is very dif-\nficult without system activity logs. \n Requirement 11: Regularly test security systems and \nprocesses \n Vulnerabilities are being discovered continually by hack-\ners and researchers, and being introduced by new software. \nSystems, processes, and custom software should be tested \nfrequently to ensure security is maintained over time and \nwith any changes in software. \n Requirement 12: Maintain a policy that addresses infor-\nmation security for employees and contractors \n A strong security policy sets the security tone for the \nwhole company and informs employees what is expected \nof them. All employees should be aware of the sensitivity of \ndata and their responsibilities for protecting it. 4 \n Organizations are facing pressures to become Sarbanes-\nOxley compliant. 
\n SOX Section 404: Assessment of internal control \n The most contentious aspect of SOX is Section 404, \nwhich requires management and the external auditor to \nreport on the adequacy of the company’s internal con-\ntrol over financial reporting (ICFR). This is the most costly \naspect of the legislation for companies to implement, as \ndocumenting and testing important financial manual and \nautomated controls requires enormous effort. \n Under Section 404 of the Act, management is required to \nproduce an “ internal control report ” as part of each annual \nExchange Act report. The report must affirm “ the responsibil-\nity of management for establishing and maintaining an ade-\nquate internal control structure and procedures for financial \nreporting. ” The report must also “ contain an assessment, as \nof the end of the most recent fiscal year of the Company, \nof the effectiveness of the internal control structure and \nprocedures of the issuer for financial reporting. ” To do this, \nmanagers are generally adopting an internal control frame-\nwork such as that described in Committee of Sponsoring \nOrganization of the Treadway Commission (COSO). \n Both management and the external auditor are responsi-\nble for performing their assessment in the context of a top-\ndown risk assessment, which requires management to base \nboth the scope of its assessment and evidence gathered on \nrisk. Both the Public Company Accounting Oversight Board \n(PCAOB) and SEC recently issued guidance on this topic to \nhelp alleviate the significant costs of compliance and better \nfocus the assessment on the most critical risk areas. \n The recently released Auditing Standard No. 5 of the \nPCAOB, which superseded Auditing Standard No 2. has the \nfollowing key requirements for the external auditor: \n ● Assess both the design and operating effectiveness of \nselected internal controls related to significant accounts \nand relevant assertions, in the context of material mis-\nstatement risks \n ● Understand the flow of transactions, including IT \naspects, sufficiently to identify points at which a mis-\nstatement could arise \n ● Evaluate company-level (entity-level) controls, which \ncorrespond to the components of the COSO framework \n ● Perform a fraud risk assessment \n ● Evaluate controls designed to prevent or detect fraud, \nincluding management override of controls \n ● Evaluate controls over the period-end financial reporting \nprocess; \n ● Scale the assessment based on the size and complexity \nof the company \n ● Rely on management’s work based on factors such as \ncompetency, objectivity, and risk \n ● Evaluate controls over the safeguarding of assets \n ● Conclude on the adequacy of internal control over \nfinancial reporting \n The recently released SEC guidance is generally consist-\nent with the PCAOB’s guidance above, only intended for \nmanagement. \n" }, { "page_number": 785, "text": "PART | VII Advanced Security\n752\n 5 Wikipedia contributors, “ Sarbanes-Oxley Act, ” Wikipedia, Wednesday, 2008-05-14 14:31 UTC, http://en.wikipedia.org/wiki/Sarbannes_Oxley_Act . \n After the release of this guidance, the SEC required \nsmaller public companies to comply with SOX Section \n404, companies with year ends after December 15, 2007. \nSmaller public companies performing their first manage-\nment assessment under Sarbanes-Oxley Section 404 may \nfind their first year of compliance after December 15, 2007 \nparticularly challenging. 
To help unravel the maze of uncer-\ntainty, Lord & Benoit, a SOX compliance company, issued \n “ 10 Threats to Compliance for Smaller Companies ” ( www.\nsection404.org/pdf/sox_404_10_threats_to_compliance_\nfor_smaller_public_companies.pdf ), which gathered histori-\ncal evidence of material weaknesses from companies with \nrevenues under $100 million. The research was compiled \naggregating the results of 148 first-time companies with \nmaterial weaknesses and revenues under $100 million. \nThe following were the 10 leading material weaknesses in \nLord & Benoit’s study: accounting and disclosure controls, \ntreasury, competency and training of accounting personnel, \ncontrol environment, design of controls/lack of effective \ncompensating controls, revenue recognition, financial clos-\ning process, inadequate account reconciliations, informa-\ntion technology and consolidations, mergers, intercompany \naccounts. 5 \n Scenario \n A guidance counselor at a high school gets a request from a \nstudent’s prospective college. The college asked for the stu-\ndent’s transcripts. The guidance counselor sends the tran-\nscript over the schools email system unencrypted. \n Problem \n FERPA privacy concerns, depending on the age of the \nstudent. \n Legislation \n The Family Educational Rights and Privacy Act (FERPA) \n(20 U.S.C. § 1232g; 34 CFR Part 99) is a federal law that \nprotects the privacy of student education records. The law \napplies to all schools that receive funds under an applica-\nble program of the U.S. Department of Education. \n FERPA gives parents certain rights with respect to their \nchildren’s education records. These rights transfer to the \nstudent when he or she reaches the age of 18 or attends a \nschool beyond the high school level. Students to whom the \nrights have transferred are “ eligible students. ” \n Parents or eligible students have the right to inspect \nand review the student’s education records maintained \nby the school. Schools are not required to provide copies \nof records unless, for reasons such as great distance, it is \nimpossible for parents or eligible students to review the \nrecords. Schools may charge a fee for copies. \n Parents or eligible students have the right to request that \na school correct records that they believe to be inaccu-\nrate or misleading. If the school decides not to amend the \nrecord, the parent or eligible student then has the right to a \nformal hearing. After the hearing, if the school still decides \nnot to amend the record, the parent or eligible student has \nthe right to place a statement with the record setting forth \nhis or her view about the contested information. \n Generally, schools must have written permission from \nthe parent or eligible student in order to release any infor-\nmation from a student’s education record. 
However, FERPA \nallows schools to disclose those records, without consent, \nto the following parties or under the following conditions \n(34 CFR § 99.31): \n ● School officials with legitimate educational interest \n ● Other schools to which a student is transferring \n ● Specified officials for audit or evaluation purposes \n ● Appropriate parties in connection with financial aid to a \nstudent \n ● Organizations conducting certain studies for or on \nbehalf of the school \n ● Accrediting organizations \n ● To comply with a judicial order or lawfully issued \nsubpoena \n ● Appropriate officials in cases of health and safety \nemergencies \n ● State and local authorities, within a juvenile justice sys-\ntem, pursuant to specific State law \n Schools may disclose, without consent, “ directory ” \ninformation such as a student’s name, address, telephone \nnumber, date and place of birth, honors and awards, and \ndates of attendance. However, schools must tell parents \nand eligible students about directory information and allow \nparents and eligible students a reasonable amount of time \nto request that the school not disclose directory informa-\ntion about them. Schools must notify parents and eligible \nstudents annually of their rights under FERPA. The actual \nmeans of notification (special letter, inclusion in a PTA bul-\nletin, student handbook, or newspaper article) is left to the \ndiscretion of each school. \n Scenario \n Employee job hunting, posting resumes and trying to find \nanother job while working. See Figure 43.1 for an example \nof a DLP system capturing the full content of a user going \nthrough the resignation process. \n Problem \n Loss of productivity for that employee. \n Warning sign for a possible disgruntled employee. \n" }, { "page_number": 786, "text": "Chapter | 43 Data Loss Protection\n753\n 3. WHERE TO BEGIN? \n A reasonable place to begin talking about DLP is with \nthe department of the organization that handles corpo-\nrate policy and/or governance (see sidebar, “ An Example \nof an Acceptable Use Policy ” ). Monitoring employees is \nat best an interesting proposition. Corporate culture can \ndrive whether monitoring of any kind is even allowed. A \ngood litmus test would be the types of notice that appear \nin the employee handbook.\n Use of Email and Computer Systems \n All information created, accessed or stored using company \napplications, systems, or resources, including email, is the \nproperty of the company. Users do not have a right to pri-\nvacy regarding any activity conducted using the company’s \nsystem. The company can review, read, access, or otherwise \nmonitor email and all activities on the company system or \nany other system accessed by use of the company system. \nIn addition, the Company could be required to allow others \nto read email or other documents on the company’s system \nin the context of a lawsuit or other legal action. \n All users must abide by the rules of network etiquette, which \ninclude being polite and using the network and the Internet in \na safe and legal manner. The company or authorized company \nofficials will make a good faith judgment as to which mate-\nrials, files, information, software, communications, and other \ncontent and activity are permitted and prohibited based on the \nfollowing guidelines and under the particular circumstances. 
\n Among the uses that are considered unacceptable and \nconstitute a violation of this policy are the following: \n ● Using, transmitting, receiving, or seeking inappropriate, \noffensive, swearing, vulgar, profane, suggestive, obscene, \nabusive, harassing, belligerent, threatening, defamatory \n(harming another’s reputation by lies), or misleading lan-\nguage or materials; revealing personal information such \nas another’s home address, home telephone number, or \nSocial Security number; making ethnic, sexual-prefer-\nence, age or gender-related slurs or jokes. \n ● Users may never harass, intimidate, threaten others, or \nengage in other illegal activity (including pornography, \nterrorism, espionage, theft, or drugs) by email or other \nposting. All such instances should be reported to man-\nagement for appropriate action. In addition to violating \nthis policy, such behavior may also violate other com-\npany policies or civil or criminal laws. \n An Example of an Acceptable Use Policy \n FIGURE 43.1 Webmail event: Content rendering of a resignation event. 6 \n 6 Figures 43.1, 43.2 (a-c), and 43.3 (a-b), inclusive of the Vericept trademark and logo, are provided by Vericept Corporation solely for use as screen-\nshots herein and may not be reproduced or used in any other way without the prior written permission of Vericept Corporation. All rights reserved. \n" }, { "page_number": 787, "text": "PART | VII Advanced Security\n754\n Some organizations are more apt to take advantage of \nthe laws and rights that companies have to defend them-\nselves. Simply asking around and performing informal \ninterviews with Human Resources, Security, and Legal \ncan save days and weeks of time down the line. \n In summary, implementing a DLP application with-\nout the proper Human Resources, Security, and Legal \npolicies could be a waste of time because IT profession-\nals will catch employees violating security standard. The \nevents in a DLP system must be actionable and have \n “ teeth ” for changes to take place. \n 4. DATA IS LIKE WATER \n As most anyone who has had a water leak in dwelling \nknows, water will find a way out of where it is supposed \nto go. Pipes are meant to direct the proper flow of water \nboth in and out. If a leak happens, the occupant will \neventually find a damp spot, a watermark, or a real drip. \nIt might take minutes or days to notice the leak and might \ntake just as long to find the source of the leak. \n Much like the water analogy, employees are given \ndata “ pipes ” to do their jobs with enabling technology \nprovided by the IT organization. Instead of water flow-\ning through, data can ingress/egress the organization in \nmultiple methods. \n Corporate email is a powerful efficient time saving \ntool that speeds communication. A user can attach a 10 \nmegabyte file, personal pictures, a recipe for chili and next \nquarter’s marketing plan or an acquisition target. Chat and \n Accessing a Company’s Information System \n You are accessing a Company’s information system (IS) \nthat is provided for Company-authorized use only. By \nusing this IS, you consent to the following conditions: \n ● The Company routinely monitors communications \noccurring on this IS, and any device attached to this \nIS, for purposes including, but not limited to, pen-\netration testing, monitoring, network defense, quality \ncontrol, and employee misconduct, law enforcement, \nand counterintelligence investigations. 
\n ● At any time the Company may inspect and/or seize data \nstored on this IS and any device attached to this IS. \n ● Communications occurring on or data stored on this \nIS, or any device attached to this IS, are not private. \nThey are subject to routine monitoring and search. \n ● Any communications occurring on or data stored on \nthis IS, or any device attached to this IS, may be dis-\nclosed or used for any Company-authorized purpose. \n ● Security protections may be utilized on this IS to pro-\ntect certain interests that are important to the Company. \nFor example, password, access cards, encryption or \nbiometric access controls provide security for the ben-\nefit of the Company. These protections are not pro-\nvided for your benefit or privacy and may be modified \nor eliminated at the Company’s discretion. \n ● Among the uses that are considered unacceptable and \nconstitute a violation of this policy are downloading \nor transmitting copyrighted materials without permis-\nsion from the owner of the copyright on those materials. \nEven if materials on the network or the Internet are not \nmarked with the copyright symbol, you should assume \nthat they are protected under copyright laws unless \nthere is explicit permission from the copyright holder on \nthe materials to use them. \n ● Users must not use email or other communications \nmethods, including but not limited to news group post-\ning, blogs, forums, instant messaging, and chat servers, \nto send company proprietary or confidential informa-\ntion to any unauthorized party. Such information may \nbe disclosed to authorized persons in encrypted files if \nsent over publicly accessible media such as the Internet \nor other broadcast media such as wireless communica-\ntion. Such information may be sent in unencrypted files \nonly within the company system. Users are responsible \nfor properly labeling such information. \n Certain specific policies extend the Company’s acceptable use \npolicy by placing further restrictions on that activity. Examples \ninclude, but are not limited to: software usage, network usage, \nshell policy, remote access policy, wireless policy, and the \nmobile email access policy. These and any additional policies \nare available from the IT Web site on the intranet. \n Your use of the network and the Internet is a privilege, \nnot a right. If you violate this policy, at a minimum you \nwill be subject to having your access to the network and \nthe Internet terminated. You breach this policy not only \nby affirmatively violating the above provisions but also by \nfailing to report any violations of this policy by other users \nwhich come to your attention. Further, you violate this pol-\nicy if you permit another to use your account or password \nto access the network or the Internet, including but not \nlimited to someone whose access has been denied or ter-\nminated. Sharing your account with anyone is a violation \nof this policy. It is your responsibility to keep your account \nsecure by choosing a sufficiently complex password and \nchanging it on a regular basis. \n Another good indicator that the organization would \nbe a good fit for a DLP application is the sign-on screen \nthat appears before or after a computer user logs on to \nher workstation (see sidebar, “ Accessing a Company’s \nInformation System ” ).\n" }, { "page_number": 788, "text": "Chapter | 43 Data Loss Protection\n755\nIM is the quickest growing form of electronic communi-\ncation and a great enabler of efficient workflow. 
Files can \nbe sent as well over these protocols or “ pipes. ” Web mail \nis usually the “ weapon of choice ” by users who like to \nconduct personal business at work. Web mail allows users \nto attach files of any type. \n Thus, the IT network “ plumbing ” needs to be moni-\ntored, maintained, and evaluated on an ongoing basis. \nThe U.S. government has published a complete and well-\nrounded standard that organizations can use as a good \nfirst step to compare where they are strong and where \nthey can use improvement. \n The U.S. Government Federal Information Security \nManagement Act of 2002 (FISMA) offers reasonable \nguidelines that most organizations could benefit by \nadopting. Even though FISMA is mandated for govern-\nment agencies and contractors, it can be applied to the \ncorporate world as well. \n FISMA sets forth a comprehensive framework for \nensuring the effectiveness of security controls over \ninformation resources that support federal operations \nand assets. FISMA’s framework creates a cycle of risk \nmanagement activities necessary for an effective security \nprogram, and these activities are similar to the principles \nnoted in our study of the risk management activities of \nleading private sector organizations — assessing risk, \nestablishing a central management focal point, imple-\nmenting appropriate policies and procedures, promoting \nawareness, and monitoring and evaluating policy and \ncontrol effectiveness. More specifically, FISMA requires \nthe head of each agency to provide information security \nprotections commensurate with the risk and magnitude \nof harm resulting from the unauthorized access, use, dis-\nclosure, disruption, modification, or destruction of infor-\nmation and information systems used or operated by the \nagency or on behalf of the agency. In this regard, FISMA \nrequires that agencies implement information security \nprograms that, among other things, include: \n ● Periodic assessments of the risk \n ● Risk-based policies and procedures \n ● Subordinate plans for providing adequate \ninformation security for networks, facilities, and \nsystems or groups of information systems, as \nappropriate \n ● Security awareness training for agency personnel, \nincluding contractors and other users of information \nsystems that support the operations and assets of the \nagency \n ● Periodic testing and evaluation of the effectiveness \nof information security policies, procedures, and \npractices, performed with a frequency depending on \nrisk, but no less than annually \n ● A process for planning, implementing, evaluating, \nand documenting remedial action to address any \ndeficiencies \n ● Procedures for detecting, reporting, and responding \nto security incidents \n ● Plans and procedures to ensure continuity of \noperations \n In addition, agencies must develop and maintain an \ninventory of major information systems that is updated \nat least annually and report annually to the Director of \nOMB and several Congressional Committees on the \nadequacy and effectiveness of their information security \npolicies, procedures, and practices and compliance with \nthe requirements of the act. An internal risk assessment \nof what types of “ communication, ” both manual and \nelectronic, that are allowed within the organization can \ngive the DLP evaluator a baseline of the type of trans-\nmission that are probably taking place. 
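As a small illustration of the internal risk assessment just described, the sketch below turns an inventory of permitted communication channels into a simple gap report showing which channels carry data out of the organization with no monitoring control behind them. The channel names and flags are purely hypothetical examples, not a prescribed checklist.

# Illustrative sketch of a DLP evaluator's communication-channel inventory.
# "allowed" and "monitored" values below are invented for the example.
channels = {
    "corporate email (SMTP)": {"allowed": True,  "monitored": True},
    "webmail (HTTPS)":        {"allowed": True,  "monitored": False},
    "IM/chat":                {"allowed": True,  "monitored": False},
    "FTP":                    {"allowed": False, "monitored": False},
    "USB/jump drives":        {"allowed": True,  "monitored": False},
    "fax":                    {"allowed": True,  "monitored": False},
}

# The baseline the text describes: permitted channels that move data out
# of the organization without any monitoring control behind them.
gaps = [name for name, c in channels.items() if c["allowed"] and not c["monitored"]]

print("Permitted but unmonitored egress channels:")
for name in gaps:
    print(" -", name)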
Some types of communications that should be evaluated are not always obvious but could be just as damaging as electronic methods. The following list encompasses some of those obvious and not so obvious methods:

● Pencil and paper
● Photocopier
● Fax
● Voicemail
● Digital camera
● Jump drive
● MP3/iPod
● DVD/CD-ROM/3½ in. floppy
● Magnetic tape
● SATA drives
● IM/chat
● FTP/FTPS
● SMTP/POP3/IMAP
● HTTP post/response
● HTTPS
● Telnet
● SCP
● P2P
● Rogue ports
● GoToMyPC
● Web conferencing systems

5. YOU DON'T KNOW WHAT YOU DON'T KNOW

Embarking on a DLP evaluation or implementation can be a straightforward exercise. The IT professional usually has a mandate in mind and a few problems that the DLP application will address. Invariably, many other issues will arise, as DLP applications do a very good job of finding most potential security and privacy issues.

Reports that say that something hasn't happened are always interesting to me, because as we know, there are 'known knowns'; there are things we know we know. We also know there are 'known unknowns'; that is to say we know there are some things we do not know. But there are also 'unknown unknowns' — the ones we don't know we don't know.
— Donald Rumsfeld, U.S. Department of Defense, February 12, 2002

Once the corporate culture has established that DLP is worth investigating or worth implementing, the next logical step would be performing a risk/exposure assessment. Several DLP vendors offer free pilots or proofs of concept, and these should be leveraged to jumpstart the data risk assessment for a very low monetary cost.

A risk/exposure assessment usually involves placing a server on the edge of the corporate network and sampling/recording the network traffic that is egressing the organization. In addition, the assessment might involve looking for high-risk files at rest and examining what is happening in the workstation environment. Most if not all DLP applications have predefined risk categories that cover a wide range of risk profiles. Some examples are:

● Regulations: GLBA, HIPAA, PCI-DSS, SOX, FERPA, PHI
● Acceptable use: Violence, gangs, profanity, adult themes, weapons, harassment, racism, pornography
● Productivity: Streaming media, resignation, shopping, Webmail
● Insider hacker activity: Root activity, nmap, stack-smashing code, keyloggers

Deciding what risk categories are most important to your organization can streamline the DLP evaluation. If data categories are turned on but are not likely to impact what is truly important to the organization, the test/pilot results will contain a lot of "noise." Focus on the "low-hanging fruit." For example, if the organization's lifeblood is customer data, focus on the categories that address those types of leaks.

Precision versus Recall

Before the precision versus recall discussion can take place, definitions are necessary:

● False positive. A false positive occurs when the DLP application's monitoring or blocking techniques wrongly classify a legitimate transmission or event as interesting (that is, as a potential violation) and, as a result, the event must be remediated anyway. Remediating an event is a time-consuming process that could involve one or many administrators dispositioning the event. A high number of false positives is normal during an initial implementation, but the number should fall after the DLP application is tuned.
● False negative. A false negative occurs when a transmission is not detected as interesting. The problem with false negatives is that the DLP administrator usually does not know these transmissions are happening in the first place. An analogy would be a bank employee who embezzles thousands of dollars and the bank does not notice the theft until it is too late.
● True positive. Condition present, and the DLP application records the event for remediation.
● True negative. Condition not present, and the DLP application does not record it.

DLP application testing and tuning can involve a trade-off:

● The acceptable level of false positives (in which a nonmatch is declared to be a match).
● The acceptable level of false negatives (in which an actual match is not detected).

An evaluator can think of this process as a slider bar, with false negatives on the left side and false positives on the right. A properly tuned DLP application minimizes false positives and diminishes the chances of false negatives.

This iterative process of tuning is called thresholding. Creating the proper threshold eventually leads to an acceptable minimum of false positives with no or minimal false negatives.

An easy way to achieve thresholding is to make the test more restrictive or less restrictive. The more restrictive the test is, the higher the risk of rejecting true positives; the less restrictive the test is, the higher the risk of accepting false positives.
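To see why thresholding is a balancing act, consider the following toy example. The event scores and ground-truth labels are invented for illustration and are not output from any DLP product; the point is simply that as the event-score threshold rises, false positives fall while false negatives rise.

# A toy illustration of the thresholding trade-off: each captured event has
# a score, and the threshold decides what gets flagged. "truth" marks which
# events really are policy violations (invented data for the example).
events = [
    {"score": 4,  "truth": False},
    {"score": 9,  "truth": False},
    {"score": 12, "truth": True},
    {"score": 18, "truth": False},
    {"score": 23, "truth": True},
    {"score": 31, "truth": True},
]

def confusion(threshold: int) -> tuple[int, int]:
    """Return (false_positives, false_negatives) at a given threshold."""
    fp = sum(1 for e in events if e["score"] >= threshold and not e["truth"])
    fn = sum(1 for e in events if e["score"] < threshold and e["truth"])
    return fp, fn

for threshold in (5, 15, 25):
    fp, fn = confusion(threshold)
    print(f"threshold={threshold:2d}  false positives={fp}  false negatives={fn}")
# A low threshold flags nearly everything (more false positives); a high
# threshold quietly misses real violations (more false negatives).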
6. HOW DO DLP APPLICATIONS WORK?

The way that most DLP applications capture interesting events is through different kinds of analysis engines. Most support simple keyword matching. For example, any time the phrase "project phoenix" appears in a data transmission, the network event is stored for later review. Keywords can be grouped and joined.

Regular expression (RegEx) support is featured in most of today's DLP applications. Regular expressions provide a concise and flexible means for identifying strings of text of interest, such as particular characters, words, or patterns of characters. Regular expressions are written in a formal language that can be interpreted by a regular expression processor, a program that either serves as a parser generator or examines text and identifies parts that match the provided specification. A real-world example would be the expression:

(r|b)?ed

Any transmission that contained the word red, bed, or even ed would be captured for later investigation. Regular expressions can also do pattern matching on credit card numbers and U.S. Social Security numbers:

^\d{3}-?\d{2}-?\d{4}

which can be read: any three digits, followed by an optional dash, followed by any two digits, followed by an optional dash, and then followed by any four digits. Regular expressions offer a certain level of efficiency but cannot address all DLP concerns. Weighting of keywords and/or RegExes can help. A real-world example might be: the word red is worth three points and an SSN is worth five points, but for an event to trigger, the transmission must contain 22 points. In this example, four SSNs and the word red would trigger an event (4 times 5, plus 3, equals 23, which meets the event score rule). Scoring can help address the thresholding issue.
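The following is a minimal sketch of this kind of weighted keyword/RegEx scoring, using the illustrative values above (the word red worth three points, each SSN-like match worth five, and a 22-point trigger). The patterns and sample text are examples only, not any vendor's detection rules.

# Minimal weighted keyword/RegEx scoring sketch using the values above.
import re

SSN_PATTERN = re.compile(r"\d{3}-?\d{2}-?\d{4}")       # SSN-like pattern, 5 points each
KEYWORD = re.compile(r"\bred\b", re.IGNORECASE)         # keyword "red", 3 points each

def score_transmission(text: str, threshold: int = 22) -> bool:
    """Return True if the weighted score of the text meets the threshold."""
    score = 5 * len(SSN_PATTERN.findall(text)) + 3 * len(KEYWORD.findall(text))
    return score >= threshold

sample = ("Shipping list (red folder): 123-45-6789, 987-65-4321, "
          "111-22-3333, 222-33-4444")
print(score_transmission(sample))  # four SSNs (20) + "red" (3) = 23 -> True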
In this example, four \nSSNs and the word red would trigger an event (4 times \n5 plus 3 equals 23, which would trigger the event score \nrule). Scoring can help address the thresholding issue. \nTo address some of the limitation of simple keyword and \nRegEx’s, DLP applications can also look for data “ sig-\nnatures ” or hashes of data. Hashing creates a mathemati-\ncal representation of the sensitive data and looks for that \ns ignature. Sensitive data or representative types of data \ncan be bulk loaded from databases and example files. \n 7. EAT YOUR VEGETABLES \n DLP is like the layers of an onion. Once the first layer of \nprotection is implemented, the next layer should/could \nbe addressed. There are many different forms of DLP \napplications, depending on the velocity and location of \nthe sensitive data. \n Data in Motion \n Data in motion is an easy place to start implementing \na DLP application because most can function in “ pas-\nsive ” mode, meaning it looks at only a copy of the actual \ndata egressing/ingressing the network. One way to look \nat data-in-motion monitoring is like a very intelligent \nVCR. Instead of recording every packet of information \nthat passes in and out of an organization, DLP applica-\ntions only capture, flag, and record the transmissions \nthat fall within the categories/policies that are turned on \n(see sidebar, “ Case Study: DLP Applications ” ). There \nare two main types of data-in-motion analysis: \n ● Passive monitoring. Using a Switched Port Analyzer \n(SPAN) on a router, port mirror on a switch or a net-\nwork tap(s) that feeds the outbound network traffic \nto the DLP application for analysis. \n ● Active (inline) enforcement. Using an active \negress port or through a proxy server, some DLP \napplications can stop the transmission from \nhappening. The port itself can be reset or the proxy \nserver can show a failure of transmission. The event \nthat keyed off the reset or failure is still recorded. \n Background \n A Fortune 500 Company has tens of thousands of employ-\nees with access to the Internet through an authenticated \nmethod. The Company has recently retired a version of \nlaptops with the associated docking station, monitors \nand mice. New laptops were purchased and given to the \nemployees. The old assets were retired to a storage closet. \nOne manager noticed some docking stations had gone \nmissing. That in and of itself was not concerning as this \ncompany had a liberal policy of donating old compu-\nter assets to charity. After looking in the company’s Asset \nManagement System and talking to the organization’s char-\nity manager, the manager found this was not the case. An \ninvestigation was launched both electronically and through \ntraditional investigative means. \n Action \n The organization had a DLP application in use with data-\nin-motion implemented. This particular DLP application \nhad a strong acceptable use set of categories/policies. One \nof them was “ Shopping, ” which covered both traditional \nshopping outlets but also popular online auction sites. The \nDLP investigator selected the report that returns all transmis-\nsions that violated the “ Shopping ” category and contained \nthe keyword of the model number of the docking station. 
\nWithin seconds, a user from within their network was found \n Case Study: DLP Applications \n" }, { "page_number": 791, "text": "PART | VII Advanced Security\n758\n Case Study: Data-at-Rest Files \n Background \n A Fortune 500 Company has multiple customer service \ncenters located throughout the United States. Each customer \nserver representative has a personal computer with a hard \ndrive and Internet access. The representative’s job entails \ntaking inbound phone calls to help their customers with \naccount management including auto-pay features. Auto-\npay setup information could include taking a credit-card \nnumber and expiration date and/or setting up an electronic \nfund transfer payment which includes an ABA routing \nnumber and account number. This sensitive information is \nsupposed to be entered directly into the corporate enter-\nprise resource planning (ERP) system application. Invariably \ncustomer service representatives run into issues during this \nprocess (connectivity to the ERP system is interrupted, \npower goes down, computer needs to be rebooted, etc.) \nand sensitive data finds its way into unapproved places on \nthe personal computer — a note text file, a word-processing \ndocument, an electronic spreadsheet, or in an email system. \nEven though employees went through training for their job \nthat included handling of sensitive data, the management \nsuspected that data was finding a way out of the ERP sys-\ntem. Another issue that the management faced with a very \nhigh turnover ratio and that employee training was falling \nbehind. \n Action \n A DLP data-at-rest pilot was performed and over one thou-\nsand files that contained credit-card numbers and other \ncustomer personal identifiable information were found. \n Result \n The Fortune 500 Company was able to cleanse the hard \ndrives of files that contained sensitive data by using the \nlegend the DLP application provided. More important, the \nsystemic cause of the problem had to be readdressed with \ntraining and tightening down the security of the representa-\ntive, personal computers. \n Data at Rest \n Static computer files on drives, removable media or \neven tape can grow to the millions in large multinational \norganizations. Unless tight controls are implemented, \ndata can spawn out of control. Even though email trans-\nmissions account for more than 80% of DLP violations, \ndata-at-rest files that are resting where they are not sup-\nposed to be can be a major concern (see sidebar, “ Case \nStudy: Data-at-Rest Files ” ). \n Data-at-rest risk can occur in other places besides the \npersonal computer’s file system. One of the benefits of \nnetworked computer systems is the ability to share files. \nFile shares can also pose a risk because the original \nowner of the file now has no idea what happened to the \nfile after they share it. \n The same can be said of many Web-based collabo-\nration and document management platforms that are \navailable in the market today. Collaboration tools can be \nused to host Web sites that can be used to access shared \nworkspaces and documents, as well as specialized appli-\ncations such as wikis, blogs, and many other forms of \napplications, from within a browser. Once again, the \nwonderful world of shared computing can also put an \norganization’s data at risk. \n DLP application can help with databases as well and \nhalf the battle is knowing where the organizations most \nsensitive data resides. 
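 As a rough illustration of what a data-at-rest discovery scan does, the following Python sketch walks a directory tree and flags files containing SSN-like strings. The path and the single pattern are placeholders; a real data-at-rest engine adds many more categories, file-format parsing (office documents, archives, email stores), validation such as Luhn checks for card numbers, and centralized reporting.

    import os
    import re

    # Illustrative pattern only: SSN-like strings. A real data-at-rest scan
    # would cover many more categories (credit cards, account numbers, etc.).
    SSN_PATTERN = re.compile(r"\b\d{3}-?\d{2}-?\d{4}\b")
    MAX_BYTES = 5 * 1024 * 1024  # skip very large files in this simple sketch

    def scan_tree(root: str) -> list[tuple[str, int]]:
        """Return (path, hit_count) for files under root with SSN-like content."""
        findings = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    if os.path.getsize(path) > MAX_BYTES:
                        continue
                    with open(path, "r", encoding="utf-8", errors="ignore") as fh:
                        hits = len(SSN_PATTERN.findall(fh.read()))
                except OSError:
                    continue  # unreadable file; a real scanner would log this
                if hits:
                    findings.append((path, hits))
        return findings

    if __name__ == "__main__":
        # Hypothetical share; point this at a representative file share or home directory.
        for path, hits in scan_tree("/srv/shared"):
            print(f"{hits:4d} possible SSNs  {path}")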
The data-at-rest function of DLP \napplications can definitely help. \n Data in Use \n DLP applications can also help keep data where it is \nsupposed to stay (see sidebar, “ Case Study: Data-in-\nUse Files ” ). Agent-based technologies that run resi-\ndent on the guest operating system can track, monitor, \na uctioning the exact same model docking stations as the \nones the company had just retired. \n Result \n The Fortune 500 Company was able to remediate a situ-\nation quickly that not only stopped the loss of p hysical \nassets, but get rid of an employee that was stealing while \nat work. The real value of this exercise could have been, \nif an employee is “ ok ” with stealing assets to make a \nfew extra dollars on the company’s dime, one might \nask, What else might an employee that thinks it is ok to \nsteal do? \n" }, { "page_number": 792, "text": "Chapter | 43 Data Loss Protection\n759\nblock, report, quarantine or notify the usage of particular \nkinds of data files and/or the contents of the file itself. \nPolicies can be centrally administered and “ pushed ” out \nof the organization’s computer assets. Since the agent \nis resident on the computer, it can also create an inven-\ntory of every file on the hard drives, removable media \nand even music players. Since the agent knows of the \nfile systems down to the operating system level, it can \nallow or disallow certain types of removable media. \nFor example, an organization might allow a USB stor-\nage device if and only if the device supports encryption. \nThe agent will disallow any other types of USB devices \nsuch as music players, cameras, removable hard drives, \nand so on.\n Background \n An electronics manufacturer has created a revolutionary \nnew design for a cell phone and wants to keep the design \nand photographs of the prototype under wraps. They have \nhad problems in the past with pictures ending up on blog \nsites, competitors “ borrowing ” design ideas, and even other \ncountries creating similar products and launching an imita-\ntor within weeks of the initial product launch. \n Action \n Each document, whether a spreadsheet, document, dia-\ngram, or photograph, was secretly watermarked with a \nspecial secret code. At the same time, the main security \ngroup created an organizational unit within their main \nLDAP application. This was the only group that had per-\nmission to access the watermarked files. At the same time, \na DLP application agent was rolled out to the computer \nassets within the organization. If anyone “ found ” a marked \nfile and tried to do something with it, unless they were \nin the privileged group, access was denied and an alert \n(see Figure 43.2c ) went back to the main DLP reporting \nserver. \n Result \n The electronics manufacturer was able to deliver its revolu-\ntionary product to market in a secure manner and on time. \n Case Study: Data-in-Use Files \n Much like the different flavors of DLP that are available \n(data in motion, data at rest and data in use), conditions of \nthe severity of action that DLP applications take on the event \ncan vary. A good place to diagnose the problems organiza-\ntions are currently facing would be to start with Monitoring \n(see sidebar, “ Case Study in Monitoring ” ). Monitoring is \nonly capturing the actual event that took place to review \nat a later time. Most DLP applications offer real-time or \nnear real-time monitoring of events that violated a policy. 
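 To make the monitoring mode concrete, the sketch below records policy-violating transmissions to a simple log for later review rather than blocking them. The two policy patterns and the log file name are illustrative assumptions, not features of any specific product.

    import json
    import re
    import time
    from dataclasses import dataclass, asdict

    # Illustrative categories; real DLP products ship far richer policy sets.
    POLICIES = {
        "ssn": re.compile(r"\b\d{3}-?\d{2}-?\d{4}\b"),
        "project_codename": re.compile(r"\bproject phoenix\b", re.IGNORECASE),
    }

    @dataclass
    class DlpEvent:
        timestamp: float
        sender: str
        category: str
        excerpt: str  # only a short excerpt is kept for the reviewer

    def monitor(sender: str, payload: str, log_path: str = "dlp_events.jsonl") -> list[DlpEvent]:
        """Passive monitoring: record (never block) transmissions that match a policy."""
        events = []
        for category, pattern in POLICIES.items():
            match = pattern.search(payload)
            if match:
                event = DlpEvent(time.time(), sender, category, match.group(0)[:40])
                events.append(event)
                with open(log_path, "a", encoding="utf-8") as fh:
                    fh.write(json.dumps(asdict(event)) + "\n")
        return events

    if __name__ == "__main__":
        hits = monitor("jdoe@example.com", "Status update on Project Phoenix, ref 078-05-1120")
        for e in hits:
            print(e.category, "captured for later review")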
\nMonitoring coupled with escalation can help most organiza-\ntions immediately. Escalation works well with monitoring as \nwhen an event happens, rules can be put into place on who \nshould be notified and/or how the notification should take \nplace. Email is the most common form of escalation.\n Background \n A large data provider of financial account records dealt \nwith millions of customer records. The customer records \ncontained account numbers and other nonpublic, personal \ninformation (NPPI) and Social Security numbers. The data \nprovider mandated the protection of this type of sensitive \ndata was the top priority of the upcoming year. \n Action \n A DLP application was implemented. Social Security number, \ncustomer information, and NPPI categories were turned on. \nAfter one week, over 1000 data transmissions were captured \nover various protocols (email, FTP, and Web traffic). After \ninvestigating the results, over 800 transmissions were found \nto have come from a pair of servers that was set up to trans-\nmit account information to partners. \n Result \n By simply changing the type of transmission to a secure, \nencrypted format, the data was secured with no interrup-\ntion of normal business processes. Monitoring was the \nway to go in this case and made the organization more \nsecure and minimized the number of processes that were \nimpacted. \n Case Study in Monitoring \n Another action that DLP application supports is noti-\nfication. Notification can temporarily interrupt that trans-\nmission of an event and could require user interaction. See \n Figure 43.2a for an example of the kind of “ bounce ” email \na user could receive if she sends an email containing sen-\nsitive information. See Figure 43.2b for an example of the \ntype of notification a user could see if he tries to open a \nsensitive data document. The DLP application could make \n" }, { "page_number": 793, "text": "PART | VII Advanced Security\n760\nthe user justify why access is needed, deny access, or sim-\nply log the attempt back to the main reporting console. \n Notification can enhance the current user education \nprogram in place and serve as a gentle reminder. The onus \nof action lies solely on the end user and does not take \nresources from the already thinly stretched IT organization. \n The next level of severity of implementing DLP could \nbe quarantining and then outright blocking. Quarantining \nevents places the transmission in “ stasis ” for review from \na DLP administrator. The quarantine administrator can \nrelease, release with encryption, block, or send the event \nback to the offending user for remediation. Blocking is \nan action that stops the transmission in its entirety based \nupon the contents. \n Both quarantining and blocking should be used spar-\ningly and only after the organization feels comfortable \nwith the policies and procedures. The first time an exec-\nutive tries to send a transmission and cannot because of \nthe action set forth in the DLP application, the IT profes-\nsional can potentially lose his job. \n 8. IT’S A FAMILY AFFAIR, NOT JUST IT \nSECURITY’S PROBLEM \n The IT organization does and most likely maintains \nthe corporate email system; almost everyone across all \ndepartments within an organization uses email. The \nsame can be said for the DLP application. Even though \nIT will implement and maintain the DLP application, \nthe events will most likely come from all different types \nof users across the entire organization. 
When concerning events are captured, and there will be many captured by the DLP application, most management will turn to IT to resolve the problem. The IT organization should not become the "police and judge." Each business unit should have published standards on how events should be handled and escalated.

 FIGURE 43.2 (a) Email with user notification on the fly; (b) PC user tries to access a protected document; and (c) policy prompts a justification alert. 6

 Most DLP applications can segregate duties to allow non-IT personnel to review the disposition of captured events. One way to address this would be to assign certain types of events to administrators in the appropriate department. If an email containing racist content is captured, the most appropriate reviewer might be a Human Resources employee. If a personal information transmission is captured, a compliance officer should be assigned. IT might be tasked if the nature of the event is hacking related.

 Users can also have a level of privilege within the DLP application. Reviewers can be assigned to initial investigations of only certain types of events or of all events. If necessary, an event can be escalated to a reviewer's superior. Users can have administrative or reports-only rights.

 Each of these functions relates to the concept of workflow within the DLP application. Events need to be prioritized, escalated, reviewed, annotated, ignored, and eventually closed. The workflow should be easy to use across the DLP community, and reports should be easily accessible and easily created/tuned. See Figure 43.3a for an example of an Executive Dashboard that allows the user to quickly assess the state of risk and allows a quick-click drill-down for more granular information.

 FIGURE 43.3 (a) Dashboard; (b) Email Event Overview. 6

 Figure 43.3b is the result of a click from the Executive Dashboard through to the full content capture of the event.

 9. VENDORS, VENDORS EVERYWHERE! WHO DO YOU BELIEVE?

 At the end of the day, the DLP market and its applications are maturing at an incredible pace. Vendors are releasing new features and functions almost every calendar quarter. In the past, monitoring seemed sufficient to diagnose the central issues of data security; now the marketplace is demanding more control, more granularity, easier user interfaces, and more actionable reports, as well as moving the DLP application off the main network egress point and extending the same functionality to desktops/laptops, servers and their respective endpoints, document storage repositories, and databases.

 In evaluating DLP applications, it is important to focus on the type of underlying engine that analyzes the data and then work up from that base. Next, rate the ease of configuring the data categories and the ability to preload certain documents and document types. Look for a mature product with plenty of industry-specific references. Company stability and financial health should also come into play. Roadmaps of future offerings can give an idea of the features and functions coming in the next release. The relationship with the vendor is an important requirement to make sure that the purchase and subsequent implementation go smoothly.
The vendor should offer training \nthat empowers the IT organization to be somewhat self-\ns ustaining instead of having to go back to the vendor every \ntime a configuration needs to be implemented. The vendor \nshould offer best practices that other customers have used \nto help with quick adoption of policies. This allows for an \neffective system that will improve and lower the overall \nrisk profile of the organization. \n Analyst briefings about the DLP space can be found \non the Internet for free and can provide an unbiased view \nfrom a third party of things that should be evaluated dur-\ning the selection process. \n 10. CONCLUSION \n DLP is an important tool that should at least be evaluated \nby organizations that are looking to protect their employ-\nees, customers, and stakeholders. An effectively imple-\nmented DLP application can augment current security \nsafeguards. A well thought out strategy for a DLP appli-\ncation and implementation should be designed first before \na purchase. All parts of the organization are likely to be \nimpacted by DLP, and IT should not be the only organiza-\ntion to evaluate and create policies. A holistic approach \nwill help foster a successful implementation that is sup-\nported by the DLP vendor and other departments, and ulti-\nmately the employees should improve the data risk profile \nof an organization. The main goal is to keep the brand \nname and r eputation of the organization safe and to con-\ntinue to operate with minimal data security interruptions. \nMany types of DLP approaches are available in the market \ntoday; picking the right vendor and product with the right \nfeatures and functions can foster best practices, augment \nalready implemented employee training and policies, and \nultimately safeguard the most critical data assets of the \norganization. \n" }, { "page_number": 796, "text": " Appendices \n Part VIII \n APPENDIX A Configuring Authentication Service on \nMicrosoft Windows Vista \n John R. Vacca \n APPENDIX B Security Management and Resiliency \n John R. Vacca \n APPENDIX C List of Top Security Implementation and \nDeployment Companies \n APPENDIX D List of Security Products \n APPENDIX E List of Security Standards \n APPENDIX F List of Miscellaneous Security Resources \n APPENDIX G Ensuring Built-in Frequency Hopping Spread \nSpectrum Wireless Network Security \n APPENDIX H Configuring Wireless Internet Security \nRemote Access \n APPENDIX I Frequently Asked Questions \n APPENDIX J Glossary \n" }, { "page_number": 797, "text": "This page intentionally left blank\n" }, { "page_number": 798, "text": "765\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Configuring Authentication Service on \nMicrosoft Windows Vista \n John R. Vacca \n Appendix A \n This appendix describes the configuration of Windows \nVista® authentication service features that are relevant \nto IT professionals. The authentication service features \nincluded with Windows Vista extend to a strong set of \nplatform-based authentication features to help provide \nbetter security, manageability, and user experience. The \nfeatures include the following: \n ● Backup and Restore of Stored Usernames and \nPasswords \n ● Credential Security Service Provider and SSO for \nTerminal Services Logon \n ● TLS/SSL Cryptographic Enhancements \n ● Kerberos Enhancements \n ● Smart Card Authentication Changes \n ● Previous Logon Information 1 \n 1. 
BACKUP AND RESTORE OF STORED \nUSERNAMES AND PASSWORDS \n In some environments, the stored usernames and pass-\nwords feature significantly affects user efficiency and \nproductivity because it can store dozens of credentials \nfor using network resources per user. Unfortunately, this \nsame usefulness can lead to frustration if a user suddenly \nloses his or her stored usernames and passwords because \nof a hardware or software failure on the client computer \nshe regularly uses. 1 \n To resolve this potential source of user frustration (or \nto help organizations support this feature), Windows Vista \nincludes a Backup and Restore Wizard that allows users \nto back up usernames and passwords they have requested \nthat Windows remember for them. This new functionality \nallows users to restore the usernames and passwords on \nany computer running Windows Vista. Restoring a backup \nfile on a different computer allows users to effectively \nroam or move their saved usernames and passwords. 1 \n Caution: Restoring usernames and passwords from a \nbackup file will replace any existing saved usernames \nand passwords the user has on the computer. \n To access this feature, open Control Panel , double-\nclick User Accounts , and then click Manage your \nnetwork passwords . If you have previously saved any \ncredentials, the Back up button is enabled. 1 \n Automation and Scripting \n For security reasons, this feature cannot be automated, \nnor can the backup process be initiated by an application \nthat runs under standard user credentials. This feature \nrequires Windows Vista. 1 \n Security Considerations \n The backup file is encrypted by using Advanced Encryp-\ntion Standard (AES) and a password supplied by the \nuser at the time the backup is performed. The password \nshould be a strong password to avoid the possibility of \nthe backup being compromised if the password is lost. 1 \n 2. CREDENTIAL SECURITY SERVICE \nPROVIDER AND SSO FOR TERMINAL \nSERVICES LOGON \n Authentication protocols are implemented in Windows \nby security service providers. Windows Vista introduces \n 1 “ Windows Vista Authentication Features, ” Microsoft TechNet, © \n2008 Microsoft Corporation; all rights reserved; Microsoft Corporation, \nOne Microsoft Way, Redmond, WA 98052-6399, 2008. \n" }, { "page_number": 799, "text": "PART | VIII Appendices\n766\n a new authentication package called the Credential \nSecurity Service Provider, or CredSSP, that provides a \nsingle sign-on (SSO) user experience when starting new \nTerminal Services sessions. CredSSP enables applica-\ntions to delegate users ’ credentials from the client com-\nputer (by using the client-side security service provider) \nto the target server (through the server-side security \nservice provider) based on client policies. CredSSP poli-\ncies are configured via Group Policy, and delegation of \ncredentials is turned off by default. 1 \n Like the Kerberos authentication protocol, CredSSP \ncan delegate credentials from the client to the server, but \nit does so by using a completely different mechanism \nand with different usability and security characteristics. \nWith CredSSP, when policy specifies that credentials \nshould be delegated, users will be prompted for creden-\ntials (unlike Kerberos delegation) which means the user \nhas some control over whether the delegation should \noccur and (more importantly) what credentials should be \nused. With Kerberos delegation, only the user’s Active \nDirectory® credentials can be delegated. 
1 \n Unlike the experience in Windows Server® 2003 \nTerminal Server, the credential prompt is on the client \ncomputer and not the server. Most importantly, the client \ncredential prompt is on the secure desktop. Therefore, not \neven the Terminal Services client can see the credentials, \nwhich is an important Common Criteria requirement. \nFurthermore, the credentials obtained from the prompt \nwill not be delegated until the server identity is authen-\nticated (subject to policy configuration). Finally, the \nterminal server will not establish a session for the user \n(which consumes a significant amount of memory and \nCPU processing time on the server) before authenticat-\ning the client, which decreases the chances of successful \ndenial-of-service attacks on the server. 1 \n Requirements \n This feature requires the Terminal Services client to \nrun on Windows Vista or Windows Server 2008. It also \nrequires that Terminal Services be hosted on a server that \nruns Windows Server 2008. 1 \n Configuration \n CredSSP policies, and by extension the SSO function-\nality they provide to Terminal Services, are configured \nvia Group Policy. Use the Local Group Policy Editor \nto navigate to Local Computer Policy | Computer \nConfiguration | Administrative Templates | System | \nCredentials Delegation, and enable one or more of the \npolicy options. 1 \n Security Considerations \n When credential delegation is enabled, the terminal \nserver will receive the user credentials in plaintext form, \nwhich can introduce risk to the network environment if \nthe servers are not well secured. An organization that \nwants to achieve this functionality should plan carefully \nfor its deployment and ensure that an effective security \nprogram for the servers is in place beforehand. 1 \n In addition, a few of the policy settings might increase \nor decrease the risk. For example, the Allow Default \nCredentials with NTLM-only Server Authentication \nand Allow Fresh Credentials with NTLM-only Server \nAuthentication policy settings remove the restriction to \nrequire the Kerberos authentication protocol for authen-\ntication between the client and server. If a computer \nrequires NTLM and either of these settings is selected, \nthen NTLM will be used and will allow communica-\ntion to occur successfully but at a higher security risk. \nThe Kerberos protocol provides significant additional \nsecurity in this scenario because it provides mutual \nauthentication — that is, positive authentication of the \nserver to the client. This functionality is important, \nbecause users should be protected from delegating their \nplaintext credentials to an attacker who might have \ntaken control of a network session. Before enabling the \nNTLM-only policies, network administrators should \nfirst ensure that NTLM authentication is necessary in the \nscenario that they need to support. 1 \n 3. TLS/SSL CRYPTOGRAPHIC \nENHANCEMENTS \n Microsoft has added new TLS extensions that enable the \nsupport of both AES and new elliptic curve cryptography \n(ECC) cipher suites. In addition, custom cryptographic \nmechanisms can now be implemented and used with \nSchannel as custom cipher suites. Schannel is the Windows \nsecurity package that implements TLS and SSL. 1 \n AES Cipher Suites \n The support for AES (which is not available in Microsoft \nWindows® 2000 Server or Windows Server 2003) is \nimportant, because AES has become a National Institute \nof Standards and Technology (NIST) standard. 
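 As a generic illustration (not a Windows- or Schannel-specific procedure), an administrator can confirm that a test connection actually negotiates an AES-based suite with a few lines of Python using the standard ssl module; the host name below is a placeholder.

    import socket
    import ssl

    HOST = "example.com"  # placeholder; substitute an internal test server

    # Limit the offered TLS 1.2-and-below suites to AES-based ones (OpenSSL
    # naming). TLS 1.3 suites are configured separately by OpenSSL, but the
    # negotiated suite is reported the same way below.
    context = ssl.create_default_context()
    context.set_ciphers("ECDHE+AES:DHE+AES:AES")

    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            name, version, bits = tls.cipher()
            print(f"negotiated {name} ({version}, {bits}-bit)")
            if "AES" not in name:
                raise RuntimeError("connection did not negotiate an AES cipher suite")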
To ease \nthe process of bulk encryption, cipher suites that support \n" }, { "page_number": 800, "text": "Appendix | A Configuring Authentication Service on Microsoft Windows Vista\n767\n AES have been added. The following list is the sub-\nset of TLS AES cipher suites defined in Request for \nComments (RFC) 3268, Advanced Encryption Standard \n(AES) Ciphersuites for Transport Layer Security (TLS) \n( http://go.microsoft.com/fwlink/ ?LinkId \u0003 105879), that are \navailable in Windows Vista: \n ● TLS_RSA_WITH_AES_128_CBC_SHA \n ● TLS_RSA_WITH_AES_256_CBC_SHA \n ● TLS_DHE_DSS_WITH_AES_128_CBC_SHA \n ● TLS_DHE_DSS_WITH_AES_256_CBC_SHA 1 \n Requirements \n To negotiate these new cipher suites, the client and server \ncomputers must be running either Windows Vista or \nWindows Server 2008. 1 \n Configure AES \n For information about the registry entries used to config-\nure TLS/SSL ciphers in previous versions of Windows, \nsee TLS/SSL Tools and Settings ( http://go.microsoft.\ncom/fwlink/ ?LinkId \u0003 105880). These settings are only \navailable for cipher suites included with Windows oper-\nating systems earlier than Windows Vista and are not \nsupported for AES. Cipher preferences are configured \nin Windows Vista by enabling the SSL Cipher Suite \nOrder policy setting in Administrative Templates | \nNetwork | SSL Configuration Settings. 1 \n \n Tip: The Windows Vista-based computer must be restarted \nfor any setting changes to take effect. \n ECC Cipher Suites \n ECC is a key-generation technique that is based on ellip-\ntic curve theory and is used to create more efficient and \nsmaller cryptographic keys. ECC key generation dif-\nfers from the traditional method that uses the product of \nvery large prime numbers to create keys. Instead, ECC \nuses an elliptic curve equation to create keys. ECC keys \nare approximately six times smaller than the equivalent \nstrength traditional keys, which significantly reduces the \ncomputations that are needed during the TLS handshake \nto establish a secure connection. 1 \n In Windows Vista, the Schannel security service \nprovider includes new cipher suites that support ECC \ncryptography. ECC cipher suites can now be negoti-\nated as part of the standard TLS handshake. The sub-\nset of ECC cipher suites defined in RFC 4492, Elliptic \nCurve Cryptography (ECC) Cipher Suites for Transport \nLayer Security (TLS) ( http://go.microsoft.com/fwlink/\n ?LinkId \u0003 105881), that are available in Windows Vista \nis shown in the following list: \n ● TLS_ECDHE_ECDSA_WITH_AES_128_CBC_\nSHA_P256 \n ● TLS_ECDHE_ECDSA_WITH_AES_128_CBC_\nSHA_P384 \n ● TLS_ECDHE_ECDSA_WITH_AES_128_CBC_\nSHA_P521 \n ● TLS_ECDHE_ECDSA_WITH_AES_256_CBC_\nSHA_P256 \n ● TLS_ECDHE_ECDSA_WITH_AES_256_CBC_\nSHA_P384 \n ● TLS_ECDHE_ECDSA_WITH_AES_256_CBC_\nSHA_P521 \n ● TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_\nP256 \n ● TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_\nP384 \n ● TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_\nP521 \n ● TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_\nP256 \n ● TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_\nP384 \n ● TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_\nP521 1 \n The ECC cipher suites use three NIST curves: P-256 \n(secp256r1), P-384 (secp384r1), and P-521 (secp521r1). 1 \n Requirements \n To use the ECDHE_ECDSA cipher suites, ECC cer-\ntificates must be used. Rivest-Shamir-Adleman (RSA) \ncertificates can be used to negotiate the ECDHE_RSA \ncipher suites. Additionally, the client and server comput-\ners must be running either Windows Vista or Windows \nServer 2008. 
1 \n Configure ECC Cipher Suites \n Cipher preferences are configured in Windows Vista \nby using the SSL Cipher Suite Order policy set-\nting in Administrative Templates | Network | SSL \nConfiguration Settings. 1 \n \n Tip: The Windows Vista-based computer must be \nrestarted for these settings to take effect. \n" }, { "page_number": 801, "text": "PART | VIII Appendices\n768\n Schannel CNG Provider Model \n Microsoft introduced a new implementation of the cryp-\ntographic libraries with Windows Vista that is referred to \nas Cryptography Next Generation, or CNG. CNG allows \nfor an extensible provider model for cryptographic \nalgorithms. 1 \n Schannel, which is Microsoft’s implementation of \nTLS/SSL for Windows Server 2008 and Windows Vista, \nuses CNG so that any underlying cryptographic mecha-\nnisms can be used. This allows organizations to create \nnew cipher suites or reuse existing ones when used with \nSchannel. The new cipher suites included with Windows \nServer 2008 and Windows Vista are only available to \napplications running in user mode. 1 \n Requirements \n Because both the client and server computers must be \nable to negotiate the same TLS/SSL cipher, the Schannel \nCNG feature requires Windows Server 2008 and \nWindows Vista to use the same custom cipher configured \nfor use on both the client and server computers. In addi-\ntion, the custom cipher must be prioritized above other \nciphers that could be negotiated. 1 \n Configure Custom Cipher Suites \n Cipher preferences, including preferences for cus-\ntom cipher suites, are configured in Windows Vista \nby using the SSL Cipher Suite Order policy setting \n TABLE A.1 Prioritized List of TLS and SSL Cipher Suites \n Number \n Cipher Suites \n 1. \n TLS_RSA_WITH_AES_128_CBC_SHA \n 2. \n TLS_RSA_WITH_AES_256_CBC_SHA \n 3. \n TLS_RSA_WITH_RC4_128_SHA \n 4. \n TLS_RSA_WITH_3DES_EDE_CBC_SHA \n 5. \n TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P256 \n 6. \n TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P384 \n 7. \n TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P521 \n 8. \n TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P256 \n 9. \n TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P384 \n 10. \n TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P521 \n 11. \n TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P256 \n 12. \n TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P384 \n 13. \n TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P521 \n 14. \n TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P256 \n 15. \n TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P384 \n 16. \n TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P521 \n 17. \n TLS_DHE_DSS_WITH_AES_128_CBC_SHA \n 18. \n TLS_DHE_DSS_WITH_AES_256_CBC_SHA \n 19. \n TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA \n 20. \n TLS_RSA_WITH_RC4_128_MD5 \n 21. \n SSL_CK_RC4_128_WITH_MD5 \n 22. \n SSL_CK_DES_192_EDE3_CBC_WITH_MD5 \n 23. \n TLS_RSA_WITH_NULL_MD5 \n 24. \n TLS_RSA_WITH_NULL_SHA \n" }, { "page_number": 802, "text": "Appendix | A Configuring Authentication Service on Microsoft Windows Vista\n769\n in Administrative Templates | Network | SSL \nConfiguration Settings. 1 \n \n Tip: The Windows Vista-based computer must be \nrestarted for these settings to take effect. \n Default Cipher Suite Preference \n Windows Vista prioritizes the complete list of TLS \nand SSL cipher suites as shown in Table A.1 . 1 The \ncipher suite negotiated will be the highest-listed cipher \nsuite that is supported by both the client and the server \ncomputers. 1 \n Previous Cipher Suites \n The Microsoft Schannel provider supports the cipher \nsuites listed in Table A.2 . 1 However, the cipher suites \nare not enabled by default. 
To enable any of these \ncipher suites, use the SSL Cipher Suite Order policy \nsetting in Administrative Templates | Network | SSL \nConfiguration Settings .\n \n Warning: Enabling any of these SSL cipher suites is not \nrecommended. Future versions of Windows might not \nsupport these cipher suites. \n 4. KERBEROS ENHANCEMENTS \n Microsoft’s implementation of the Kerberos authentica-\ntion protocol is significantly improved in Windows Vista \nwith the following features: \n ● AES support \n ● Improved security for Kerberos Key Distribution \nCenters (KDCs) located on branch office domain \ncontrollers 1 \n AES \n This Windows Vista security enhancement enables the use \nof AES encryption with the Kerberos authentication pro-\ntocol. This enhancement includes the following changes \nfrom Windows XP: \n ● AES support for the base Kerberos authentication \nprotocol. The base Kerberos protocol in Windows \nVista supports AES for encryption of ticket-granting \ntickets (TGTs), service tickets, and session keys. \n ● AES support for the Generic Security Service (GSS)-\nKerberos mechanism. In addition to enabling AES \nfor the base protocol, GSS messages (which conduct \nclient/server communications in Windows Vista) are \nprotected with AES. 1 \n Requirements \n All Kerberos authentication requests involve three differ-\nent parties: the client requesting a connection, the server \nthat will provide the requested data, and the Kerberos \nKDC that provides the keys that are used to protect the \nvarious messages This discussion focuses on how AES \ncan be used to protect these Kerberos authentication pro-\ntocol messages and data structures that are exchanged \namong the three parties. Typically, when the parties are \noperating systems running Windows Vista or Windows \nServer 2008, the exchange will use AES. However, if one \nof the parties is an operating system running Windows \n2000 Professional, Windows 2000 Server, Windows XP, \nor Windows Server 2003, the exchange will not use AES. \nThe specific exchanges are: \n ● TGT. The TGT is created by the KDC and sent to the \nclient if authentication to the KDC succeeds. \n ● Service ticket. A service ticket is the data created \nby the KDC, which is provided to the client and \nthen sent by the client to the server to establish \nauthentication of the client. \n TABLE A.2 Previous Cipher Suites \n Number \n Cipher Suites \n 1. \n RSA_EXPORT_RC4_40_MD5 \n 2. \n RSA_EXPORT1024_RC4_56_SHA \n 3. \n RSA_EXPORT1024_DES_CBC_SHA \n 4. \n SSL_CK_RC4_128_EXPORT40_MD5 \n 5. \n SSL_CK_DES_64_CBC_WITH_MD5 \n 6. \n RSA_DES_CBC_SHA \n 7. \n RSA_RC4_128_MD5 \n 8. \n RSA_RC4_128_SHA \n 9. \n RSA_3DES_EDE_CBC_SHA \n 10. \n RSA_NULL_MD5 \n 11. \n RSA_NULL_SHA \n 12. \n DHE_DSS_EXPORT1024_DES_SHA \n 13. \n DHE_DSS_DES_CBC_SHA \n 14. \n DHE_DSS_3DES_EDE_CBC_SHA \n" }, { "page_number": 803, "text": "PART | VIII Appendices\n770\n ● AS-REQ/REP. The authentication service request/\nresponse (AS-REQ/REP) exchange is the Kerberos \nTGT request and reply messages sent to the KDC \nfrom the client. If the exchange is successful, the \nclient is provided with a TGT. \n ● TGS-REQ/REP. The ticket-granting service request/\nresponse (TGS-REQ/REP) exchange is the Kerberos \nservice ticket request and reply messages that are \nsent to the KDC from the client when it is instructed \nto obtain a service ticket for a server. \n ● GSS . 
The Generic Security Service application \nprogramming interface (GSS-API) and the Generic \nSecurity Service Negotiate Support Provider (GSS-\nSPNEGO) mechanisms negotiate a secure context \nfor sending and receiving messages between the cli-\nent and server by using key material derived from the \nprevious ticket exchanges. 1 \n Table A.3 shows whether AES is used in each \nexchange for different combinations of Windows operating \nsystems. 1 \n Read-Only Domain Controller and \nKerberos Authentication \n Windows Vista includes new Kerberos authentication \nprotocol features to further protect a Windows Server \n2008 domain controller that is physically located in a \nbranch office. With the read-only domain controller \n(RODC), the KDC issues TGTs to branch users only and \nforwards other requests to the hub domain controller. 1 \n In the Windows implementation, the keys used to \ncreate TGTs are derived from the password of the krbtgt \naccount. This account and its password are typically \nreplicated to every domain controller in the domain. In \nthe branch office scenario, the risk of theft or unauthor-\nized access to the local domain controller (and therefore \nthe security of the krbtgt account) is typically greater. \nTo mitigate this risk, the RODC has a unique krbtgt \naccount that does not have all of the capabilities of a \nstandard krbtgt account on a standard domain controller. \nIf the RODC is compromised, the scope of the breach in \nregards to the krbtgt account information is limited to \nthat RODC, not the other KDCs. 1 \n 5. SMART CARD AUTHENTICATION \nCHANGES \n Although Windows Server 2003 includes support for \nsmart cards, the types of certificates that smart cards \ncan contain are limited by strict requirements. Each cer-\ntificate needs to be associated with a user principal name \n(UPN) and needs to contain the smart card logon object \nidentifier (also known as OID) in the Enhanced Key \nUsage field. In addition, each certificate requires that \nsigning be used in conjunction with encryption. 1 \n To better support smart card deployments, Windows \nVista enables support for a range of certificates. 
\n TABLE A.3 Usage of AES with Various Windows Operating Systems \n Client \n Server \n KDC \n Ticket/Message Encryption \n Operating systems earlier \nthan Windows Vista \n Operating systems earlier than \nWindows Server 2008 \n Windows Server 2008 \n TGT might be encrypted with \nAES based on policy \n Operating systems earlier \nthan Windows Vista \n Windows Server 2008 \n Windows Server 2008 \n Service ticket encrypted with AES \n Windows Vista \n Windows Server 2008 \n Windows Server 2008 \n All tickets and GSS encrypted \nwith AES \n Windows Vista \n Windows Server 2008 \n Operating systems earlier than \nWindows Server 2008 \n GSS encrypted with AES \n Windows Vista \n Operating systems earlier than \nWindows Server 2008 \n Windows Server 2008 \n AS-REQ/REP and TGS-REQ/REP \nencrypted with AES \n Operating systems earlier \nthan Windows Vista \n Windows Server 2008 \n Operating systems earlier than \nWindows Server 2008 \n No AES \n Windows Vista \n Operating systems earlier than \nWindows Server 2008 \n Operating systems earlier than \nWindows Server 2008 \n No AES \n Operating systems earlier \nthan Windows Vista \n Operating systems earlier than \nWindows Server 2008 \n Operating systems earlier than \nWindows Server 2008 \n No AES \n" }, { "page_number": 804, "text": "Appendix | A Configuring Authentication Service on Microsoft Windows Vista\n771\n Customers now have the ability to deploy smart cards \nwith certificates that are not limited by the previous \nrequirements. Specific certificate requirements that were \nchanged are itemized in the following list: \n ● UPN is no longer required. \n ● For smart card logon, the Enhanced Key Usage \n(no need for smart card logon object identifier) \nand Subject Alternative Name (need not contain \nemail ID) fields are not required. If an enhanced \nkey usage is present, it must contain the Smart Card \nLogon enhanced key usage. \n ● You can enable any certificate to be visible for the \nsmart card credential provider. \n ● Smart card logon certificates do not require a Key \nExchange (AT_KEYEXCHANGE) field. \n ● Certificate Revocation List (CRL) is no longer a \nrequired field. 1 \n Because the restrictions found in previous versions \nof Windows are still considered best practices for cer-\ntificate deployment, the listed changes are not enabled \nby default except for multiple certificates support (some \ncertificates are excluded). Administrators need to con-\nfigure the registry keys on client computers to enable \nthe functionality. Group Policy can be used to accom-\nplish this configuration. In addition to these changes to \nthe smart card requirements, the following functional \nchanges have been made for smart card logon: \n ● When a smart card is inserted into a smart card \nreader, the logon process does not start automati-\ncally. Typically, users are now required to press Ctrl \n \u0002 Alt \u0002 Delete to start the logon process. \n ● All valid smart card logon certificates are \nenumerated and presented to users. The smart card \nmust be inserted before users can enter a personal \nidentification number (PIN). \n ● Keys are no longer restricted to being in the default \ncontainer, and certificates in different smart cards \ncan be chosen. \n ● Multiple Terminal Services sessions are supported in \na single process. Because Windows Vista is closely \nintegrated with Terminal Services to provide fast \nuser switching, this functionality is an important \nimprovement. 
1 \n Additional Changes to Common Smart \nCard Logon Scenarios \n The following part of this appendix, describes the \nchanges in Windows Vista for common smart card \nauthentication and logon scenarios. 1 \n Smart Card Logon of a Single User with One \nCertificate into Multiple Accounts \n In Windows Vista, a single user certificate can be \nmapped to multiple accounts. For example, a user can \nlog on to his or her user account or can log on as domain \nadministrator. 1 \n Smart Card Logon of Multiple Users into a \nSingle Account \n Windows Vista supports the ability for multiple users \nwith unique smart card certificates to log on to a single \naccount, such as an administrator’s account. 1 \n Smart Card Logon Across Forests \n In some situations, such as logon across forests, there \nmight not be enough information in the smart card cer-\ntificate to reliably route the logon request. Windows \nVista allows users to enter a “ hint ” in the form domain\\\nuser username or a fully qualified UPN such as user@\nDNSnameofdomain.com that allows reliable authentica-\ntion routing. For the Hint field to appear during smart \ncard logon, the X509HintsNeeded registry key must be \nset on the client computer. 1 \n OCSP Support for PKINIT \n The Public Key Cryptography for Initial Authentication \nin Kerberos (PKINIT) protocol is the smart card authen-\ntication mechanism for the Kerberos authentication \nprotocol. Online Certificate Status Protocol (OCSP), \ndefined in RFC 2560, X.509 Internet Public Key \nInfrastructure Online Certificate Status Protocol-OCSP \n( http://go.microsoft.com/fwlink/ ?LinkID \u0003 67082), enables \napplications to obtain timely information regarding the \nrevocation status of a certificate. 1 \n Because the Windows KDC is a high-volume trans-\nactional service, it benefits greatly by using OCSP \ninstead of relying on CRLs. Windows Server 2008 KDCs \nwill always attempt to use OCSP to verify certificate \nvalidity. 1 \n Certificate Revocation Support \n Table A.4 describes the registry key values you can use \nto disable CRL checking. 1 \n \n Tip: The settings must be enabled on both the client and \nKDC computers to disable CRL checking. \n" }, { "page_number": 805, "text": "PART | VIII Appendices\n772\n Smart Card Root Certificate Requirements for \nUse When Joining a Domain \n When using a smart card to join a domain, the smart \ncard certificate must comply with one of the following \nconditions: \n ● The smart card certificate must contain a Subject \nfield that contains the DNS domain name within the \ndistinguished name. If it does not contain this field, \nresolution to the appropriate domain will fail, caus-\ning the domain join with smart card to fail. \n ● The smart card certificate must contain a UPN in \nwhich the domain part of the UPN must resolve \nto the actual domain. For example, the UPN user-\nname@engineering.corp.example.com would work, \nbut username@engineering.example.com would not \nwork because the Kerberos client would not be able \nto find the appropriate domain. 1 \n The solution for both of the listed conditions is to \nsupply a hint (enabled via the X509HintsNeeded reg-\nistry setting) in the credentials prompt when joining a \ndomain. If the client computer is not joined to a domain, \nthen the client will only be able to resolve the server \ndomain by viewing the distinguished name on the certif-\nicate (as opposed to the UPN). 
For this scenario to work, \nthe Subject field for the certificate must include “ DC \u0003 ” \nfor domain name resolution. To deploy root certificates \non smart cards for the currently joined domain, the fol-\nlowing command can be used: 1 \n certutil-scroots \n Terminal Server Redirection \n In terminal server scenarios, remote servers are used \nto run services. However, the smart card is local to the \ncomputers from which the connections are established. \nIn smart card logon scenarios, the Smart Card service \non the remote servers will appropriately redirect to the \nsmart card readers that are connected to the remote com-\nputers from which users are trying to log on. 1 \n Terminal Server Sign-On Experience \n As a part of Common Criteria compliance require-\nments, the Terminal Services client must be able to be \nconfigured to use Stored User Names and Passwords to \nacquire and save users ’ passwords and smart card PINs. \nCommon Criteria requires that applications must not \nhave direct access to users ’ passwords or PINs. 1 \n Specifically, the Common Criteria requirement is \nthat passwords and PINs never leave the highly trusted \nLocal Security Authority (LSA) unencrypted. When \nthis requirement is applied to a distributed scenario, it \nshould allow passwords and PINs to travel between one \ntrusted LSA and another if they are encrypted during \ntransit. 1 \n In Windows Vista, smart card users will need to log \non for every new Terminal Services session. However, \nusers are not prompted for their PINs more than once \nto establish a Terminal Services session. For example, \nafter a user double-clicks a Microsoft Word document \nicon that resides on a remote computer, the user will \nbe prompted to enter his or her PIN. This PIN will be \npassed via a secure channel established by CredSSP. The \nPIN is routed to the Terminal Services client over the \nsecure channel, where it is used to access the keys on the \nsmart card. Users are not prompted again for their PINs \nunless a PIN is incorrect or if a smart card fails due to \nproblems with the card or reader. 1 \n Terminal Services and Smart Card Logon \n This feature enables users to log on with smart cards by \nentering a PIN on the Terminal Services client, which \nsecurely relays this information to the server in a man-\nner similar to authentication that uses a user name and \npassword. Smart card logon with Terminal Server was \nfirst introduced in Windows XP. It is not available in any \nearlier versions of the Windows operating system. In \nWindows XP, users can use only one certificate from the \ndefault container to log on. To enable smart card logon \nto a terminal server, the KDC certificate must be present \non the local Terminal Services client computer. 1 \n \n TABLE A.4 Registry Key Values to Disable CRL Checking \n Key \n Value \n HKLM\\SYSTEM\\CCS\\Services\\Kdc\\UseCachedCRLOnlyAndIgnore \nRevocationUnknownErrors \n Type \u0003 DWORD Value \u0003 1 \n HKLM\\SYSTEM\\CCS\\Control\\LSA\\Kerberos\\Parameters\\UseCachedCRL \nOnlyAndIgnoreRevocationUnknownErrors \n Type \u0003 DWORD Value \u0003 1 \n" }, { "page_number": 806, "text": "Appendix | A Configuring Authentication Service on Microsoft Windows Vista\n773\n Tip: This feature requires terminal servers running \nWindows Server 2008. 
If Windows Vista-based client computers are used to log on to terminal servers running Windows Server 2003, the user experience and capabilities are equivalent to using Terminal Server with Windows XP-based client computers.

 Terminal Services and Smart Card Logon across Domains

 With Windows Vista, it is now possible to use smart cards to log on across domains or from computers that are not joined to domains trusted by the users' account domains. Common scenarios where this functionality is required include:
 ● Using Routing and Remote Access to access an organization's network resources across the Internet.
 ● Using Terminal Services across domains that are not trusted or from computers that are not joined to a domain. 1

 To enable this functionality, the root certificate for the user's account domain must be provisioned on the smart card. To provision the domain root certificate while using a computer that is a member of the domain, type the following at a command prompt: 1

 certutil -scroots update

 In addition, for Terminal Services access across trusting domains, the KDC certificate of the domain of the Terminal Services computer needs to be present in the NTAuth store of the client computer. The following command can be used to provision this certificate onto the client computer: 1

 certutil -addstore -enterprise NTAUTH certfile

 Replace certfile with the root certificate of the KDC certificate issuer. 1

 To log on to a terminal server by using a smart card from a client computer that is not a member of the domain, the smart card needs to be provisioned with the root certificate of the terminal server's domain controller. Furthermore, cross-domain terminal server logon will work only if the UPN attribute of the certificate is in the form username@domain_dns_name. If it is impossible or impractical to deploy certificates with the UPN attribute in this form, enabling domain hints with Group Policy is a potential solution. 1

 Terminal Services and Smart Card Logon Registry Settings

 If smart card logon is the only form of logon that is supported, enable the following registry setting: 1

 HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\CredentialsDelegation

 Caution: Incorrectly editing the registry may severely damage your system. Before making changes to the registry, you should back up any valued data on the computer.

 AllowFreshCredentials

 If the user also has a corresponding password-enabled account, enabling the following setting would also be useful: 1

 HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\CredentialsDelegation

 AllowFreshCredentialsWhenNTLMOnly

 Delegation of default and saved credentials is not supported for Terminal Services with smart card logon. The following Group Policy settings are ignored:
 ● HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\CredentialsDelegation
 ● AllowDefCredentials
 ● AllowDefCredentialsWhenNTLMOnly
 ● AllowSavedCredentials
 ● AllowSavedCredentialsWhenNTLMOnly 1

 6. PREVIOUS LOGON INFORMATION

 This setting enables users to determine whether their accounts were used (or were attempted to be used) without their knowledge. When this policy is enabled and the Windows Vista-based computer is joined to a Windows Server 2008 functional-level domain, the following information is displayed after a successful interactive logon:
 1.
Date and time of the last successful logon by \nthat user \n 2. Date and time of the last unsuccessful logon attempt \nwith the same user name \n 3. The number of failed logon attempts since the last \nsuccessful logon with the same username 1 \n" }, { "page_number": 807, "text": "PART | VIII Appendices\n774\n \n Note: The source of this information is the Active \nDirectory database or Security Accounts Manager (SAM) \nof the computer providing the information. For local \naccounts, this information is always up to date. However, \nin domain environments, domain controllers depend on \nreplication, so the information might not be up to date. \n Configuration \n To enable this feature, use the Local Group Policy Editor \nto navigate to Local Computer Policy | Computer \nConfiguration | Administrative Templates | Windows \nComponents | Windows Logon Options , and enable \n Display Information about previous logons during \nuser logon . Additional information is available on the \n Explain Text tab for this setting. 1 \n \n Caution: If this policy is enabled and the Windows Vista-\nbased computer is not joined to a Windows Server 2008 \nfunctional-level domain, a warning message will appear \nstating that the information could not be retrieved and \nthe user will not be able to log on. Do not enable this \npolicy setting unless the Windows Vista – based computer \nis joined to a Windows Server 2008 functional-level \ndomain. \n Security Considerations \n User training that accompanies the deployment of \nWindows Vista should include information about how to \nuse this information and what to do if the information \ndoes not represent the user’s actions. 1 \n" }, { "page_number": 808, "text": "775\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Security Management and Resiliency \n John R. Vacca \n Appendix B \n The United States needs to strengthen the computer secu-\nrity management and resilience component of its home-\nland security strategy to mitigate the effect of successful \nterrorist attacks and other disasters. As a nation, the U.S. \nmust be able to withstand a blow and then bounce back. \nThat’s resilience ! 1 \n The U.S. can mitigate risk but cannot guarantee that \nanother attack will not occur, nor can it prevent natural and \naccidental disasters. It requires this country to admit that \nsome disasters cannot be avoided. It also requires the U.S. \nto acknowledge that, faced with disaster, most citizens, \nbusinesses, and other institutions will take action to rescue \nthemselves and others. 1 \n According to U.S. government analysts, 86% of the \ncountry’s critical infrastructure is privately owned and \noperated. However, it isn’t the job of the government to \ndo the on-the-ground work of security management and \nresiliency. 1 \n The private sector can provide the means and the \nexecution. On the other hand, it should also be the cham-\npion and the facilitator of security management and \nresiliency chain; balancing the interests of stakehold-\ners; setting broad objectives and strategies; and providing \noversight. 1 \n However, the creativity and ingenuity of the \nAmerican people must also be taken into consideration, \nincluding the businesses they create. This also includes \nhow government can prepare its citizens for a disaster \nor an emergency by giving them the necessary security \nmanagement tools. 
1 \n The government’s first goal is to provide timely and \naccurate security management information during a crisis. \nGovernment must also leverage technology to help inform \ncitizens that danger is near. The Department of Homeland \nSecurity (DHS) has created the Ready Business program, \nwhich gives small-to-medium-size businesses guid-\nance on which security management and resiliency tools \nand resources are available to them to ensure business \ncontinuity. 1 \n The government’s second goal is to provide order, so \ncitizens can focus on disaster response, rather than protect-\ning themselves from social chaos. While local and state \nforces can maintain order during a disaster; just in case \nthey cannot, DHS is studying specialized law enforcement \ndeployment teams (LEDTs). These teams from neighboring \njurisdictions would assist local and state forces when they \nare taxed to the breaking point. LEDTs could help provide \nan organized system that would allow state and local law \nenforcement to assist each other to quickly resume normal \npolice services, to an area hit by a terrorist attack or natural \ndisaster (something Louisiana and New Orleans police did \nnot have after Hurricane Katrina struck). 1 \n Finally, the government can increase infrastructure \nsecurity management and resilience after an attack or \ndisaster. Nevertheless, this can be done through the dis-\npersal of key functions across multiple service providers, \nflexible supply chains, and related systems. 1 \n Business should build in such flexibility as well. \nFlexibility, is cheaper than redundancy, and it is also a \nsmart business decision because it makes companies more \ncompetitive. 1 \n A company that builds in the ability to respond to \nsupply disruption is automatically building in the abil-\nity to respond to demand fluctuations and winning mar-\nket share. For the past 16 years, AT & T has invested in \nmobile central offices: 500 trailers that hold everything \n 1 Matthew Harwood, “ U.S. Must be More Resilient to Disasters and \nTerrorism, Experts Explain, ” Copyright © 2008, Security Management, \nSecurity Management, ASIS International, Inc. Worldwide Headquarters \nUSA, 1625 Prince Street, Alexandria, Virginia 22314-2818, 2008. \n" }, { "page_number": 809, "text": "PART | VIII Appendices\n776\nthe company needs to keep their network up and running. \nOn 9/11, AT & T dispatched these trailers to New York \nbecause the terrorist attack had knocked out a company \ntransport hub in the sixth subbasement of the World \nTrade Center’s South Tower. Within 48 hours, the trail-\ners were operational and accepting call traffic. 1 \n Finally, the United States has been too focused on \nprevention to the detriment of security management and \nresilience. After 9/11, the Bush administration focused \nsolely on preventing the next attack, as opposed to how \nbest to recover should an incident occur. That, of course, \nis not the best approach. 1 \n" }, { "page_number": 810, "text": "777\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n List of Top Security Implementation \nand Deployment Companies \n Appendix C \n Aaditya Corporation ( www.aadityacorp.com/ ): Network \nSecurity Consultants offering specialization in net-\nwork security policy definition. 
\n Advent Information Management Ltd ( www.advent-\nim.co.uk/ ): Knowledge-based consultancy offer-\ning information management advice, training and \nconsultancy. \n AMPEG Security Lighthouse ( www.security-lighthouse.\ncom/ ): Produce information security management \nsoftware which provides a national and international \nperspective on your company’s security status. \n Atsec ( www.atsec.com/01/index.php ): Offers a range \nof services based on the Common Criteria standard, \nISO 15408. \n BindView Policy Compliance ( www.symantec.com/\nbusiness/solutions/ index.jsp?ptid \u0003 tab2 & ctid \u0003 tab2_\n2): Provides organizations with advanced tools to \nproactively build and measure security best practices \nacross the enterprise. \n CF6 Luxembourg S.A. ( www.cf6.lu/ ): Security policies \nare the formalization of security needs in order to \ndefine security measures for implementation. CF6 \nLuxembourg proposes to help companies to develop \nand implement security policies. \n Citicus ONE-security risk management ( www.citicus.\ncom/index.asp ): Citicus provides tools for informa-\ntion risk management to ensure that compliance with \nsecurity policy can be monitored and enhanced. \n Computer Policy Guide ( www.computerpolicy.com/ ): \nA commercial manual with sample policies. Topics \ninclude: Email; Internet Usage; Personal Computer \nUsage; Information Security; and Document \nRetention. \n CoSoSys SRL ( www.fortedownloads.com/ CoSoSys-\nSRL-Surf-it-Easy/): Provides software to protect PC \nendpoints and networks. \n Delta Risk LLC ( www.delta-risk.net/ ): Provides infor-\nmation on a range of policy related services, includ-\ning risk assessment, awareness and training. \n DynamicPolicy – Efficient Policy Management \n( www.zequel.com/ ): DynamicPolicy is an Intranet \napplication that enables companies to automatically \ncreate, manage and disseminate their corporate poli-\ncies and procedures, particularly those related with \nInformation Security. \n FoolProof Software – Desktop Security for windows \nand MAC OS-education and libraries ( www.fool-\nproofsoftware.com/assets/cm.js ): FoolProof Security \nprovides complete protection for both Windows® \nand Macintosh® operating systems and desktops \nby preventing unwanted or malicious changes from \nbeing made to the system. \n Information Management Technologies ( www.imt.com.\nsa/files/index.asp ): Saudi Arabia. BS7799 Audit; \nForensic Services and Training; Data Recovery \nLaboratory; Managed Security Service Provider; \nSecurity Control Frameworks. \n Information Security and IT Security Austria ( www.\neclipse-security.at/ ): Austrian Information Security \nand IT-Security Company offering Consulting \nServices, Information Security Awareness Training \nin German and English language, and M.M.O.S.S \nSoftware (Massive Multiuser Online Sensitiving \nSoftware). \n Information Shield, Inc. ( www.informationshield.com/ ): \nA global provider of prepackaged security policies \nand customizable implementation guidance. \n IT Security Essentials Guide ( www.ovitztaylor-\ngates.com/TheITSecurityEssentialsGuide.html ): \nManagement resources for enterprise projects, offer-\ning how-to workbooks, project plans and planning \nguides, tools, templates and checklists. \n" }, { "page_number": 811, "text": "PART | VIII Appendices\n778\n Layton Technology Inc. ( www.laytontechnology.com/ ): \nOffers a range of Windows based audit, monitoring \nand access control software solutions. 
\n Megaprime ( www.megaprime.com.au/ ): Offers ISO/IEC \n17799 compliant information security policy and \nmanagement systems, security architectures, secure \napplications and networks. \n Pirean Limited Homepage ( www.pirean.com/ ): \nProviding enterprise systems, risk management \nand information security services, applications and \neducation with a focus on security management and \ninternational standards (BS7799 / ISO17799). \n Policy Manager – Cisco Systems ( www.cisco.com/en/\nUS/products/sw/netmgtsw/ps996/ps5663/index.\nhtml ): Cisco Secure Policy Manager is a scalable, \npowerful security policy management system for \nCisco firewalls and Virtual Private Network (VPN) \ngateways. Assistance is also provided with the devel-\nopment and auditing of security policy. \n Prolateral Consulting ( www.prolateral.com/ ): \nConsultancy for ISO17799 BS7799, Information \nSecurity Management. \n Prolify ( www.prolify.com/ ): Prolify delivers Dynamic \nProcess Management (DPM) solutions for IT \nGovernance, enabling enterprises to achieve higher \nlevels of control, efficiency and compliance. \n PSS Systems ( www.pss-systems.com/ ): Document policy \nmanagement and enforcement for electronic docu-\nments. Enterprise software to protect, track and ensure \nthe destruction of highly mobile, distributed assets. \n Ruskwig Security Portal ( www.ruskwig.com/ ): Provides \nsecurity policies, an encryption package, security \npolicy templates, Internet and email usage policies. \n Secoda Risk Management ( www.secoda.com/ ): \nRuleSafe from Secoda enables the people in your \norganization to achieve real awareness of policies. \nExpert content and compliance tracking help organi-\nzations implement security (BS7799), privacy and \nregulatory requirements. \n Security Policies & Baseline Standards: Effective \nImplementation ( www.security.kirion.net/\nsecuritypolicy/ ): Discussion of topic with security \npolicies and baseline standards information. \n ● Singular Security ( www.singularsecurity.com/ ): \nA firm specializing in delivering security \nconfiguration management services designed to \nmitigate both computing and company-wide policy \ncompliance requirements. \n ● Spry Control LLC ( www.sprycontrol.com/ ): Spry \nControl provides Information Technology Audit \nServices as a part of corporate oversight or external \naudit that address information security, data privacy \nand technology risks. \n ● VantagePoint Security ( www.vantagepointsecurity.\ncom/ ): Professional services firm specializing in \nsecurity policy, assessments, risk mitigation, and \nmanaged services. \n ● Vision training and consultancy: Dedicated to onsite \nconsultancy and training in the following elements: \nBS7799 and ISO90012000. \n ● Xbasics, LLC: Offers information security and \nFISMA-related software for government, industry \nand consulting organizations. \n LIST OF SAN IMPLEMENTATION AND \nDEPLOYMENT COMPANIES \n There are many different SAN implementation and deploy-\nment companies that offer products and services, from very \nlarge companies to smaller, lesser-known ones. 
A list of \nsome of the more well-known companies is offered here: \n Bull: www.bull.com/index.php \n Dell: www.dell.com \n EMC: www.emc.com \n Fujitsu: www.fujitsu.\ncom/global/services/computing/storage/system/ \n HP: welcome.hp.com/country/us/en/prodserv/storage.\nhtml \n Hitachi Data Systems: www.hds.\ncom/products/storage-networking/ \n IBM: www-03.ibm.com/systems/storage/ \n NetApp: www.netapp.com/us/ \n NEC: www.nec.co.jp/necstorage/global/index.shtml \n 3PAR: www.3par.com/index.html \n Sun: www.sun.com/storagetek/networking.jsp \n Xiotech: www.xiotech.com/ \n SAN SECURITY IMPLEMENTATION AND \nDEPLOYMENT COMPANIES: 1 \n McData SANtegrity Security Suite Software ( www.\nmcdata.com/products/network/security/santegrity.\nhtml ): SANtegrity Security Suite enhances business \ncontinuity by reducing the impact of human influ-\nences on your networked data. This robust suite of \nsoftware applications provides unsurpassed storage \narea network (SAN) protection. SANtegrity lets \n 1 SAN Security www.sansecurity.com/san-security-vendors.shtml \n" }, { "page_number": 812, "text": "Appendix | C List of Top Security Implementation and Deployment Companies\n779\n you build secure storage networks by providing \nend-to-end security features for McD fabrics. Using \nSANtegrity software, you can apply layers of secu-\nrity to individual storage network ports, switches and \nentire fabrics through: Multi-level Access Control, \nAdvanced Zoning Security, Secure Management \nzones, SANtegrity Features and Functions. \n Brocade Secure Fabric OS ( www.brocade.com/prod-\nucts/silkworm/silkworm_12000/fabric_os_datasheet.\njsp ): A Comprehensive SAN Security Solution. As \na greater number of organizations implement larger \nStorage Area Networks (SANs), they are facing \nnew challenges in regard to data and system secu-\nrity. Especially as organizations interconnect SANs \nover longer distances through existing networks, \nthey have an even greater need to effectively man-\nage their security and policy requirements. To help \nthese organizations improve security, Brocade has \ndeveloped Secure Fabric OS ™ , a comprehensive \nsecurity solution for Brocade-based SAN fabrics \nthat provides policy-based security protection for \nmore predictable change management, assured con-\nfiguration integrity, and reduced risk of downtime. \nSecure Fabric OS protects the network by using the \nstrongest, enterprise-class security methods avail-\nable, including digital certificates and digital signa-\ntures, multiple levels of password protection, strong \npassword encryption, and Public Key Infrastructure \n(PKI)-based authentication, and 128-bit encryption \nof the switch’s private key used for digital signatures. \n Hifn 4300 HIPP III Storage Security Processor ( www.\nhifn.com/products/4300.html ): The Hifn ™ HIPP III \n4300 Storage Security Processor is the first security \nprocessor designed for the specific requirements of \nIP Storage applications. The 4300 offers a complete \nIPsec data path solution optimized for IP Storage \nbased systems, combining inbound and outbound \npolicy processing, SA lookup, SA context handling, \nand packet formatting – all within a single chip. \nHifn’s 4300 delivers industry-leading cryptographic \nfunctionality, supporting the DES/3DES-CBC, AES-\nCBC, AES-CTR, MD5, SHA-1 and AES-XCBC-\nMAC algorithms. Hifn also provides complete \nsoftware support, including an optional onboard \niSCSI-compliant IPsec software stack, offering an \nembedded HTML manager application. 
\n HP StorageWorks Secure Fabric OS ( javascript:\nvar%20handle \u0003 window.open( ‘ h18006. www1.\nhp.com/products/storage/software/sansecurity/ ): HP \nSecure Fabric OS solutions include a c omprehensive \nSAN infrastructure security software tool and value \nadded services for 1 Gb and 2 Gb SAN Switches \nenvironments. With its flexible design, the Security \nfeature enables organizations to customize SAN \nfabric security in order to meet specific policy \nrequirements. In addition, Security Fabric OS works \nwith security practices already deployed in many \nSAN environments such as Advanced Zoning. HP \nServices also provide a portfolio of services ranging \nfrom the broad SAN Design and Architecture that \ncan provide a complete multisite security design, to \na single site Security Installation & Startup service \nthat shows you how to configure your Secure Fabric \nOS environment using the most used industry tested \naspects of security. HP Secure Fabric OS is a com-\nplete solution for securing SAN infrastructures. \n Decru Dataform Security Appliances ( www.decru.com/ ): \nDecru DataFort ™ is a reliable, multigigabit-speed \nencryption appliance that integrates transparently into \nNAS, SAN, DAS and tape backup environments. By \nlocking down stored data with strong encryption, and \nrouting all access through secure hardware, DataFort \nradically simplifies the security model for networked \nstorage. DataFort appliances combine secure access \ncontrols, authentication, storage encryption, and \nsecure logging to provide unprecedented protection \nfor sensitive stored data. Because DataFort protects \ndata at rest and in flight with strong encryption, even \norganizations that outsource IT management can be \nsure their data assets are secure. In short, DataFort \noffers a powerful and cost-effective solution to \naddress a broad range of external, internal, and \nphysical threats to sensitive data. \n Kasten Chase Assurency ( www.kastenchase.com/ ): \nAssurency ™ SecureData provides comprehensive \nsecurity for data storage, including SAN, NAS and \nDAS. Utilizing authentication, access control and \nstrong industry-standard encryption, SecureData pro-\ntects valuable information stored on online, near-line, \nand backup storage media. These safeguards extend \nto stored data both within the datacenter and at off-\nsite data storage facilities. Assurency SecureData \nprotects valuable data assets, such as email, financial \nand health care information, customer and personnel \nrecords and intellectual property. For government \nagencies, Assurency SecureData protects intelli-\ngence and national defense data, law enforcement \ninformation, and confidential citizen records. Helps \nbuild trusted brands that earn customer loyalty and \nretention. \n" }, { "page_number": 813, "text": "This page intentionally left blank\n" }, { "page_number": 814, "text": "781\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n List of Security Products \n Appendix D \n SECURITY SOFTWARE \n Activeworx ( www.crosstecsoftware.com/security/index.\nhtml ): Activeworx provides organizations with \ncomprehensive, real time expert security informa-\ntion analysis combined with strong correlation intel-\nligence, flexible and robust reporting capabilities, \nand tight integration with our best of breed logging \nsolution. 
\n Ad-Aware 2008 Definition File ( www.download.\ncom/Ad-Aware-2008-Definition-File/3000-8022_4-\n10706164.html ): Updates your Ad-Aware 2007 defi-\nnition file to the latest release. \n Ad-Aware 2008 ( www.download.com/Ad-Aware-\n2008/3000-8022_4-10045910.html ): Protects your \npersonal home computer from malware attacks. \n ArcSight ( www.arcsight.com/product_overview.htm ): \nThe ArcSight SIEM Platform is an integrated set of \nproducts for collecting, analyzing, and managing \nenterprise event information. These products can \nbe purchased and deployed separately or together, \ndepending on organization size and needs. \n Ashampoo FireWall ( www.download.com/Ashampoo-\nFireWall/3000-10435_4-10575187.html ): Sets up an \neffective firewall and prevents unauthorized online \nactivity. \n Avast Virus Cleaner Tool ( www.download.com/Avast-\nVirus-Cleaner-Tool/3000-2239_4-10223809.html ): \nRemoves selected viruses and worms from your \ncomputer. \n AVG Anti-Virus ( www.download.com/AVG-Anti-\nVirus/3000-2239_4-10385707.html ): Protects your \ncomputer from viruses and malicious programs. \n AVG Internet Security ( www.download.com/AVG-\nInternet-Security/3000-2239_4-10710160.html ): \nProtects your PC from harmful Internet threats. \n Avira AntiVir Personal-Free Antivirus ( www.download.\ncom/Avira-AntiVir-Personal-Free-Antivirus/3000-\n2239_4-10322935.html ): Detects and eliminates \nviruses, get free protection for home users. \n Avira Premium Security Suite ( www.download.\ncom/Avira-Premium-Security-Suite/3000-2239_4-\n10683930.html ): Protects your workstation from \nviruses and online threats. \n CCleaner ( www.download.com/CCleaner/3000-2144_\n4-10315544.html ): Cleans up junk files and invalid \nRegistry entries. \n Comodo Firewall Pro ( www.download.com/Comodo-\nFirewall-Pro/3000-10435_4-10460704.html ): \nProtects your PC from harmful threats. \n ESET NOD32 Antivirus ( www.download.com/ESET-\nNOD32-Antivirus/3000-2239_4-10185608.html ): \nProtects your system against viruses. \n Folder Lock ( www.download.com/Folder-Lock/3000-\n2092_4-10063343.html ): Password-protects, locks, \nhides or encrypts files, folders, drives and portable \ndisks in seconds. \n Hotspot Shield ( www.download.com/Hotspot-\nShield/3000-2092_4-10594721.html ): Maintains \nyour anonymity and protects your privacy when \naccessing free Wi-Fi hotspots. \n Intelinet Spyware Remover ( www.download.com/\nIntelinet-Spyware-Remover/3000-8022_4-10888927.\nhtml ): Removes and blocks all types of spyware and \nPC errors on your computer. \n Kaspersky Anti-Virus ( www.download.com/Kaspersky-\nAnti-Virus/3000-2239_4-10259842.html ): Monitors \nand detects viruses and protects your PC from \nviruses, malware, and Trojan attacks. \n Kaspersky Anti-Virus Definition Complete Update \n( www.download.com/Kaspersky-Anti-Virus-\nDefinition-Complete-Update/3000-2239_4-\n10059428.html ): Updates your Kaspersky Anti-Virus \nwith the latest overall set of virus definitions. \n Kaspersky Internet Security ( www.download.\ncom/Kaspersky-Internet-Security/3000-2239_4-\n10012072.html ): Detects and eliminates viruses and \nTrojan horses, even new and unknown ones. \n KeyScrambler Personal ( www.download.com/\nKeyScrambler-Personal/3000-2144_4-10571274.\n" }, { "page_number": 815, "text": "PART | VIII Appendices\n782\nhtml ): Encrypts keystrokes to protect your username \nand password from keyloggers. 
\n Malwarebytes ’ Anti-Malware ( www.download.\ncom/Malwarebytes-Anti-Malware/3000-8022_4-\n10804572.html ): Detects and removes malware from \nyour computer. \n McAfee VirusScan Plus ( www.download.com/\nMcAfee-VirusScan-Plus/3000-2239_4-10581368.\nhtml ): Removes spyware or virus threats and prevents \nmalicious applications from invading your PC. \n Norton 360 ( www.download.com/Norton-360/3000-\n2239_4-10651162.html ): Surround yourself with \nprotection from viruses, spyware, fraudulent Web \nsites, and phishing scams. \n Norton AntiVirus 2009 ( www.download.com/Norton-\nAntiVirus-2009/3000-2239_4-10592477.html ): \nProtects yourself from all forms of online viruses. \n Norton AntiVirus 2009 Definitions Update ( www.\ndownload.com/Norton-AntiVirus-2009-Definitions-\nUpdate/3000-2239_4-10737487.html ): Updates virus \ndefinitions for your Norton AntiVirus and Internet \nSecurity 2008 products. \n Norton Internet Security ( www.download.com/Norton-\nInternet-Security/3000-8022_4-10592551.html ): \nPrevents viruses, spyware, and phishing attacks to \nenjoy secure Web connection. \n Online Armor Personal Firewall ( www.download.com/\nOnline-Armor-Personal-Firewall/3000-10435_4-\n10426782.html ): Protects your system from para-\nsites, viruses, and identity theft while surfing the \nWeb. \n ParentalControl Bar ( www.download.com/\nParentalControl-Bar/3000-2162_4-10539075.html ): \nPrevents your children from accessing adult-oriented \nWeb sites. \n Password Dragon ( www.download.com/Password-\nDragon/3000-2092_4-10844722.html ): Manages \nyour passwords and keeps them in a safe place. \n PeerGuardian ( www.download.com/PeerGuardian/3000-\n2144_4-10438288.html ): Protect yourself on P2P \nnetworks by blocking IPs. \n RSA Envision ( www.rsa.com/node.aspx?id \u0003 3170 ): \nThe RSA enVision platform provides the power to \ngather and use log data to understand your security, \ncompliance, or operational status in real time or \nover any period of time. \n Secunia Personal Software Inspector ( www.download.\ncom/Secunia-Personal-Software-Inspector/3000-\n2162_4-10717855.html ): Scans your installed pro-\ngrams and categorizes them by their security-update \nstatus. \n Spybot-Search & Destroy ( www.download.com/Spybot-\nSearch-amp-Destroy/3000-8022_4-10122137.html ): \nSearches your hard disk and Registry for threats to \nyour security and privacy. \n SpywareBlaster ( www.download.com/Spyware\nBlaster/3000-8022_4-10196637.html ): Prevents spy-\nware from being installed on your computer. \n Spyware Doctor ( www.download.com/Spyware-\nDoctor/3000-8022_4-10293212.html ): Remove spy-\nware, adware, Trojan horses, and key loggers with \nthis popular and fast utility. \n Syslog NG ( www.balabit.com/network-security/\nsyslog-ng/ ): The syslog-ng application is a flexible \nand highly scalable system logging application that \nis ideal for creating centralized and trusted logging \nsolutions. \n Trend Micro AntiVirus plus AntiSpyware ( www.\ndownload.com/Trend-Micro-AntiVirus-plus-\nAntiSpyware/3000-2239_4-10440657.html ): Detects \nand removes adware and spyware from your home \ncomputer. \n Trend Micro HijackThis ( www.download.com/Trend-\nMicro-HijackThis/3000-8022_4-10227353.html ): \nScans the Registry and hard drive for spyware. \n ZoneAlarm Firewall (Windows 2000/XP): ( www.\ndownload.com/ZoneAlarm-Firewall-Windows-2000-\nXP-/3000-10435_4-10039884.html ): Protects your \nInternet connection from hackers and other security \nbreaches. 
\n ZoneAlarm Internet Security Suite ( www.download.\ncom/ZoneAlarm-Internet-Security-Suite/3000-\n8022_4-10291278.html ): Equips your system to deal \nwith Web-based security threats including hackers, \nviruses, and worms. \n" }, { "page_number": 816, "text": "783\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n List of Security Standards \n Appendix E \n BITS Financial Services Roundtable ( www.bits.org/\nFISAP/index.php ): Security assessment question-\nnaire and review process based on ISO/IEC 27002 \n(access requires free registration). Also information \non the overlaps between ISO/IEC 27002, PCI-DSS \n1.1 and COBIT. \n Common Criteria ( www.commoncriteriaportal.org/\nthecc.html ): Provides the Common Criteria for \nInformation Technology Security Evaluation, also \npublished as ISO/IEC 15408. \n ISO 27001 Certificates ( iso27001certificates.com/ ): List \nof organizations certified against ISO/IEC 27001 \nor equivalent national standards, maintained by the \nISMS International User Group based on inputs from \nall the certification bodies. \n ISO 27000 Directory ( www.27000.org/ ): Information \ncovering the ISO/IEC 27000 series of standards, \nincluding updates and consultants directory .\n ISO 27001 Security ( www.iso27001security.com/ ): \nInformation about the ISO/IEC 27000-series infor-\nmation security standards and other related stand-\nards, with discussion forum and FAQ. \n ISO 27000 Toolkit ( www.17799-toolkit.com/ ): Package \ncontaining the ISO/IEC 27001 and 27002 standards \nplus supporting materials such as policies and a \nglossary. \n ISO/IEC 27002 Explained ( www.berr.gov.uk/whatwedo/\nsectors/infosec/infosecadvice/legislationpolicy\nstandards/securitystandards/isoiec27002/page33370.\nhtml ): Information on ISO/IEC 27001 and 27002 \nfrom BERR, the UK government department \nfor Business Enterprise and Regulatory Reform \n(formerly the DTI, the Department of Trade and \nIndustry). \n ISO/IEC 27001 Frequently Asked Questions ( www.\natsec. com/01/index.php?id \u0003 06-0101-01 ): FAQ \ncovers the basics of ISO/IEC 27001, the ISO/IEC \nstandard Specification for an Information Security \nManagement System. \n NIST Special Publication 800-53 ( csrc.nist.gov/publica-\ntions/nistpubs/800-53-Rev2/sp800-53-rev2-final.\npdf ): Recommended Security Controls for Federal \nInformation Systems has a similar scope to ISO/IEC \n27002 and cross-references the standard. [PDF] \n Overview of Information Security Standards ( www.\ninfosec.gov.hk/english/technical/files/overview.pdf ): \nReport by the Government of the Hong Kong Special \nAdministrative Region outlines the ISO/IEC 27000-\nseries standards plus related standards, regulations \netc. including PCI-DSS, COBIT, ITIL/ISO 20000, \nFISMA, SOX and HIPAA. [PDF] \n Praxiom Research Group Ltd. ( praxiom.com/\n#ISO%20IEC%2027001%20LIBRARY ): Plain \nEnglish descriptions of ISO/IEC 27001, 27002 and \nother standards, including a list of the controls. \n The Security Practitioner ( security.practitioner.com/intro-\nduction/ ): The ISO 27001 Perspective: An Introduction \nto Information Security is a guide to ISO/IEC 27001 \nand 27002 in the form of an HTML help file. \n Veridion ( www.veridion.net/ ): ISO/IEC 27001 and 27002 \ntraining courses including Lead Auditor and Lead \nImplementer, plus other information security, risk \nmanagement and business continuity courses on \nBS 25999, CISSP, CISA, CISM, MEHARI and \nOCTAVE. 
\n" }, { "page_number": 817, "text": "This page intentionally left blank\n" }, { "page_number": 818, "text": "785\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n List of Miscellaneous Security \nResources \n Appendix F \n The following is a list of miscellaneous security resources: \n Conferences \n Consumer Information \n Directories \n Help and Tutorials \n Mailing Lists \n News and Media \n Organizations \n Products and Tools \n Research \n Content Filtering Links \n Other Logging Resources \n CONFERENCES \n Airscanner: Wireless Security Boot Camp \n( www.airscanner.com/wireless/ ): Wireless security \ntraining conference held in Dallas, Texas, USA. \n AusCERT ( conference.auscert.org.au/conf2009/ ): \nInternational conference focusing on IT security. \nHeld near Brisbane, Australia. \n DallasCon ( www.dallascon.com/ ): Provides information \nand wireless security training with hands-on boot \ncamps throughout the U.S. as well as annual confer-\nence in Dallas, Texax. \n FIRST Conference ( www.first.org/conference/ ): Annual \ninternational conference focused on handling compu-\nter security incidents. Location varies. \n Infosecurity ( www.infosec.co.uk/page.cfm/Link \u0003 18/\nt \u0003 m/trackLogID \u0003 2317009_A50169F58F ): Global \nseries of annual IT Security exhibitions. \n International Computer Security Audit and Control \nSymposium ( www.cosac.net/ ): This annual confer-\nence held in Ireland is for IT security professionals. \n ITsecurityEvents ( www.itsecurityevents.com/ ): Calendar \nlisting IT security events worldwide. \n Network and Distributed System Security Symposium \n( www.isoc.org/includes/javascript.js ): Annual event \naimed at fostering information exchange among \nresearch scientists and practitioners of network and \ndistributed system security services. Held in San \nDiego, California \n NISC ( www.nisc.org.uk/ ): Information security confer-\nence in Scotland. Details of agenda, guest speakers \nand online booking form. \n The Training Co. ( www.thetrainingco.com/ ): Techno-\nSecurity conference organizers and computer secu-\nrity training providers. Conference details, including \nregistration and pricing. \n VP4S-06: Video Processing for Security ( www.com-\nputer-vision.org/4security/ ): Conference focused on \nprocessing video data from security devices. \n CONSUMER INFORMATION \n AnonIC ( www.anonic.org/ ): A free resource for those in \nneed of Internet privacy. The aim of this site is to edu-\ncate Internet users how to protect their privacy at little \nor no cost. \n Business.gov: Information and Computer Security \nGuide ( www.business.gov/guides/privacy/ ): Provides \nlinks to plain language government resources that \nhelp businesses secure their information systems, \nprotecting mission-critical data. \n Computer Security information for the rest of us. ( www.\nsecure-computing.info/ ): Computer Security infor-\nmation in plain language. Learn how to protect your \ncomputer from viruses, spyware and spam. \n Consumer Guide to Internet Safety, Privacy and Security \n( nclnet.org/essentials/ ): Offers tips and advice for \nmaintaining online privacy and security, and how to \nkeep children safe online. \n EFS File Encryption Tutorial ( www.iopus.com/guides/efs.\nhtm ): Learn how to use the free Microsoft Encrypting \nFile System (EFS) to protect your data and how to \nback up your private key to enable data recovery. 
\n" }, { "page_number": 819, "text": "PART | VIII Appendices\n786\n GRC Security Now ( www.grc.com/securitynow.htm ): \nProvides access to weekly podcasts and whitepapers \non topics like Windows Vista, computer security, \nvirus advisories, and other interesting hacking \ntopics. \n Home Network Security ( www.cert.org/tech_tips/home_\nnetworks.html ): Gives home users an overview of \nthe security risks and countermeasures associated \nwith Internet connectivity, especially in the context \nof “ always-on ” or broadband access services such as \ncable modems and DSL. \n Internet Security Guide ( www.internetsecurityguide.\ncom/ ): Features articles on business and home user \nInternet security including SSL certificates and net-\nwork vulnerability scanning. \n Online Security Tips for Consumers of Financial \nServices ( www.bits.org/ci_consumer.html ): Advice \nfor conducting secure online transactions. \n Outlook Express Security Tutorial ( www.iopus.com/\nguides/oe-backup.htm ): Learn how to back up \nyour Outlook Express (OE) email, investigate the \nWindows Registry and transfer your email account \nand rules settings to another PC. \n An Overview of email and Internet Monitoring in the \nWorkplace ( www.fmew.com/archive/monitoring/ ): \nCompliance with the law that governs employer \nmonitoring of employee Internet usage and personal \nemail in the workplace. \n Privacy Initiatives ( www.ftc.gov/privacy/ ): Government \nSite that is run by the Federal Trade Commission. \nInformation about how the government can help \nprotect kids and the general public. It has lots of \ninformation about official policies. \n Protect Your Privacy and email on the Internet ( www.taci-\nroglu.com/p/ ): Guide to protecting privacy and \npersonal information. Includes information on pro-\ntecting passwords, email software, IP numbers, \nencryption, firewalls, anti-virus software, and related \nresources. \n Spyware watch ( www.spyware.co.uk/ ): Spyware \ninformation and tools. \n Staysafe.org (staysafe.org/): Educational site intended \nto help consumers understand both the positive \naspects of the Internet as well as how to manage a \nvariety of safety and security issues that exist online. \n Susi ( www.besafeonline.org/English/safer_use_of_\nservices_on_the_internet.htm ): Information and \nadvice to parents and teachers, about risks on the \nInternet and how to behave. \n Wired Safety ( www.wiredsafety.org/ ): Offers advice \nabout things that can go wrong online, including con \nartists, identity thieves, predators, stalkers, criminal \nhackers, fraud, cyber-romance gone wrong and privacy \nproblems. Includes contact form. \n DIRECTORIES \n E-Evidence Information Center ( www.e-evidence.info/ ): \nDirectory of material relating to all aspects of digital \nforensics and electronic evidence. \n Itzalist ( www.itzalist.com/com/computer-security/index.\nhtml ): Computer resources offering antivirus software, \ncurrent virus news, antivirus patches, online protec-\ntion, security software and other information about \ncomputer security. \n The Laughing Bit ( www.tlb.ch/ ): Collection of links \nto information on Windows NT and Checkpoint \nFirewall-1 security. \n Safe World ( soft.safeworld.info/ ): Directory of links to \ndownloadable security software. Brief descriptions \nfor each. \n SecureRoot ( www.secureroot.com/ ): Hacking and secu-\nrity related links. Also offers discussion forums. 
\n HELP AND TUTORIALS \n How to find security holes ( www.canonical.org/\n%7Ekragen/security-holes.html ): Short primer origi-\nnally written for the Linux Security Audit project. \n Ronald L. Rivest’s Cryptography and Security (people.\ncsail.mit.edu/rivest/crypto-security.html): Provides \nlinks to cryptography and security sites. \n SANS Institute – The Internet Guide To Popular Resources \nOn Computer Security ( www.sans.org/410.php ): \nCombination FAQ and library providing answers to \ncommon information requests about computer security. \n MAILING LISTS \n Alert Security Mailing List ( www.w3.easynet.co.uk/uni-\ntel/services/alert.html ): Monthly security tips and \nalert mailing list. Pay subscription service provides \ninformation, tips and developments to protect your \nInternet computer security \n Computer Forensics Training Mailing List ( www.\ninfosecinstitute.com/courses/computer_foren-\nsics_training.html ): Computer forensics and incident \nresponse mailing list \n FreeBSD Resources ( www.freebsd.org/doc/en_\nUS.ISO8859-1/books/handbook/eresources.html ): \n" }, { "page_number": 820, "text": "Chapter | F List of Miscellaneous Security Resources\n787\n Mailing lists pertaining to FreeBSD. freebsd-security \nand freebsd-security-notifications are sources of offi-\ncial FreeBSD specific notifications. \n InfoSec News ( www.infosecnews.org/ ): Privately-run \nmedium traffic list that caters to distribution of infor-\nmation security news articles. \n ISO17799 & ISO27001 News ( www.molemag.net/ ): \nNews, background and updates on these international \nsecurity standards. \n IWS INFOCON Mailing List ( www.iwar.org.uk/gen-\neral/mailinglist.htm ): The INFOCON mailing list \nis devoted to the discussion of cyber threats and all \naspects of information operations, including offensive \nand defensive information warfare, information \nassurance, psychological operations, electronic \nwarfare. \n Risks Digest ( catless.ncl.ac.uk/Risks ): Forum on risks to \nthe public in computers and related systems. \n SCADA Security List ( www.infosecinstitute.com/\ncourses/scada_security_training.html ): Mailing list \nconcerning DCS and SCADA Security. \n SecuriTeam Mailing Lists ( www.securiteam.com/\nmailinglist.html ): Location of various security mailing \nlists pertaining to exploits, hacking tools, and others. \n Security Clipper ( www.securityclipper.com/alarm-\nsystems.php ): Mailing list aggregator offering a \nselection of security lists to monitor. \n NEWS AND MEDIA \n Security Focus ( www.securityfocus.com/ ): News and \neditorials on security related topics, along with a \ndatabase of security knowledge. \n Computer Security News-Topix ( www.topix.com/tech/\ncomputer-security ): News on computer security \ncontinually updated from thousands of sources \naround the net. \n Computer Security Now ( www.computersecuritynow.\ncom/ ): Computer security news and information \nnow for the less security oriented members of the \ncommunity. \n Enterprise Security Today ( www.enterprise-security-\ntoday.com/fullpage/fullpage.xhtml?dest \u0003 %2F ): \nComputer security news for the I.T. Professional. \n Hagai Bar-El-Information Security Consulting ( www.\nhbarel.com/news.html ): Links to recent information \nsecurity news articles from a variety of sources. \n Help Net Security ( www.net-security.org/ ): Help Net \nSecurity is a security portal offering various informa-\ntion on security issues-news, vulnerabilities, press \nreleases, software, viruses and a popular weekly \nnewsletter. 
\n Investigative Research into Infrastructure Assurance \nGroup ( news.ists.dartmouth.edu/ ): News digests \narranged by subject with links to full articles. \nSubjects include cybercrime, regulation, consumer \nissues and technology. \n O’Reilly Security Center ( oreilly.com/pub/topic/\nsecurity ): O’Reilly is a leader in technical and \ncomputer book documentation for Security. \n SecureLab ( www.securelab.com/ ): Computer and net-\nwork security software, information, and news. \n SecuriTeam ( www.securiteam.com/ ): Group dedicated \nto bringing you the latest news and utilities in com-\nputer security. Latest exploits with a focus on both \nWindows and Unix. \n Security Geeks ( securitygeeks.shmoo.com/ ): Identity \nand information security news summaries with dis-\ncussion and links to external sources. \n SecurityTracker ( securitytracker.com/ ): Information \non the latest security vulnerabilities, free \nSecurityTracker Alerts, and customized vulnerability \nnotification services. \n Xatrix Security ( www.xatrix.org/ ): Security news portal \nwith articles, a search engine and books. \n ORGANIZATIONS \n Association for Automatic Identification and Mobility \n( www.aimglobal.org/ ): Global trade association \nfor the Automatic Identification and Data Capture \n(AIDC) industry, representing manufacturers, con-\nsultants, system integrators, and users involved in \ntechnologies that include barcode, RFID, card tech-\nnologies, biometrics, RFDC, and their associated \nindustries. \n Association for Information Security ( www.iseca.org/ ): \nNon-profit organization aiming to increase public \nawareness and facilitate collaboration among \ninformation security professionals worldwide. \nOffers security documents repository, training, \nnews and joining information. Headquarters in \nSofia, Bulgaria. \n First ( www.first.org/ ): Forum of Incident Response and \nSecurity Teams. \n Information Systems Audit and Control Association \n( www.isaca.org/ ): Worldwide association of IS pro-\nfessionals dedicated to the audit, control, and security \nof information systems. Offer CISA qualification and \nCOBIT standards. \n" }, { "page_number": 821, "text": "PART | VIII Appendices\n788\n IntoIT ( www.intosaiitaudit.org/ ): The journal of the \nINTOSAI EDP Audit Committee. Its main focuses \nare on information systems auditing, IT performance \nauditing, and IT support for auditing. \n North Texas Chapter ISSA ( issa-northtexas.org/ ): The \nDallas and Fort Worth chapter of the Information \nSystems Security Association (ISSA). \n RCMP Technical Security Branch ( www.rcmp-grc.\ngc.ca/tsb/ ): Canadian organization dedicated to pro-\nviding federal government clients with a full range \nof professional physical and information technology \nsecurity services and police forces with high technol-\nogy forensic services. \n The Shmoo Group ( www.shmoo.com/ ): Privacy, crypto, and \nsecurity tools and resources with daily news updates. \n Switch-CERT ( www.switch.ch/cert/ ): Swiss CERT-Team \nfrom the Swiss research network (Switch). \n PRODUCTS AND TOOLS \n AlphaShield ( www.alphashield.com/ ): Hardware \nproduct used with your DSL or cable modem which \ndisconnects the “ always on ” connection when the \nInternet is not in use, and prevents unauthorized \naccess to your computer. \n Bangkok Systems & Software ( www.bangkoksystem.\ncom/ ): System and software security. Offices in \nThailand and India. 
\n Beijing Rising International Software Co.,Ltd ( www.\nrising-global.com/ ): Chinese supplier of antivirus, \nfirewall, content management and other network \nsecurity software and products. \n Beyond If Solutions ( www.beyondifsolutions.com/ ): \nSupplier of encryption and biometric devices, mobile \ndevice management software and remote network \naccess. \n BootLocker Security Software ( www.bootlocker.com/ ): \nBootLocker secures your computer by asking for a \npassword on startup. Features include multiple user \nsupport, screensaver activation, system tray support, \nand logging. \n Calyx Suite ( www.calyxsuite.com/ ): Offer token or \nbiometric based authentication, with associated \nfirewall, encryption and single sign-on software. \nTechnical documentation, reseller listings and trial \ndownloads. Located in France. \n CipherLinx ( www.cipherlinx.com/ ): Secure remote con-\ntrol technology using Skipjack encryption. \n ControlGuard ( www.controlguard.com/ ): Provides access \ncontrol solutions for portable devices and removable \nmedia storage. [May not work in all browsers.] \n CT Holdings, Inc. ( www.ct-holdings.com/ ): Develops, \nmarkets and supports security and administration \nsoftware products for both computer networks and \ndesktop personal computers. (Nasdaq: CITN). \n Cyber-Defense ( enclaveforensics.com/ ): Links to free \nsoftware tools for security analysis, content monitor-\ning and content filtering. \n Data Circle ( www.datacircle.com/app/homepage.\nasp ): Products include Datapass, Dataware, and \nDataguide. \n Digital Pathways Services Ltd UK ( www.digpath.\nco.uk/ ): Providing specialized security products for \nencryption, risk assessment, intrusion detection, \nVPNs, and intrusion detection. \n Diversinet Corp. ( www.dvnet.com/ ): Develops digital \ncertificate products based on public-key infrastruc-\ntures and technologies required for corporate net-\nworks, intranets and electronic commerce on the \nInternet for a variety of security authentication appli-\ncations. (Nasdaq: DVNT). \n DLA Security Systems, Inc. ( www.dlaco.com/ ): Key \ncontrol software, key records management software, \nmaster keying software. \n DSH ( www.dshi.com/ ): Commercial and GSA reseller \nfor Arbor Networks, Entercept, Netforensics and \nSolsoft. \n eLearning Corner ( www.elearningcorner.com/ ): Flash \nbased, scorm-compatible online computer security \nawareness courses to improve corporate IT security \nby modifying employee behaviors. \n Enclave Data Solutions ( enclavedatasolutions.com/ ): \nReseller of MailMarshal, WebMarshal, Jatheon email \narchival, Akonix IM and other security products. \n eye4you ( www.eye4you.com.au/ ): Software to monitor \nand restrict PC usage, enforce acceptable use poli-\ncies, teach classes and prevent students changing \nvital system files. \n Faronics ( www.faronics.com/ ): Develops and markets \nend-point non-restrictive, configuration management \nand whitelist based software security solutions. \n Forensic Computers ( www.forensic-computers.com/\nindex.php ): Provides specialized computer systems, \nsecurity and forensic hardware and software. \n GFI Software Ltd ( www.gfi.com/languard/ ): Offers net-\nwork security software including intrusion detection, \nsecurity scanner, anti virus for Exchange and anti \nvirus for ISA server. \n Global Protective Management.com ( www.secureas-\nsess.com/ ): Providing Global Security Solutions. \nGPM has created a unique suite of PC-based secu-\nrity software applications called SecureAssess. 
\n" }, { "page_number": 822, "text": "Chapter | F List of Miscellaneous Security Resources\n789\n The SecureAssess product line takes full advantage \nof current mobile technologies to provide clients \nwith tools to effectively address their security \nvulnerabilities. \n GuardianEdge Technologies, Inc. ( www.guardianedge.\ncom/ ): Encryption for hard disks and removable stor-\nage, authentication, device control and smart phone \nprotection, within a shared infrastructure giving con-\nsolidated administration. \n Hotfix Reporter ( www.maximized.com/freeware/\nhotfixreporter/ ): Works with Microsoft Network \nSecurity Hotfix Checker (HfNetChk) to scan for \nsecurity holes, and outputs Web pages complete \nwith links to the Microsoft articles and security \npatches. \n IPLocks Inc. ( www.iplocks.com/ ): Database security, \nmonitoring, auditing reporting for governance and \ncompliance. \n iSecurityShop ( www.isecurityshop.com/ ): Offers hard-\nware and software network security products includ-\ning firewalls, cryptographic software, antivirus, and \nintrusion detection systems. \n Juzt-Innovations Ltd. ( www.juzt-innovations.ie/ ): PC \ndata backup and recovery card, 3DES encryption \nutility and smart card access control system. \n KAATAN Software ( www.kaatansoftware.com/ ): \nDeveloper of security software including encryption \nof office documents and SQLserver database auditing. \n Kilross Network Protection Ltd. ( www.kilross.com/ ): \nIrish reseller of IT security products from eSoft, \nSecPoint, SafeNet and others. \n Lexias Incorporated ( www.lexias.com/ ): Provides next \ngeneration solutions in data security and high \navailability data storage. \n Lexura Solutions Inc. ( www.lexurasolutions.com/index.\nhtm ): Software for encryption, intruder alerting and \ncookie management. \n Locum Software Services Limited ( www.locumsoft-\nware.co.uk/ ): Security solutions for Unisys MCP/AS \nsystems. \n Lumigent Technologies ( www.lumigent.com/ ): \nEnterprise data auditing solutions that help organiza-\ntions mitigate the inherent risks associated with data \nuse and regulatory compliance. \n Marshal ( www.marshal.com/ ): Supplier of email and \nWeb security software. \n n-Crypt ( www.n-crypt.co.uk/ ): Develops integrated \nsecurity software products for the IT industry. \n NetSAW: Take a look at your network ( www.proquesys.\ncom/ ). This is a new enterprise class network secu-\nrity product currently being developed by ProQueSys \nthat provides both security experts as well as hobby-\nists with an understanding of the communications on \ntheir computer networks. \n Networking Technologies Inc. ( www.nwtechusa.com/ ): \nDistributor of email security, antivirus, Web filtering \nand archival products. \n New Media Security ( www.newmediasecurity.com/ ): \nProvides solutions to protect data on mobile comput-\ners, laptops, PDAs, tablets and in emails and on CDs. \n NoticeBored ( www.noticebored.com/html/site_map.\nhtml ): Information security awareness materials for \nstaff, managers and IT professionals covering a fresh \ntopic every month. \n Noweco ( www.noweco.com/smhe.htm ): Proteus is a \nsoftware tool designed to audit information secu-\nrity management systems according to ISO17799 \nstandards. \n Oakley Networks Inc. ( www.oakleynetworks.com/ ): \nSecurity systems capable of monitoring ‘ leakage ’ of \nintellectual property through diverse routes such as \nWeb, email, USB and printouts. 
\n Pacom Systems ( www.pacomsystems.com/ ): Provider of \nintegrated and networked security solutions for \nsingle- and multisite organizations. \n Paktronix Systems: Network Security ( www.paktronix.\ncom/ ): Design, supply, and implement secure net-\nworks. Provide secure border Firewall systems for \nconnecting networks to the Internet or each other. \nOffer Network Address Translation (NAT), Virtual \nPrivate Networking (VPN), with IPSec, and custom \nport translation capabilities. \n PC Lockdown ( www.pclockdown.com.au/ ): Software \nthat allows the remote lockdown of networked work-\nstations. Product features, company information, \nFAQ and contact details. \n Porcupine.org ( www.porcupine.org/ ): Site providing \nseveral pieces of software for protecting computers \nagainst Internet intruders. \n Powertech ( www.powertech.com/powertech/index.asp ): \nSecurity software for the IBM AS/400 and iSeries \nincluding intrusion detection, user access control, \nencryption and auditing. \n Protocom Development Systems ( www.actividentity.\ncom/ ): Specializes in developing network security \nsoftware for all needs with credential management, \nstrong authentication, console security and password \nreset tools. \n Sandstorm Enterprises ( www.sandstorm.net/ ): Products \ninclude PhoneSweep, a commercial telephone line \nscanner and NetIntercept, a network analysis tool to \nreassemble TCP sessions and reconstruct files. \n" }, { "page_number": 823, "text": "PART | VIII Appendices\n790\n SecurDesk ( www.cursorarts.com/ca_sd.html ): Access \ncontrol and verification, protection for sensitive files \nand folders, log usage, customizable desktop envi-\nronment, administration, and limit use. \n Secure Directory File Transfer System ( www.owlcti.\ncom/ ): An essential tool for organizations that \ndemand the ultimate in security. This “ special pur-\npose firewall ” will safeguard the privacy of your \ndata residing on a private network, while at the same \ntime, providing an inflow of information from the \nInternet or any other outside network. \n Secure your PC ( www.maths.usyd.edu.au/u/psz/securepc.\nhtml ): A few notes on securing a Windows98 PC. \n Security Awareness, Inc. ( www.securityawareness.\ncom/ ): Security awareness products for all types of \norganizations, including security brochures, custom \nscreensavers, brochures and computer-based training. \n Security Officers Management and Analysis Project \n( www.somap.org/ ): An Open Source collaborative \nproject building an information security risk manage-\nment method, manuals and toolset. \n SecurityFriday Co. Ltd. ( www.securityfriday.com/ ): \nSoftware to monitor access to Windows file servers, \ndetect promiscuous mode network sniffers and quan-\ntify password strength. \n SeQureIT ( www.softcat.com/ ): Security solutions \nincluding WatchGuard firewalls, Check Point, \nClearswift, Nokia and Netilla SSL VPN. Also pro-\nvide managed and professional services. \n Service Strategies Inc. ( www.ssimail.com/ ): Email gate-\nway and messaging, firewall, VPN and SSL software \nand appliances for AS/400 and PC networks. \n Silanis Technology ( www.silanis.com/index.html ): \nElectronic and digital signature solution provider \nincludes resources, white papers, product news and \nrelated information. \n Simpliciti ( simpliciti.biz/ ): Browser lockdown software \nto restrict Web browsing. \n Smart PC Tools ( www.smartpctools.com/en/index.html ): \nOffers a range of PC software products, most of \nwhich relate to security. 
\n Softcat plc (UK) ( www.softcat.com/ ): Supplier of IT \nsolutions, dealing with software, hardware and \nlicensing. \n Softek Limited ( www.mailmarshal.co.uk/ ): Distributor \nof security software: anti-virus, anti-spam, firewall, \nVPN, Web filtering, USB device control etc. \n Softnet Security ( www.safeit.com/ ): Software to pro-\ntect confidential communication and information. \nProduct specifications, screenshots, demo down-\nloads, and contact details. \n Tech Assist, Inc. ( www.toolsthatwork.com/ ): \nApplications for data recovery, network security, and \ncomputer investigation. \n Tropical Software ( www.tropsoft.com/ ): Security and \nPrivacy products. \n UpdateEXPERT ( www.lyonware.co.uk/Update-Expert.\nhtm ): A hotfix and service pack security manage-\nment utility that helps systems administrators keep \ntheir hotfixes and service packs up-to-date. \n Visionsoft ( www.visionsoft.com/ ): Range of security \nand software license auditing software for busi-\nnesses, schools and personal users. \n Wave Systems Corp. ( www.wavesys.com/ ): Develops pro-\nprietary application specific integrated circuit which \nmeters usage of data, graphics, software, and video \nand audio sequences which can be digitally transmit-\nted and develops a software version of its application \nfor use over the Internet. (Nasdaq: WAVX). \n WhiteCanyon Security Software ( www.whitecanyon.\ncom/ ): Providing software products to securely clean, \nerase, and wipe electronic data from hard drives and \nremovable media. \n Wick Hill Group ( www.wickhill.co.uk/ ): Value added \ndistributor specializing in secure infrastructure solu-\ntions for ebusiness. Portfolio includes a range of \nsecurity solutions, from firewalls to SSL VPN, as \nwell as Web access and Web management products. \n Winability Software Corporation ( www.winability.com/\nhome/ ): Directory access control and inactivity time-\nout software for Windows systems. \n xDefenders Inc. ( www.xdefenders.com/ ): Security appli-\nances combining spam, virus and Web content filter-\ning with firewall and IDS systems, plus vulnerability \nassessment services. \n ZEPKO ( www.zepko.com/ ): SIM (Security Information \nManagement) technology provider. Assessing busi-\nness risks and technology vulnerabilities surrounding \nIT security products. \n RESEARCH \n Centre for Applied Cryptographic Research ( www.cacr.\nmath.uwaterloo.ca/ ): Cryptographic research organi-\nzation at the University of Waterloo. Downloads of \ntechnical reports, upcoming conferences list and \ndetails of graduate courses available. \n Cryptography Research, Inc ( www.cryptography.com/ ): \nResearch and system design in areas including \ntamper resistance, content protection, network secu-\nrity, and financial services. Service descriptions and \nwhite papers. \n" }, { "page_number": 824, "text": "Chapter | F List of Miscellaneous Security Resources\n791\n Dartmouth College Institute for Security Technology \nStudies (ISTS) ( www.ists.dartmouth.edu/ ): Research \ngroup focusing on United States national cyber-secu-\nrity and technological counterterrorism. Administers \nthe I3P consortium. \n Penn State S2 Group ( ist.psu.edu/s2/ ): General cyber \nsecurity lab at the United States university. Includes \ncurrent and past projects, software, publications, \nand events. \n The SANS Institute ( www.sans.org/ ): Offers computer \nsecurity research, training and information. 
\n SUNY Stony Brook Secure Systems Lab ( seclab.\ncs.sunysb.edu/seclab1/ ): Group aimed at research \nand education in computer and network security. \nProjects, academic programs, and publications. \nLocated in New York, United States. \n CONTENT FILTERING LINKS \n Content Filtering vs. Blocking ( www.securitysoft.com/\ncontent_filtering.html ): An interesting whitepaper \nfrom Security Software Systems discussing the pros \nand cons of content filtering and blocking. \n GateFilter Plug-in ( www.deerfield.com/products/\ngatefilter/ ): GateFilter Plug-in is a software Internet \nfilter providing content filtering for WinGate. The \nplug-in uses technology based on Artificial Content \nRecognition (ACR), which analyzes the content of a \nWeb site, determines if it is inappropriate, and blocks \nthe site if necessary. Supports English, German, \nFrench, and Spanish \n GFI Mail Essentials ( www.gfi.com/mes/ ): Mail \nEssentials provides email content checking, antivirus \nsoftware, and spam blocking for Microsoft Exchange \nand SMTP. \n InterScan eManager ( us.trendmicro.com/us/solutions/\nenterprise/security-solutions/web-security/index.\nhtml ): Provides real-time content filtering, spam \nblocking, and reporting. Optional eManager plug-\nin integrates seamlessly with InterScan VirusWall \nto safeguard intellectual property and confidential \ninformation, block inappropriate email and attach-\nments, and protect against viruses. eManager also \nenables Trend Micro Outbreak Prevention Services. \n NetIQ (MailMarshal) ( www.marshal.com/ ): NetIQ \nprovides MailMarshal, imMarshal, and WebMarshal \nfor content filtering coupled with antivirus protection \n(McAfee). \n Postfix Add-on Software ( www.postfix.org/addon.html ): \nList and links of add-on software for Postfix, includ-\ning content filtering and antivirus solutions. \n Qmail-Content filtering ( www.fehcom.de/qmail/filter.\nhtml ): Scripts that provide content filtering for \nincoming email with Qmail. \n SonicWALL ’ s Content Filtering Subscription Service \n( www.sonicguard.com/ContentFilteringService.asp ): \nIntegrated with SonicWALL’s line of Internet secu-\nrity appliances, the SonicWALL Content Filtering \nsubscription enables organizations such as busi-\nnesses, schools and libraries to maintain Internet \naccess policies tailored to their specific needs. \n SurfControl ( www.websense.com/site/scwelcome/index.\nhtml ): SurfControl is a London-based company \nproviding email and Web filtering solutions. as well \nas Internet monitoring and policy management soft-\nware. Recently purchased by Websense. \n Tumbleweed ( www.tumbleweed.com/ ): Tumbleweed \nprovides secure messaging and email policy manage-\nment solutions geared to the government, financial \nand healthcare industries. \n WebSense ( www.websense.com/content/home.aspx ): \nWebsense provides a wide range of solutions includ-\ning Internet filters, monitoring software, content fil-\ntering, tracking, and policy management. \n OTHER LOGGING RESOURCES \n IETF Security Issues in Network Event Logging ( www.\nietf.org/html.charters/syslog-charter.html ): USENIX \nSpecial Interest Group \n Building a Logging Infrastructure ( www.sage.org/\npubs/12_logging/ ): Loganalysis.org is a volunteer \nnot-for-profit organization devoted to furthering \nthe state of the art in computer systems log analysis \nthrough dissemination of information and sharing of \nresources. \n Warning: URLs may change or be deleted without \nnotice. 
\n" }, { "page_number": 825, "text": "This page intentionally left blank\n" }, { "page_number": 826, "text": "793\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Ensuring Built-in Frequency \nHopping Spread Spectrum \nWireless Network Security \n Appendix G \n The Sensors Directorate sponsored new technology \ndeveloped by Robert Gold Comm Systems, Inc. (RGCS) \nunder a Phase II Fast Track Small Business Innovation \nResearch program. This technology provides power-\nful security protection for wireless computer networks, \ncell phones, and other radio communications. Benefits \ninclude highly secure communications with the overhead \nof encryption and selective addressability of receivers, \nindividually or in groups. 1 \n ACCOMPLISHMENT \n Dr. Gold developed a built-in self-synchronizing and \nselective addressing algorithm based on times-of-arrival \n(TOA) measurements of a frequency-hopping radio sys-\ntem. These algorithms allow a monitor to synchronize \nto a frequency-hopping radio in a wireless network by \nmaking relatively brief observations of the TOAs on a \nsingle frequency. RGCS designed the algorithms for \nintegration into spread-spectrum, frequency-hopping \nsystems widely used for wireless communications such \nas wireless fidelity computer networks, cellular phones, \nand two-way radios used by the military, police, fire-\nfighters, ambulances, and commercial fleets. 1 \n BACKGROUND \n Although very convenient for users, wireless communica-\ntion is extremely vulnerable to eavesdropping. For exam-\nple, hackers frequently access wireless computer networks \n(laptop computers linking to the wireless network). 1 \n Encrypting the data increases the security of these \nwireless networks, but encryption is complex, inconven-\nient, time consuming for users, and adds a significant \namount of overhead information that reduces data through-\nput. In frequency-hopping (spread-spectrum) wireless net-\nworks now in wide use, users protect the data by sending \nit in brief spurts, with the transmitter and receiver skipping \nin a synchronized pattern among hundreds of frequencies. \nAn intruder without knowledge of the synchronization \npattern would just hear static. 1 \n A major vulnerability of many spread-spectrum wire-\nless networks involves compromising the network secu-\nrity by intercepting unprotected information. Originators \nmust send the sync pattern information to authorized \nreceivers, often unprotected. 1 \n The Gold algorithms support code-division multiple \naccess, frequency-hopping multiple access, and ultra-\nwide-band spread-spectrum communication systems. \nThey are designed for incorporation into enhanced ver-\nsions of existing products, most of which already include \ncircuitry that manufacturers can adapt to implement the \ntechnology. 1 \n ADDITIONAL INFORMATION \n To receive more information about the preceding or \nother activities in the Air Force Research Laboratory, \ncontact TECH CONNECT, AFRL/XPTC, (800) 203-\n6451, and you will be directed to the appropriate labora-\ntory expert. (03-SN-21). 1 \n 1 “ New technology provides powerful security protection for wireless \ncommunications, ” Air Force Research Laboratory AFRLAir AFRL, \n1864 4th St., Bldg. 15, Room 225, WPAFB, OH 45433-7131, 2008. 
\n" }, { "page_number": 827, "text": "This page intentionally left blank\n" }, { "page_number": 828, "text": "795\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Configuring Wireless Internet \nSecurity Remote Access \n Appendix H \n This Appendix describes how to configure and add \nwireless remote access points (APs) as RADIUS clients \nof the Microsoft 2003 and Vista Internet Authentication \nService (IAS) servers. \n ADDING THE ACCESS POINTS \nAS RADIUS CLIENTS TO IAS \n You must add wireless remote APs as RADIUS clients \nto IAS before they are allowed to use RADIUS authen-\ntication and accounting services. The wireless remote \nAPs at a given location will typically be configured to \nuse an IAS server at the same location for their primary \nRADIUS server and another IAS server at the same or a \ndifferent location as the secondary RADIUS server. The \nterms “ primary ” and “ secondary ” here do not refer to \nany hierarchical relationship, or difference in configura-\ntion, between the IAS servers themselves. The terms are \nrelevant only to the wireless remote APs, each of which \nhas a designated primary and secondary (or backup) \nRADIUS server. Before you configure your wireless \nremote APs, you must decide which IAS server will be \nthe primary and which will be the secondary RADIUS \nserver for each wireless remote AP 1 . \n The following procedures describe adding RADIUS \nclients to two IAS servers. During the first procedure, a \nRADIUS secret is generated for the wireless remote AP; \nthis secret, or key, will be used by IAS and the AP to \nauthenticate each other. The details of this client along \nwith its secret are logged to a file. This file is used in \nthe second procedure to import the client into the second \nIAS 1 . \n ADDING ACCESS POINTS \nTO THE FIRST IAS SERVER \n This part of the appendix describes the adding of wireless \nremote APs to the first IAS server. A script is supplied \nto automate the generation of a strong, random RADIUS \nsecret (password) and add the client to IAS. The script \nalso creates a file (defaults to Clients.txt) that logs the \ndetails of each wireless remote AP added. This file \nrecords the name, IP address, and RADIUS secret gener-\nated for each wireless remote AP. These will be required \nwhen configuring the second IAS server and wireless \nremote APs 1 .\n \n 1 “ Securing Wireless LANs with PEAP and Passwords, Chapter 5: \nBuilding the Wireless LAN Security Infrastructure, ” © 2008 Microsoft \nCorporation. All rights reserved. Microsoft Corporation, One Microsoft \nWay, Redmond, WA 98052-6399, 2007. \n Tip: You must not use this first procedure to add the same \nclient to two IAS servers. If you do this, the client entries \non each server will have a different RADIUS secret con-\nfigured and the wireless remote AP will not be able to \nauthenticate to both servers. \n Tip: The RADIUS clients are added to IAS as “ RADIUS \nStandard ” clients. Although this is appropriate for most wire-\nless remote APs, some APs may require that you configure \nvendor – specific attributes (VSA) on the IAS server. You can \nconfigure VSAs either by selecting a specific vendor device \nin the properties of the RADIUS clients in the Internet \nAuthentication Service MMC or (if the device is not listed) \nby specifying the VSAs in the IAS remote access policy. 
\n SCRIPTING THE ADDITION OF \nACCESS POINTS TO IAS SERVER \n(ALTERNATIVE PROCEDURE) \n If you do not want to add the wireless remote APs to the \nIAS server interactively using the previous procedure, \n" }, { "page_number": 829, "text": "PART | VIII Appendices\n796\n you can just generate the RADIUS client entries output \nfiles for each wireless remote AP without adding them \nto IAS. You can then import the RADIUS client entries \ninto both the first IAS server and the second IAS server. \nBecause you can script this whole operation, you may \nprefer to add your RADIUS clients this way if you have \nto add a large number of wireless remote APs 1 .\n \nwireless remote AP from a WLAN client using an unau-\nthenticated connection. You should test this before con-\nfiguring the authentication and security parameters listed \nlater in this appendix 1 . \n ENABLING SECURE WLAN \nAUTHENTICATION ON \nACCESS POINTS \n You must configure each wireless remote AP with a \nprimary and a secondary RADIUS server. The wireless \nremote AP will normally use the primary server for all \nauthentication requests, and switch over to the secondary \nserver if the primary server is unavailable. It is important \nthat you plan the allocation of wireless remote APs and \ncarefully decide which server should be made primary \nand which should be made secondary. To summarize: \n In a site with two (or more) IAS servers, balance your \nwireless remote APs across the available servers so that \napproximately half of the wireless remote APs use server 1 \nas primary and server 2 as secondary, and the remaining \nuse server 2 as primary and server 1 as secondary 1 . \n In sites where you have only one IAS server, this \nshould always be the primary server. You should config-\nure a remote server (in the site with most reliable con-\nnectivity to this site) as the secondary server 1 . \n In sites where there is no IAS server, balance the wire-\nless remote APs between remote servers using the server \nwith most resilient and lowest latency connectivity. Ideally, \nthese servers should be at different sites unless you have \nresilient wide area network (WAN) connectivity 1 . \n Table H.1 1 lists the settings that you need to config-\nure on your wireless remote APs. Although the names \nand descriptions of these settings may vary from one \nvendor to another, your wireless remote AP documen-\ntation helps you determine those that correspond to the \nitems in Table H.1 1 . \n Tip: This procedure is an alternative method for adding \nRADIUS clients in a scripted rather than an interactive \nfashion. \n CONFIGURING THE WIRELESS \nACCESS POINTS \n Having added RADIUS clients entries for the wireless \nremote APs to IAS, you now need to configure the wireless \nremote APs themselves. You must add the IP addresses of \nthe IAS servers and the RADIUS client secrets that each \nAP will use to communicate securely with the IAS serv-\ners. Every wireless remote AP will be configured with a \nprimary and secondary (or backup) IAS server. You should \nperform the procedures for the wireless remote APs at \nevery site in your enterprise 1 . \n The procedure for configuring wireless remote APs \nvaries depending on the make and model of the device. \nHowever, wireless remote AP vendors normally pro-\nvide detailed instructions for configuring their devices. \nDepending on the vendor, these instructions may also be \navailable online 1 . 
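 Before walking through the vendor-specific steps, it can help to lay out, per access point, exactly which RADIUS values will be keyed into it. The Python sketch below reads the Clients.txt log from the earlier procedure and assigns primary and secondary IAS servers alternately, so that roughly half of the APs at a two-server site use server 1 as primary and the rest use server 2, as recommended in this appendix. The server addresses and file layout are assumptions for illustration; ports 1812 and 1813 are the RADIUS defaults shown in Table H.1.

import csv

# Illustrative IAS server addresses for a site with two servers.
IAS_SERVERS = ["192.168.10.5", "192.168.10.6"]

def plan_ap_radius_settings(clients_file: str = "Clients.txt"):
    """Yield the RADIUS settings to configure on each wireless AP.

    Primary and secondary servers alternate from one AP to the next, so the
    load is balanced across the two IAS servers as the appendix recommends.
    """
    with open(clients_file, newline="") as fh:
        for index, (name, ip_address, secret) in enumerate(csv.reader(fh)):
            primary = IAS_SERVERS[index % 2]
            secondary = IAS_SERVERS[(index + 1) % 2]
            yield {
                "ap_name": name,
                "ap_ip": ip_address,
                "auth_servers": (primary, secondary),   # RADIUS authentication
                "auth_port": 1812,                      # default, per Table H.1
                "acct_servers": (primary, secondary),   # RADIUS accounting
                "acct_port": 1813,                      # default, per Table H.1
                "shared_secret": secret,
            }

for settings in plan_ap_radius_settings():
    print(settings["ap_name"], "->", settings["auth_servers"])

 Note that this alternating assignment applies only to sites with two local IAS servers; as described above, a site with a single local server always uses it as primary with a remote server as secondary, and a site with no local server balances its APs across remote servers.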
\n Prior to configuring the security settings for your \nwireless remote APs, you must configure the basic wire-\nless network settings. These will include but are not \nl imited to: \n ● IP Address and subnet mask of the wireless remote AP \n ● Default gateway \n ● Friendly name of the wireless remote AP \n ● Wireless Network Name (SSID) 1 \n The preceding list will include a number of other \nparameters that affect the deployment of multiple wire-\nless remote APs: settings that control the correct radio \ncoverage across your site, for example, 802.11 Radio \nChannel, Transmission Rate, and Transmission Power, \nand so forth. Discussion of these parameters is outside \nthe scope of this appendix. Use the vendor documenta-\ntion as a reference when configuring these settings or \nconsult a wireless network services supplier 1 . \n The guidance in this appendix assumes that you have \nset these items correctly and are able to connect to the \n Tip: The Key Refresh Time-out is set to 60 minutes for use \nwith dynamic WEP. The Session Timeout value set in the \nIAS remote access policy is the same or shorter than this. \nWhichever of these has the lower setting will take prec-\nedence, so you only need to modify the setting in IAS. If \nyou are using WPA, you should increase this setting in \nthe AP to eight hours. Consult your vendor’s documenta-\ntion for more information. \n Use the same RADIUS secrets procedure to add \nwireless remote APs to IAS. Although you may have \n" }, { "page_number": 830, "text": "Appendix | H Configuring Wireless Internet Security Remote Access\n797\n not yet configured a secondary IAS server as a backup \nto the primary server, you can still add the server’s IP \naddress to the wireless remote AP now (to avoid having \nto reconfigure it later) 1 . \n Depending on the wireless remote AP hardware \nmodel, you may not have separate configurable entries \nfor Authentication and Accounting RADIUS servers. If \nyou have separate configurable entries, set them both \nto the same server unless you have a specific reason for \ndoing otherwise. The RADIUS retry limit and timeout \nvalues given in Table H.1 are common defaults but these \nvalues are not mandatory 1 . \n ADDITIONAL SETTINGS TO SECURE \nWIRELESS ACCESS POINTS \n In addition to enabling 802.1X parameters, you should \nalso configure the wireless remote APs for highest secu-\nrity. Most wireless network hardware is supplied with \n Note: If you are currently using wireless remote APs with \nno security enabled or only static WEP, you need to plan \nyour migration to an 802.1X – based WLAN. 
\n TABLE H.1 Wireless Access Point Configuration \n Item \n Setting \n Authentication Parameters \n \n Authentication Mode \n 802.1X Authentication \n Re-authentication \n Enable \n Rapid/Dynamic Re-keying \n Enable \n Key Refresh Time-out \n 60 minutes \n Encryption Parameters (these settings usually relate to static WEP \nencryption) \n (Encryption parameters may be disabled or be \noverridden when rapid re-keying is enabled) \n Enable Encryption \n Enable \n Deny Unencrypted \n Enable \n RADIUS Authentication \n \n Enable RADIUS Authentication \n Enable \n Primary RADIUS Authentication Server \n Primary IAS IP Address \n Primary RADIUS Server Port \n 1812 (default) \n Secondary RADIUS Authentication Server \n Secondary IAS IP Address \n Secondary RADIUS Server Port \n 1812 (default) \n RADIUS Authentication Shared Secret \n XXXXXX (replace with generated secret) \n Retry Limit \n 5 \n Retry Timeout \n 5 seconds \n RADIUS Accounting \n \n Enable RADIUS Accounting \n Enable \n Primary RADIUS Accounting Server \n Primary IAS IP Address \n Primary RADIUS Server Port \n 1813 (default) \n Secondary RADIUS Accounting Server \n Secondary IAS IP Address \n Secondary RADIUS Server Port \n 1813 (default) \n RADIUS Accounting Shared Secret \n XXXXXX (replace with generated secret) \n Retry Limit \n 5 \n Retry Timeout \n 5 seconds \n" }, { "page_number": 831, "text": "PART | VIII Appendices\n798\n insecure management protocols enabled and administra-\ntor passwords set to well-known defaults, which poses a \nsecurity risk. You should configure the settings listed in \n Table H.2 1 ; however, this is not an exhaustive list. You \nshould consult your vendor’s documentation for authori-\ntative guidance on this topic. When choosing passwords \nand community names for Simple Network Management \nProtocol (SNMP), use complex values that include upper \nand lowercase letters, numbers, and punctuation charac-\nters. Avoid choosing anything that can be guessed easily \nfrom information such as your domain name, company \nname, and site address 1 . \n You should not disable SSID (WLAN network name) \nbroadcast since this can interfere with the ability of \nWindows XP to connect to the right network. Although \ndisabling the SSID broadcast is often recommended as \na security measure, it gives little practical security ben-\nefit if a secure 802.1X authentication method is being \nused. Even with SSID broadcast from the AP disabled, \nit is relatively easy for an attacker to determine the SSID \nby capturing client connection packets. If you are con-\ncerned about broadcasting the existence of your WLAN, \nyou can use a generic name for your SSID, which will \nnot be attributable to your enterprise 1 . \n REPLICATING RADIUS CLIENT \nCONFIGURATION TO OTHER \nIAS SERVERS \n Typically, the wireless remote APs in a given site are serv-\niced by an IAS server at that site. For example, the site A \nIAS server services wireless remote APs in site A, while \nthe site B server services wireless remote APs in site B \nand so on. However, other server settings such as the \nremote access policies will often be common to many IAS \nservers. For this reason the export and import of RADIUS \nclient information is handled separately by the proce-\ndures described in this appendix. 
Although you will find \nrelatively few scenarios where replicating RADIUS cli-\nent information is relevant, it is useful in certain circum-\nstances (for example, where you have two IAS servers on \nthe same site acting as primary and secondary RADIUS \nservers for all wireless remote APs on that site) 1 . \n TABLE H.2 Wireless Access Point Security Configuration \n Item \n Recommended \nSetting \n Notes \n General \n \n \n Administrator Password \n XXXXXX \n Set to complex password. \n Other Management \nPasswords \n XXXXXX \n Some devices use multiple management passwords to help protect access using \ndifferent management protocols; ensure that all are changed from the defaults to \nsecure values. \n Management Protocols \n \n \n Serial Console \n Enable \n If no encrypted protocols are available, this is the most secure method of \nconfiguring wireless remote APs although this requires physical serial cable \nconnections between the wireless remote APs and terminal and hence cannot be \nused remotely. \n Telnet \n Disable \n All Telnet transmissions are in plaintext, so passwords and RADIUS client secrets \nwill be visible on the network. If the Telnet traffic can be secured using Internet \nProtocol security (IPsec) or SSH, you can safely enable and use it. \n HTTP \n Disable \n HTTP management is usually in plaintext and suffers from the same weaknesses \nas unencrypted telnet. HTTPS, if available, is recommended. \n HTTPS (SSL or TLS) \n Enable \n Follow the vendor’s instructions for configuring keys/certificates for this. \n SNMP Communities \n \n SNMP is the default protocol for network management. Use SNMP v3 with \npassword protection for highest security. It is often the protocol used by GUI \nconfiguration tools and network management systems. However, you can disable \nit if you do not use it. \n Community 1 Name \n XXXXXX \n The default is usually “ public. ” Change this to a complex value. \n Community 2 Name \n Disabled \n Any unnecessary community names should be disabled or set to complex values. \n" }, { "page_number": 832, "text": "799\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Frequently Asked Questions \n Appendix I \n Q. What is a firewall? \n A. Firewall helps make your computer invisible to \nonline attackers and blocks some malicious software \nsuch as viruses, worms, and Trojans. A firewall can \nalso help prevent software on your computer from \naccessing the Internet to accept updates and modifi-\ncation without your permission 1 . \n Firewalls come in both software and hardware form, but \nhardware firewalls are intended for use in addition \nto a software firewall. It is important to have both a \nfirewall and antivirus software turned on before you \nconnect to the Internet 1 . \n Q. What is antivirus software? \n A. Antivirus software helps protect your computer against \n specific viruses and related malicious software, such \nas worms and Trojans. Antivirus software must be \nkept up to date. Updates are generally available \nthrough a subscription from your antivirus vendor 1 . \n Q. Do you need both a firewall and antivirus \nsoftware? \n A. Yes. A firewall helps stop hackers and viruses before \nthey reach your computer, while antivirus software \nhelps get rid of known viruses if they manage to \nbypass the firewall or if they’ve already infected \nyour computer. 
One way viruses get past a firewall \nis when you ignore its warning messages when you \ndownload software from the Internet or email 1 . \n Q. What is antispyware software? \n A. Antispyware software helps detect and remove spy-\nware from your computer. “ Spyware ” (also known as \n “ adware ” ) generally refers to software that is designed \nto monitor your activities on your computer 1 . \n Spyware can produce unwanted pop-up advertising, \ncollect personal information about you, or change \nthe configuration of your computer to the spyware \ndesigner’s specifications. At its worst, spyware can \nenable criminals to disable your computer and steal \nyour identity. Antispyware software is an important \ntool to help you keep your computer running prop-\nerly and free from intrusion 1 . \n \n 1 “ Security products and services FAQ, ” Microsoft TechNet, © 2008 \nMicrosoft Corporation. All rights reserved. Microsoft Corporation, \nOne Microsoft Way, Redmond, WA 98052-6399. 2008. \n Note: Some “ spyware ” actually helps desirable software \nto run as it’s intended, or enables some good software \nto be offered on a free-with-advertisements basis (such \nas many online email programs). Most antispyware soft-\nware allows you to customize your settings so you can \nenable or disable programs. \n Q. What is a spam filter? \n A. Spam filters (sometimes broadly referred to as \n “ email filters ” ) evaluate incoming email messages \nto determine if they contain elements that are com-\nmonly associated with unwanted or dangerous bulk \nmailing. If the filter determines that an email mes-\nsage is suspicious, the message usually goes to a \ndesignated folder, and links and other code in it are \ndisabled. Then you can evaluate the message more \nsafely at your convenience 1 . \n Q : What is a phishing filter? \n A : A phishing filter is usually a component of a Web \nbrowser or Internet toolbar. It evaluates Web sites for \nsigns that they are connected with phishing scams. \n Phishing scams use email and Web sites that look identi-\ncal to those that belong to legitimate sources (such as \nfinancial or government institutions) but are actually \nhoaxes. If you click links in the email or enter your \nuser name, password, and other data into these Web \n" }, { "page_number": 833, "text": "PART | VIII Appendices\n800\n sites, it gives scammers information they can use to \ndefraud you or to steal your identity 1 . \n Q: What are parental controls? \n A : Parental controls can help you protect your children \nfrom inappropriate content, both on the Internet and \nin computer video games 1 . \n Q: Does my brand new computer come with these \nsoftware security tools? \n A : Maybe. When you buy a new computer, check your \npacking list to see if the manufacturer has included \nfirewall, antivirus, or antispyware software in addi-\ntion to the operating system 1 . \n" }, { "page_number": 834, "text": "801\nComputer and Information Security Handbook\nCopyright © 2009, Morgan Kaufmann Inc. All rights of reproduction in any form reserved.\n2009\n Glossary \n Appendix J\n AAA: Administration, authorization, and authentication. \n Access: A specific type of interaction between a subject \nand an object that results in the flow of information \nfrom one to the other. 
The capability and opportunity \nto gain knowledge of, or to alter information or mate-\nrials including the ability and means to communicate \nwith (i.e., input or receive output), or otherwise make \nuse of any information, resource, or component in a \ncomputer system. \n Access Control: The process of limiting access to the \nresources of a system to only authorized persons, \nprograms, processes, or other systems. Synonymous \nwith controlled access and limited access. Requires \nthat access to information resources be controlled by \nor for the target system. In the context of network \nsecurity, access control is the ability to limit and con-\ntrol the access to host systems and applications via \ncommunications links. To achieve this control, each \nentity trying to gain access must first be identified, or \nauthenticated, so that access rights can be tailored to \nthe individual. \n Accreditation: The written formal management decision \nto approve and authorize an organization to operate a \nclassified information system (IS) to process, store, \ntransfer, or provide access to classified information. \n AES: Advanced Encryption Standard. \n Accreditation/Approval: \nThe \nofficial \nmanagement \nauthorization for operation of an MIS. It provides a \nformal declaration by an Accrediting Authority that a \ncomputer system is approved to operate in a particu-\nlar security mode using a prescribed set of safeguards. \nAccreditation is based on the certification process as \nwell as other management considerations. An accredi-\ntation statement affixes security responsibility with the \nAccrediting Authority and shows that proper care has \nbeen taken for security. \n Adequate Security: Security commensurate with the risk \nand magnitude of the harm resulting from the loss, \nmisuse, or unauthorized access to or modification of \ninformation. This includes assuring that systems and \napplications used by the agency operate effectively \nand provide appropriate confidentiality, integrity, and \navailability, through the use of cost-effective manage-\nment, personnel, operational and technical controls. \n ADP: Automatic Data Processing. See also: Management \nInformation System. \n Application: A software organization of related func-\ntions, or series of interdependent or closely related \nprograms, that when executed accomplish a specified \nobjective or set of user requirements. See also: Major \nApplication, Process. \n Application Control: The ability for next generation \ncontent filter gateways to inspect the application and \ndetermine its intention and block accordingly. \n Application Owner: The official who has the responsi-\nbility to ensure that the program or programs, which \nmake up the application accomplish the specified \nobjective or set of user requirements established for \nthat application, including appropriate security safe-\nguards. See also: Process Owner. \n Attachment: The blocking of certain types of file (exe-\ncutable programs). \n Audit: To conduct the independent review and examina-\ntion of system records and activities. \n Audit Capability: The ability to recognize, record, store, \nand analyze information related to security-relevant \nactivities on a system in such a way that the result-\ning records can be used to determine which activities \noccurred and which user was responsible for them. \n Audit Trail: A set of records that collectively provides \ndocumentary evidence of processing. 
It is used to aid \nin tracing from original transactions forward to related \nrecords and reports, and/or backwards from records \nand reports to their component source transactions. \n Automated Information Systems (AIS): The infrastruc-\nture, organization, personnel, and components for the \n" }, { "page_number": 835, "text": "PART | VIII Appendices\n802\n collection, processing, storage, transmission, display, \ndissemination, and disposition of information. \n Automatic Data Processing (ADP): The assembly of \ncomputer hardware, firmware, and software used to \ncategorize, sort, calculate, compute, summarize, store, \nretrieve, control, process, and/or protect data with a \nminimum of human intervention. ADP systems can \ninclude, but are not limited to, process control com-\nputers, embedded computer systems that perform \ngeneral purpose computing functions, supercomput-\ners, personal computers, intelligent terminals, offices \nautomation systems (which includes standalone \nmicroprocessors, memory typewriters, and terminal \nconnected to mainframes), firmware, and other imple-\nmentations of MIS technologies as may be developed: \nthey also include applications and operating system \nsoftware. See also: Management Information System. \n Authenticate/Authentication: The process to verify the \nidentity of a user, device, or other entity in a compu-\nter system, often as a prerequisite to allowing access \nto resources in a system. Also, a process used to \nverify that the origin of transmitted data is correctly \nidentified, with assurance that the identity is not false. \nTo establish the validity of a claimed identity. \n Authenticated User: A user who has accessed a MIS \nwith a valid identifier and authentication combination. \n Authenticator: A method of authenticating a classified \ninformation system (IS) in the form of knowledge or \npossession (for example, password, token card, key). \n Authorization: The privileges and permissions granted \nto an individual by a designated official to access or \nuse a program, process, information, or system. These \nprivileges are based on the individual’s approval and \nneed-to-know. \n Authorized Person: A person who has the need-to-know \nfor sensitive information in the performance of \nofficial duties and who has been granted author-\nized access at the required level. The responsibility \nfor determining whether a prospective recipient is \nan authorized person rests with the person who has \npossession, knowledge, or control of the sensitive \ninformation involved, and not with the prospective \nrecipient. \n Availability: The property of being accessible and usa-\nble upon demand by an authorized entity. Security \nconstraints must make MIS services available to \nauthorized users and unavailable to unauthorized \nusers. \n Availability of Data: The state when data are in the \nplace needed by the user, at the time the user needs \nthem, and in the form needed by the user. \n Backup: A copy of a program or data file for the pur-\nposes of protecting against loss if the original data \nbecomes unavailable. \n Backup and Restoration of Data: The regular copy-\ning of data to separate media and the recovery from a \nloss of information. \n Backup Operation: A method of operations to complete \nessential tasks as identified by a risk analysis. These \ntasks would be employed following a disruption of the \nMIS and continue until the MIS is acceptably restored. \nSee also: Contingency Plan, Disaster Recovery. 
\n Bad Reputation Domains: Sites that appear on one or \nmore security industry blacklists for repeated bad \nbehavior, including hosting malware and phishing \nsites, generating spam, or hosting content linked to by \nspam email. \n Botnet: Sites used by botnet herders for command and \ncontrol of infected machines. Sites that known mal-\nware and spyware connects to for command and con-\ntrol by cyber criminals. These sites are differentiated \nfrom the Malcode category to enable reporting on \npotentially infected computers inside the network. \n By URL: Filtering based on the URL. This is a suitable \nfor blocking Web sites or sections of Web sites. \n C2: A level of security safeguard criteria. See also: \nControlled Access Protection, TCSEC. \n Capstone: The U.S. Government’s long-term project to \ndevelop a set of standards for publicly-available cryp-\ntography, as authorized by the Computer Security \nAct of 1987. The Capstone cryptographic system \nwill consist of four major components and be con-\ntained on a single integrated circuit microchip that \nprovides nonDoD data encryption for Sensitive But \nUnclassified information. It implements the Skipjack \nalgorithm. See also: Clipper. \n Certification: The comprehensive analysis of the tech-\nnical and nontechnical features, and other safe-\nguards, to establish the extent to which a particular \nMIS meets a set of specified security requirements. \nCertification is part of the accreditation process and \ncarries with it an implicit mandate for accreditation. \nSee also: Accreditation. \n Channel: An information transfer path within a system \nor the mechanism by which the path is affected. \n CHAP: Challenge Handshake Authentication Protocol \ndeveloped by the IETF. \n Child Pornography: Sites that promote, discuss or \nportray children in sexual acts and activity or the \nabuse of children. Pornographic sites that advertise \nor imply the depiction of underage models and that \ndo not have a U.S.C. 2257 declaration on their main \n" }, { "page_number": 836, "text": "Appendix | J Glossary\n803\n page. As of March 13, 2007, all sites categorized as \nchild porn are actually saved into the URL Library \nin the Porn category and are automatically submitted \nto the Internet Watch Foundation for legal verifica-\ntion as child pornography ( http://www.iwf.org.uk/ ). \nIf the IWF agrees that a site and/or any of its hosted \npages are child pornography, they add it those URLs \nto their master list. The master list is downloaded \nnightly and saved into the URL Library in the Child \nPorn category. \n Cipher: An algorithm for encryption or decryption. A \ncipher replaces a piece of information (an element \nof plain text) with another object, with the intent to \nconceal meaning. Typically, the replacement rule \nis governed by a secret key. See also: Decryption, \nEncryption. \n Ciphertext: Form of cryptography in which the plain-\ntext is made unintelligible to anyone who intercepts \nit by a transformation of the information itself, based \non some key. \n CIO-Cyber Web Site: Provides training modules for \nCyber Security subjects. \n Classification: A systematic arrangement of information \nin groups or categories according to established crite-\nria. In the interest of national security it is determined \nthat the information requires a specific degree of pro-\ntection against unauthorized disclosure together with \na designation signifying that such a determination \nhas been made. 
\n Classified Distributive Information Network (CDIN): \nAny cable, wire, or other approved transmission \nmedia used for the clear text transmission of classified \ninformation in certain DOE access controlled envi-\nronments. Excluded is any system used solely for the \nclear text transmission and reception of intrusion/fire \nalarm or control signaling. \n Classified Information System (CIS): A discrete set of \ninformation resources organized for the collection, \nprocessing, maintenance, transmission, and dissemi-\nnation of classified information, in accordance with \ndefined procedures, whether automated or manual. \nGuidance Note: For the purposes of this document, \nan IS may be a standalone, single- or multiuser sys-\ntem or a network comprised of multiple systems \nand ancillary supporting communications devices, \ncabling, and equipment. \n Classified Information Systems Security Plan (ISSP): \nThe basic classified system protection document \nand evidence that the proposed system, or update \nto an existing system, meets the specified protec-\ntion requirements. The Classified ISSP describes the \nclassified IS, any interconnections, and the security \nprotections and countermeasures. This plan is used \nthroughout the certification, approval, and accredita-\ntion process and serves for the lifetime of the classi-\nfied system as the formal record of the system and its \nenvironment as approved for operation. It also serves \nas the basis for inspections of the system. \n Classified Information Systems Security Program: \nThe Classified Information Systems Security Program \nprovides for the protection of classified information \non information systems at LANL. \n Classified Information Systems Security Site Manager \n(ISSM): The manager responsible for the LANL \nClassified Information Systems Security Program. \n Clear or Clearing (MIS Storage Media): The removal \nof sensitive data from MIS storage and other periph-\neral devices with storage capacity, at the end of a \nperiod of processing. It includes data removal in such \na way that assures, proportional to data sensitivity, it \nmay not be reconstructed using normal system capa-\nbilities, i.e., through the keyboard. See also : Object \nReuse, Remanence. \n Clipper: Clipper is an encryption chip developed and \nsponsored by the U.S. government as part of the \nCapstone project. Announced by the White House \nin April 1993, Clipper was designed to balance com-\npeting concerns of federal law-enforcement agencies \nand private citizens by using escrowed encryption \nkeys. See also: Capstone, Skipjack. \n Collaborator: A person not employed by the Laboratory \nwho (1) is authorized to remotely access a LANL \nunclassified computer system located on the site or \n(2) uses a LANL system located off the site. Guidance \nnote: A collaborator does not have an active Employee \nInformation System record. \n Commercial-off-the-Shelf (COTS): Products that are \ncommercially available and can be utilized as gener-\nally marketed by the manufacturer. \n Compromise: The disclosure of sensitive informa-\ntion to persons not authorized access or having a \nneed-to-know. \n Computer Fraud and Abuse Act of 1986: This law \nmakes it a crime to knowingly gain access to a \nfederal government computer without authorization \nand to affect its operation. \n Computer Security: Technological and managerial \nprocedures applied to MIS to ensure the availability, \nintegrity, and confidentiality of information managed \nby the MIS. 
See also: Information Systems Security. \n Computer Security Act of 1987: The law provides \nfor improving the security and privacy of sensitive \n" }, { "page_number": 837, "text": "PART | VIII Appendices\n804\n i nformation in “ federal computer systems ” — “ a com-\nputer system operated by a federal agency or other \norganization that processes information (using a \ncomputer system) on behalf of the federal govern-\nment to accomplish a federal function. ” \n Computer Security Incident: Any event or condition \nhaving actual or potentially adverse effects on an infor-\nmation system. See the Cyber Security Handbook. \n Computing, Communications, and Networking (CCN) \nDivision Web Sites: Describes network services and \ntheir use by system users. \n Confidentiality: The condition when designated infor-\nmation collected for approved purposes is not dissem-\ninated beyond a community of authorized knowers. It \nis distinguished from secrecy, which results from the \nintentional concealment or withholding of informa-\ntion. [OTA-TCT-606] Confidentiality refers to: 1) how \ndata will be maintained and used by the organization \nthat collected it; 2) what further uses will be made of \nit; and 3) when individuals will be required to consent \nto such uses. It includes the protection of data from \npassive attacks and requires that the information (in \nan MIS or transmitted) be accessible only for read-\ning by authorized parties. Access can include printing, \ndisplaying, and other forms of disclosure, including \nsimply revealing the existence of an object. \n Configuration Management (CM): The management \nof changes made to an MIS hardware, software, \nfirmware, documentation, tests, test fixtures, test \ndocumentation, communications interfaces, operating \nprocedures, installation structures, and all changes \nthere to throughout the development and operational \nlife-cycle of the MIS. \n Contingency Plan: The documented organized process \nfor implementing emergency response, backup oper-\nations, and post-disaster recovery, maintained for an \nMIS as part of its security program, to ensure the \navailability of critical assets (resources) and facilitate \nthe continuity of operations in an emergency. See \nalso: Disaster Recovery. \n Contingency Planning: The process of preparing a doc-\numented organized approach for emergency response, \nbackup operations, and post-disaster recovery that \nwill ensure the availability of critical MIS resources \nand facilitate the continuity of MIS operations in an \nemergency. See also: Contingency Plan, Disaster \nRecovery. \n Controlled Access Protection (C2): A category of safe-\nguard criteria as defined in the Trusted Computer \nSecurity Evaluation Criteria (TCSEC). It includes \nidentification and authentication, accountability, \nauditing, object reuse, and specific access restrictions \nto data. This is the minimum level of control for SBU \ninformation. \n Conventional Encryption: A form of cryptosystem \nin which encryption and decryption are performed \nusing the same key. See also: Symmetric Encryption. \n COTS: See: Commercial-off-the-Shelf. \n COTS Software: Commercial-off the Shelf Software – \nsoftware acquired by government contract through a \ncommercial vendor. This software is a standard prod-\nuct, not developed by a vendor for a particular gov-\nernment project. \n Countermeasures: See: Security Safeguards. \n Cracker: See: Hacker. 
\n Criminal Skills: Sites that promote crime or illegal activ-\nity such as credit card number generation, illegal sur-\nveillance and murder. Sites which commercially sell \nsurveillance equipment will not be saved. Sample \nsites: www.illegalworld.com , www.password-crackers.\ncom , and www.spy-cam-surveillance-equipment.com \n Critical Assets: Those assets, which provide direct sup-\nport to the organization’s ability to sustain its mission. \nAssets are critical if their absence or unavailability \nwould significantly degrade the ability of the organi-\nzation to carry out its mission, and when the time that \nthe organization can function without the asset is less \nthan the time needed to replace the asset. \n Critical Processing: Any applications, which are so \nimportant to an organization, that little or no loss of \navailability is acceptable; critical processing must \nbe defined carefully during disaster and contingency \nplanning. See also: Critical Assets. \n Cryptanalysis: The branch of cryptology dealing with \nthe breaking of a cipher to recover information, or \nforging encrypted information what will be accepted \nas authentic. \n Cryptography: The branch of cryptology dealing with \nthe design of algorithms for encryption and decryp-\ntion, intended to ensure the secrecy and/or authentic-\nity of messages. \n Cryptology: The study of secure communications, which \nencompasses both cryptography and cryptanalysis. \n Cyber Security Program: The program mandated to \nensure that the confidentiality, integrity, and avail-\nability of electronic data, networks and computer \nsystems are maintained to include protecting data, \nnetworks and computing systems from unauthorized \naccess, alteration, modification, disclosure, destruc-\ntion, transmission, denial of service, subversion of \nsecurity measures, and improper use. \n DAC: See: C2, Discretionary Access Control and TCSEC. \n" }, { "page_number": 838, "text": "Appendix | J Glossary\n805\n DASD (Direct Access Storage Device): A physical \nelectromagnetic data storage unit used in larger com-\nputers. Usually these consist of cylindrical stacked \nmultiunit assemblies, which have large capacity stor-\nage capabilities. \n Data: A representation of facts, concepts, information, or \ninstructions suitable for communication, interpreta-\ntion, or processing. It is used as a plural noun meaning \n “ facts or information ” as in: These data are described \nfully in the appendix , or as a singular mass noun \nmeaning “ information ” as in: The data is entered into \nthe computer . \n Data Custodian: The person who ensures that infor-\nmation is reviewed to determine if it is classified or \nsensitive unclassified. This person is responsible for \ngeneration, handling and protection, management, \nand destruction of the information. Guidance Note: \nAn alternative name for the data custodian is classi-\nfied information systems application owner. \n DES: Digital Encryption Standard. \n Data Encryption Standard (DES): Data Encryption \nStandard is an encryption block cipher defined and \nendorsed by the U.S. government in 1977 as an offi-\ncial standard (FIPS PUB 59). Developed by IBM®, \nit has been extensively studied for over 15 years and \nis the most well known and widely used cryptosys-\ntem in the world. See also: Capstone, Clipper, RSA, \nSkipjack. 
\n Data Integrity: The state that exists when computerized \ndata are the same as those that are in the source docu-\nments and have not been exposed to accidental or \nmalicious alterations or destruction. It requires that the \nMIS assets and transmitted information be capable of \nmodification only by authorized parties. Modification \nincludes writing, changing, changing status, deleting, \ncreating, and the delaying or replaying of transmitted \nmessages. See also: Integrity, System Integrity. \n Deciphering: The translation of encrypted text or data \n(called ciphertext) into original text or data (called \nplaintext). See also: Decryption. \n Decryption: The translation of encrypted text or data \n(called ciphertext) into original text or data (called \nplaintext). See also: Deciphering. \n Dedicated Security Mode: An operational method \nwhen each user with direct or indirect individual \naccess to a computer system, its peripherals, and \nremote terminals or hosts has a valid personnel secu-\nrity authorization and a valid need-to-know for all \ninformation contained within the system. \n Dedicated System: A system that is specifically and \nexclusively dedicated to and controlled for a specific \nmission, either for full time operation or a specified \nperiod of time. See also: Dedicated Security Mode. \n Default: A value or setting that a device or program auto-\nmatically selects if you do not specify a substitute. \n Degaussing Media: Method to magnetically erase data \nfrom magnetic tape. \n Denial of Service: The prevention of authorized access \nto resources or the delaying of time-critical opera-\ntions. Refers to the inability of a MIS system or any \nessential part to perform its designated mission, either \nby loss of, or degradation of operational capability. \n Department of Defense (DOD) Trusted Computer \nSystem Evaluation Criteria: The National Computer \nSecurity Center (NCSC) criteria intended for use in \nthe design and evaluation of systems that will process \nand/or store sensitive (or classified) data. This docu-\nment contains a uniform set of basic requirements and \nevaluation classes used for assessing the degrees of \nassurance in the effectiveness of hardware and soft-\nware security controls built in the design and evalua-\ntion of MIS. See also: C2, Orange Book, TCSEC. \n DES: See: Data Encryption Standard. See also: Capstone, \nClipper, RSA, Skipjack. \n Designated Accrediting Authority (DAA): A DOE offi-\ncial with the authority to formally grant approval for \noperating a classified information system; the person \nwho determines the acceptability of the residual risk \nin a system that is prepared to process classified infor-\nmation and either accredits or denies operation of the \nsystem. \n Designated Security Officer: The person responsible to \nthe designated high level manager for ensuring that \nsecurity is provided for and implemented throughout \nthe life-cycle of an MIS from the beginning of the \nsystem concept development phase through its design, \ndevelopment, operations, maintenance, and disposal. \n Dial-up: The service whereby a computer terminal can \nuse the telephone to initiate and effect communica-\ntion with a computer. \n Digital Signature Standard: DSS is the Digital Signature \nStandard, which specifies a Digital Signature Algorithm \n(DSA), and is part of the U.S. government’s Capstone \nproject. It was selected by NIST and NSA to be the dig-\nital authentication standard of the U.S. 
government, but \nhas not yet been officially adopted. See also: Capstone, \nClipper, RSA, Skipjack. \n Disaster Recovery Plan: The procedures to be followed \nshould a disaster (fire, flood, etc.) occur. Disaster \nrecovery plans may cover the computer center and \nother aspects of normal organizational functioning. \nSee also: Contingency Plan. \n" }, { "page_number": 839, "text": "PART | VIII Appendices\n806\n Discretionary Access Control (DAC): A means of \nrestricting access to objects based on the identity of \nsubjects and/or groups to which they belong or on \nthe possession of an authorization granting access to \nthose objects. The controls are discretionary in the \nsense that a subject with a certain access permission \nis capable of passing that permission (perhaps indi-\nrectly) onto any other subject. \n Discretionary access controls: Controls that limit access \nto information on a system on an individual basis. \n Discretionary processing: Any computer work that can \nwithstand interruption resulting from some disaster. \n DSS: See: Capstone, Clipper, Digital Signature Standard, \nRSA, Skipjack. \n Dubious/Unsavory: Sites of a questionable legal or ethi-\ncal nature. Sites which promote or distribute products, \ninformation, or devices whose use may be deemed \nunethical or, in some cases, illegal: Warez, Unlicensed \nmp3 downloads, Radar detectors, and Street rac-\ning. Sample sites: www.thepayback.com and www.\nstrangereports.com . \n Emergency Response: A response to emergencies such \nas fire, flood, civil commotion, natural disasters, bomb \nthreats, etc., in order to protect lives, limit the damage \nto property and the impact on MIS operations. \n Enciphering: The conversion of plaintext or data into \nunintelligible form by means of a reversible transla-\ntion that is based on a translation table or algorithm. \nSee also: Encryption. \n Encryption: The conversion of plaintext or data into \nunintelligible form by means of a reversible transla-\ntion that is based on a translation table or algorithm. \nSee also: Enciphering. \n Entity: Something that exists as independent, distinct or \nself-contained. For programs, it may be anything that \ncan be described using data, such as an employee, \nproduct, or invoice. Data associated with an entity are \ncalled attributes. A product’s price, weight, quantities \nin stock, and description all constitute attributes. It is \noften used in describing distinct business organiza-\ntions or government agencies. \n Environment: The aggregate of external circumstance, \nconditions, and events that affect the development, \noperation, and maintenance of a system. Environment \nis often used with qualifiers such as computing envi-\nronment, application environment, or threat environ-\nment, which limit the scope being considered. \n Evaluation: Evaluation is the assessment for con-\nformance with a preestablished metric, criteria, or \nstandard. \n Facsimile: A document that has been sent, or is about to \nbe sent, via a fax machine. \n Firewall: A collection of components or a system that \nis placed between two networks and possesses the \nfollowing properties: 1) all traffic from inside to out-\nside, and vice-versa, must pass through it; 2) only \nauthorized traffic, as defined by the local security \npolicy, is allowed to pass through it; 3) the system \nitself is immune to penetration. 
\n Firmware: Equipment or devices within which computer \nprogramming instructions necessary to the perform-\nance of the device’s discrete functions are electrically \nembedded in such a manner that they cannot be elec-\ntrically altered during normal device operations. \n Friendly Termination: The removal of an employee from \nthe organization when there is no reason to believe that \nthe termination is other than mutually acceptable. \n Gateway: A machine or set of machines that provides \nrelay services between two networks. \n General Support System: An interconnected set of infor-\nmation resources under the same direct management \ncontrol which shares common functionality. A sys-\ntem normally includes hardware, software, informa-\ntion, data, applications, communications, and people. \nA system can be, for example, a local area network \n(LAN) including smart terminals that support a branch \noffice, an agency-wide backbone, a communications \nnetwork, a departmental data processing center includ-\ning its operating system and utilities, a tactical radio \nnetwork, or a shared information processing service \norganization (IPSO). \n Generic Remote Access: Web sites pertaining to the use \nof, or download of remote access clients. \n Green Network: See Open Network. \n Hack: Any software in which a significant portion of the \ncode was originally another program. Many hacked \nprograms simply have the copyright notice removed. \nSome hacks are done by programmers using code \nthey have previously written that serves as a boiler-\nplate for a set of operations needed in the program \nthey are currently working on. In other cases it sim-\nply means a draft. Commonly misused to imply theft \nof software. See also: Hacker. \n Hacker: Common nickname for an unauthorized person \nwho breaks into or attempts to break into an MIS by \ncircumventing software security safeguards. Also, com-\nmonly called a “ cracker. ” See also: Hack, Intruder. \n Hacking: Sites discussing and/or promoting unlawful or \nquestionable tools or information revealing the ability \nto gain access to software or hardware/communications \n" }, { "page_number": 840, "text": "Appendix | J Glossary\n807\n equipment and/or passwords: Password ge neration, \nCompiled binaries, Hacking tools and Software piracy \n(game cracking). Sample sites: www.happyhacker.org , \nand www.phreak.com . \n Hardware: Refers to objects that you can actually touch, \nlike disks, disk drives, display screens, keyboards, \nprinters, boards, and chips. \n Heuristic: Filtering based on heuristic scoring of the \ncontent based on multiple criteria. \n Hostmaster Database: A relational database maintained \nby the Network Engineering Group (CCN-5) that con-\ntains information about every device connected to the \nLaboratory unclassified yellow and green networks. \n HTML Anomalies: Legitimate companies keep their \nWeb sites up to date and standards based to support \nthe newest browser version support and features and \nare malicious code free. Malicious sites frequently \nhave HTML code that is not compliant to standards. \n Identification: The process that enables recognition of \nan entity by a system, generally by the use of unique \nmachine-readable usernames. 
\n Information Security: The protection of information \nsystems against unauthorized access to or modifica-\ntion of information, whether in storage, processing \nor transit, and against the denial of service to author-\nized users or the provision of service to unauthorized \nusers, including those measures necessary to detect, \ndocument, and counter such threats. \n Information Security Officer (ISO): The person respon-\nsible to the designated high level manager for ensuring \nthat security is provided for and implemented through-\nout the life-cycle of an MIS from the beginning of the \nsystem concept development phase through its design, \ndevelopment, operations, maintenance, and disposal. \n Information System (IS): The entire infrastructure, \norganizations, personnel and components for the col-\nlection, processing, storage, transmission, display, \ndissemination and disposition of information. \n Information Systems Security (INFOSEC): The pro-\ntection of information assets from unauthorized access \nto or modification of information, whether in storage, \nprocessing, or transit, and against the denial of serv-\nice to authorized users or the provision of service to \nunauthorized users, including those measures neces-\nsary to detect, document, and counter such threats. \nINFOSEC reflects the concept of the totality of MIS \nsecurity. See also: Computer Security. \n Information System Security Officer (ISSO): The \nworker responsible for ensuring that protection meas-\nures are installed and operational security is maintained \nfor one or more specific classified information systems \nand/or networks. \n IKE: Internet Key Exchange. \n Integrated Computing Network (ICN): LANL’s pri-\nmary institutional network. \n Integrity: A subgoal of computer security which ensures \nthat: 1) data is a proper representation of information; \n2) data retains its original level of accuracy; 3) data \nremains in a sound, unimpaired, or perfect condition; \n3) the MIS perform correct processing operations; and \n4) the computerized data faithfully represent those in \nthe source documents and have not been exposed to \naccidental or malicious alteration or destruction. See \nalso: Data Integrity, System Integrity. \n Interconnected System: An approach in which the net-\nwork is treated as an interconnection of separately \ncreated, managed, and accredited MIS. \n Internet: A global network connecting millions of comput-\ners. As of 1999, the Internet has more than 200 million \nusers worldwide, and that number is growing rapidly. \n Intranet: A network based on TCP/IP protocols (an \nInternet) belonging to an organization, usually a cor-\nporation, accessible only by the organization’s mem-\nbers, employees, or others with authorization. An \nintranet’s Web sites look and act just like any other \nWeb sites, but the firewall surrounding an intranet \nfends off unauthorized access. \n Intruder: An individual who gains, or attempts to gain, \nunauthorized access to a computer system or to gain \nunauthorized privileges on that system. See also: \nHacker. \n Intrusion Detection: Pertaining to techniques, which \nattempt to detect intrusion into a computer or network \nby observation of actions, security logs, or audit data. \nDetection of break-ins or attempts either manually or \nvia software expert systems that operate on logs or \nother information available on the network. \n Invalid Web Pages: Sites where a domain may be \nr egistered but no content is served or the server is \noffline. 
IPsec: Internet Protocol Security is a framework for a set of security protocols at the network or packet-processing layer of network communications. IPsec is widely implemented in firewalls, VPNs, and routers.

ISO/AISO: The persons responsible to the Office Head or Facility Director for ensuring that security is provided for and implemented throughout the life-cycle of an IT, from the beginning of the concept development plan through its design, development, operation, maintenance, and secure disposal.

Issue-Specific Policy: Policies developed to focus on areas of current relevance and concern to an office or facility. Both new technologies and the appearance of new threats often require the creation of issue-specific policies (email, Internet usage).

IT Security: Measures and controls that protect an IT against denial of service and against unauthorized (accidental or intentional) disclosure, modification, or destruction of ITs and data. IT security includes consideration of all hardware and/or software functions.

IT Security Policy: The set of laws, rules, and practices that regulate how an organization manages, protects, and distributes sensitive information.

IT Systems: An assembly of computer hardware, software, and/or firmware configured to collect, create, communicate, compute, disseminate, process, store, and/or control data or information.

Kerberos: A secret-key network authentication system developed at MIT that uses DES for encryption and authentication. Unlike a public-key authentication system, it does not produce digital signatures. Kerberos was designed to authenticate requests for network resources rather than to authenticate authorship of documents. See also: DSS.

Key (digital): A set of code synonymous with key pairs as part of a public key infrastructure. The key pairs include 'private' and 'public' keys. Public keys are generally used for encrypting data, and private keys are generally used for signing and decrypting data.

Key Distribution Center: A system that is authorized to transmit temporary session keys to principals (authorized users). Each session key is transmitted in encrypted form, using a master key that the key distribution center shares with the target principal. See also: DSS, Encryption, Kerberos.

Label: The marking of an item of information that reflects its information security classification. An internal label is the marking of an item of information that reflects the classification of that item within the confines of the medium containing the information. An external label is a visible or readable marking on the outside of the medium or its cover that reflects the security classification information resident within that particular medium. See also: Confidentiality.

LAN (Local Area Network): An interconnected system of computers and peripherals. LAN users can share data stored on hard disks in the network and can share printers connected to the network.

Language: Content-filtering systems can be used to limit the results of an Internet search to those that are in your native language.

LANL Unclassified Network: The LANL unclassified network consists of two internal networks: the unclassified protected network (Yellow Network) and the open network (Green Network).
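The Key (digital) entry above notes that public keys are generally used to encrypt and private keys to sign or decrypt. The following toy Python sketch shows that asymmetry using RSA-style exponentiation in modular arithmetic; the primes are deliberately tiny so the arithmetic is visible, whereas real keys are hundreds of digits long.

# Toy key pair built from small primes (never use values this small in practice).
p, q = 61, 53
n = p * q                  # modulus shared by both keys
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, chosen coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e (Python 3.8+)

public_key, private_key = (e, n), (d, n)

def apply_key(message, key):
    exponent, modulus = key
    return pow(message, exponent, modulus)   # message^exponent mod modulus

ciphertext = apply_key(65, public_key)           # encrypt with the public key
recovered = apply_key(ciphertext, private_key)   # decrypt with the private key
print(ciphertext, recovered)                     # recovered == 65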
\n LDAP: Short for Lightweight Directory Access Protocol, \na set of protocols for accessing information directo-\nries. LDAP is based on the standards contained within \nthe X.500 standard, but is significantly simpler. And \nunlike X.500, LDAP supports TCP/IP, which is nec-\nessary for any type of Internet access. \n Least Privilege: The principle that requires each sub-\nject be granted the most restrictive set of privileges \nneeded for the performance of authorized tasks. The \napplication of this principle limits the damage that \ncan result from accident, error, or unauthorized use. \n Local Area Network: A short-haul data communica-\ntions systems that connects IT devices in a build-\ning or group of buildings within a few square miles, \nincluding (but not limited to) workstations, front end \nprocessors, controllers, switches, and gateways. \n Mail header: Filtering based solely on the analysis of e-\nmail headers. Antispam systems try to use this tech-\nnique as well, but it is not very effective due to the \nease of message header forgery. \n Mailing List: Used to detect mailing list messages and \nfile them in appropriate folders. \n Major Application (MA): A computer application that \nrequires special management attention because of its \nimportance to an organization’s mission; its high devel-\nopment, operating, and/or maintenance costs; or its sig-\nnificant role in the administration of an organization’s \nprograms, finances, property, or other resources. \n Malicious Code/Virus: Sites that promote, demonstrate \nand/or carry malicious executable, virus or worm \ncode that intentionally cause harm by modifying or \ndestroying computer systems often without the user’s \nknowledge. \n Management Controls: Security methods that focus on \nthe management of the computer security system and \nthe management of risk for a system. \n Management Information System (MIS): An MIS is \nan assembly of computer hardware, software, and/or \nfirmware configured to collect, create, communicate, \ncompute, disseminate, process, store, and/or control data \nor information. Examples include: information storage \nand retrieval systems, mainframe computers, minicom-\nputers, personal computers and workstations, office \nautomation systems, automated message processing sys-\ntems (AMPSs), and those supercomputers and process \ncontrol computers (e.g., embedded computer systems) \nthat perform general purpose computing functions. \n" }, { "page_number": 842, "text": "Appendix | J Glossary\n809\n MIS Owner: The official who has the authority to decide \non accepting the security safeguards prescribed for an \nMIS and is responsible for issuing an accreditation \nstatement that records the decision to accept those \nsafeguards. See also: Accreditation Approval (AA), \nApplication Owner, Process Owner. \n MIS Security: Measures or controls that safeguard or \nprotect an MIS against unauthorized (accidental or \nintentional) disclosure, modification, destruction of \nthe MIS and data, or denial of service. MIS security \nprovides an acceptable level of risk for the MIS and \nthe data contained in it. Considerations include: 1) all \nhardware and/or software functions, characteristics, \nand/or features; 2) operational procedures, account-\nability procedures, and access controls at all compu-\nter facilities in the MIS; 3) management constraints; \n4) physical structures and devices; and 5) personnel \nand communications controls. 
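The LDAP entry above describes a simplified protocol for querying information directories over TCP/IP. The sketch below performs a directory lookup with the third-party Python ldap3 library; the server name, bind DN, password, and search base are placeholders, not values from this handbook.

# Simple LDAP directory search (requires the ldap3 package).
from ldap3 import Server, Connection, ALL

server = Server("ldap.example.com", get_info=ALL)       # placeholder directory server
conn = Connection(server,
                  user="cn=reader,dc=example,dc=com",   # placeholder bind DN
                  password="changeit",
                  auto_bind=True)                       # authenticate when the object is created

# Search one subtree for a single user and retrieve two attributes.
conn.search(search_base="dc=example,dc=com",
            search_filter="(uid=jdoe)",
            attributes=["cn", "mail"])

for entry in conn.entries:
    print(entry.entry_dn, entry.cn, entry.mail)

conn.unbind()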
Microprocessor: A semiconductor central processing unit contained on a single integrated circuit chip.

Modem: An electronic device that allows a microcomputer or a computer terminal to be connected to another computer via a telephone line.

Multiuser Systems: Any system capable of supporting more than one user in a concurrent mode of operation.

National Computer Security Center (NCSC): The government agency, part of the National Security Agency (NSA), that produces technical reference materials relating to a wide variety of computer security areas. It is located at 9800 Savage Rd., Ft. George G. Meade, Maryland.

National Institute of Standards and Technology (NIST): The federal organization that develops and promotes measurement, standards, and technology to enhance productivity, facilitate trade, and improve the quality of life.

National Telecommunications and Information Systems Security Policy: Directs federal agencies, by July 15, 1992, to provide automated Controlled Access Protection (C2 level) for MIS when all users do not have the same authorization to use the sensitive information.

Need-to-Know: Access to information based on a clearly identified need to know the information to perform official job duties.

Network: A communications medium and all components attached to that medium whose responsibility is the transference of information. Such components may include MISs, packet switches, telecommunications controllers, key distribution centers, and technical control devices.

Network Security: Protection of networks and their services from unauthorized modification, destruction, or disclosure, and the provision of assurance that the network performs its critical functions correctly and there are no harmful side effects.

NIST: National Institute of Standards and Technology in Gaithersburg, Maryland. NIST publishes a wide variety of materials on computer security, including FIPS publications.

Nonrepudiation: Method by which the sender is provided with proof of delivery and the recipient is assured of the sender's identity, so that neither can later deny having processed the data.

Nonvolatile Memory Units: Devices which continue to retain their contents when power to the unit is turned off (bubble memory, Read-Only Memory/ROM).

Object: A passive entity that contains or receives information. Access to an object potentially implies access to the information it contains. Examples of objects are records, blocks, pages, segments, files, directories, directory trees, and programs, as well as bits, bytes, words, fields, processors, video displays, keyboards, clocks, printers, network nodes, etc.

Object Reuse: The reassignment to some subject of a medium (e.g., page frame, disk sector, or magnetic tape) that contained one or more objects. To be securely reassigned, no residual data from previously contained object(s) can be available to the new subject through standard system mechanisms.

Obscene/Tasteless: Sites that contain explicit graphical or text depictions of such things as mutilation, murder, bodily functions, horror, death, rude behavior, executions, violence, and obscenities. Sites which contain or deal with medical content will not be saved.
Sample sites: www.celebritymorgue.com , \n www.rotten.com , and www.gruesome.com \n Offline: Pertaining to the operation of a functional unit \nwhen not under direct control of a computer. See \nalso: Online. \n Online: Pertaining to the operation of a functional unit \nwhen under the direct control of a computer. See \nalso: Offline. \n Open Network: A network within the LANL Unclassified \nNetwork that supports LANL’s public Internet pres-\nence and external collaborations. See LANL unclassi-\nfied network. \n Operating System: The most important program that \nruns on a computer. Every general-purpose computer \nmust have an operating system to run other programs. \nOperating systems perform basic tasks, such as rec-\nognizing input from the keyboard, sending output to \n" }, { "page_number": 843, "text": "PART | VIII Appendices\n810\n the display screen, keeping track of files and direc-\ntories on the disk, and controlling peripheral devices \nsuch as disk drives and printers. \n Operation Controls: Security methods that focus on \nmechanisms that primarily are implemented and exe-\ncuted by people (as opposed to systems). \n Orange Book: Named because of the color of its cover, \nthis is the DoD Trusted Computer System Evaluation \nCriteria, DoD 5200.28-STD. It provides the informa-\ntion needed to classify computer systems as security \nlevels of A, B, C, or D, defining the degree of trust \nthat may be placed in them. See also: C2, TCSEC. \n Organizational Computer Security Representative \n(OCSR): A LANL person who has oversight respon-\nsibilities for one or more single-user, standalone clas-\nsified or unclassified systems. \n Overwrite Procedure: A process, which removes or \ndestroys data recorded on a computer storage medium \nby writing patterns of data over, or on top of, the data \nstored on the medium. \n Overwriting media: Method for clearing data from mag-\nnetic media. Overwriting uses a program to write (1 s, \n0s, or a combination) onto the media. Overwriting \nshould not be confused with merely deleting the \npointer to a file (which typically happens when a \n “ delete ” command is used). \n Parity: The quality of being either odd or even. The \nfact that all numbers have parity is commonly used \nin data communication to ensure the validity of data. \nThis is called parity checking. \n PBX: Short for private branch exchange, a private tel-\nephone network used within an enterprise. Users of \nthe PBX share a certain number of outside lines for \nmaking telephone calls external to the PBX. \n Pass Code: A one-time-use “ authenticator ” that is gen-\nerated by a token card after a user inputs his or her \npersonal identification number (PIN) and that is sub-\nsequently used to authenticate a system user to an \nauthentication server or workstation. \n Password: A protected word, phrase, or string of sym-\nbols used to authenticate a user’s identity to a sys-\ntem or network. Guidance note: One-time pass codes \nare valid only for a single authentication of a user to \na system; reusable passwords are valid for repeated \nauthentication of a user to a system. \n Peripheral Device: Any external device attached to a \ncomputer. Examples of peripherals include printers, \ndisk drives, display monitors, keyboards, and mice. \n Personal Identification Number (PIN): A number \nknown only to the owner of the token card and which, \nonce entered, generates a one-time pass-code. 
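The Parity entry above notes that the odd or even quality of data is commonly used in data communication to validate data (parity checking). The short Python sketch below shows even-parity checking over a single byte; the framing is illustrative, since serial links and memories normally implement this in hardware.

def parity_bit(byte):
    # Even parity: choose the parity bit so the total number of 1 bits is even.
    return bin(byte & 0xFF).count("1") % 2

def parity_ok(byte, received_parity):
    # True when the received parity bit matches the data (no single-bit error detected).
    return parity_bit(byte) == received_parity

data = 0b1011001                        # four 1 bits, so the parity bit is 0
print(parity_bit(data))                 # 0
print(parity_ok(data, 0))               # True
print(parity_ok(data ^ 0b0000100, 0))   # False: a flipped bit changes the parity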
\n Personnel Security: The procedures established to \nensure that all personnel who have access to any \nsensitive information have all required authorities or \nappropriate security authorizations. \n Phishing: Deceptive information pharming sites that \nare used to acquire personal information for fraud \nor theft. Typically found in hoax e-mail, these sites \nfalsely represent themselves as legitimate Web sites \nto trick recipients into divulging user account infor-\nmation, credit-card numbers, usernames, passwords, \nSocial Security numbers, etc. Pharming, or crimeware \nmisdirects users to fraudulent sites or proxy servers, \ntypically through DNS hijacking or poisoning. \n Phrases: Filtering based on detecting phrases in the con-\ntent text and their proximity to other target phrases. \n Physical Security: The application of physical barri-\ners and control procedures as preventative meas-\nures or safeguards against threats to resources and \ninformation. \n Pornography/Adult Content: Sites that portray sexual \nacts and activity. \n Port: An interface on a computer to which you can con-\nnect a device. \n Port Protection Device: A device that authorizes access \nto the port itself, often based on a separate authen-\ntication independent of the computer’s own access \ncontrol functions. \n Privacy Act of 1974: A US law permitting citizens to \nexamine and make corrections to records the gov-\nernment maintains. It requires that Federal agencies \nadhere to certain procedures in their record keep-\ning and interagency information transfers. See also: \nSystem of Records. \n Private Branch Exchange: Private Branch eXchange \n(PBX) is a telephone switch providing speech con-\nnections within an organization, while also allowing \nusers access to both public switches and private net-\nwork f acilities outside the organization. The terms \nPABX, PBX, and PABX are used interchangeably. \n Process: An organizational assignment of responsibili-\nties for an associated collection of activities that takes \none or more kinds of input to accomplish a specified \nobjective that creates an output that is of value. \n Process Owner: The official who defines the process \nparameters and its relationship to other Customs pro-\ncesses. The process owner has Accrediting Authority \n(AA) to decide on accepting the security safeguards \nprescribed for the MIS process and is responsible \nfor issuing an accreditation statement that records \nthe decision to accept those safeguards. See also: \nApplication Owner. \n" }, { "page_number": 844, "text": "Appendix | J Glossary\n811\n Protected Distribution System (PDS): A type of pro-\ntected conduit system used for the protection of cer-\ntain levels of information. PDS is the highest level \nof protection and is used in public domain areas for \nSRD and lower. \n Protected Transmission System: A cable, wire, conduit, \nor other carrier system used for the clear text transmis-\nsion of classified information in certain DOE envi-\nronments. Protected transmission systems comprise \nprotected distribution systems (PDSs) and classified \ndistributive information networks (CDINs). A wire-\nline or fiber-optic telecommunications system that \nincludes the acoustical, electrical, electromagnetic, and \nphysical safeguards required to permit its use for the \ntransmission of unencrypted classified information. \n Public Law 100-235: Established minimal acceptable \nstandards for the government in computer security \nand information privacy. 
See also: Computer Security \nAct of 1987. \n RADIUS: Remote Authentication Dial-in User Service. \nA long-established de-facto standard whereby user \nprofiles are maintained in a database that remote \nservers can share and authenticate dial-in users and \nauthorize their request to access a system or service. \n Rainbow Series: A series of documents published by \nthe National Computer Security Center (NCSC) to \ndiscuss in detail the features of the DoD, Trusted \nComputer System Evaluation Criteria (TCSEC) and \nprovide guidance for meeting each requirement. The \nname “ rainbow ” is a nickname because each docu-\nment has a different color of cover. See also: NCSC. \n Read: A fundamental operation that results only in the \nflow of information from an object to a subject. \n Real Time: Occurring immediately. Real time can refer \nto events simulated by a computer at the same speed \nthat they would occur in real life. \n Recovery: The process of restoring an MIS facility and \nrelated assets, damaged files, or equipment so as to be \nuseful again after a major emergency which resulted \nin significant curtailing of normal ADP operations. \nSee also: Disaster Recovery. \n Regular Expression: Filtering based on rules written as \nregular expressions. \n Remanence: The residual information that remains on \nstorage media after erasure. For discussion purposes, \nit is better to characterize magnetic remanence as the \nmagnetic representation of residual information that \nremains on magnetic media after the media has been \nerased. The magnetic flux that remains in a magnetic \ncircuit after an applied magnetomotive force has \nbeen removed. See also: Object Reuse. \n Remote Access: Sites that provide information about \nor facilitate access to information, programs, online \nservices or computer systems remotely. Sample sites: \npcnow.webex.com, and www.remotelyanywhere.com. \n Residual Risk: The risk of operating a classified infor-\nmation system that remains after the application of \nmitigating factors. Such mitigating factors include, \nbut are not limited to minimizing initial risk by \nselecting a system known to have fewer vulnerabili-\nties, reducing vulnerabilities by implementing coun-\ntermeasures, reducing consequence by limiting the \namounts and kinds of information on the system, \nand using classification and compartmentalization to \nlessen the threat by limiting the adversaries ’ knowl-\nedge of the system. \n Risk: The probability that a particular threat will exploit \na particular vulnerability of the system. \n Risk Analysis: The process of identifying security risks, \ndetermining their magnitude, and identifying areas \nneeding safeguards. An analysis of an organization’s \ninformation resources, its existing controls, and its \nremaining organizational and MIS vulnerabilities. \nIt combines the loss potential for each resource or \ncombination of resources with an estimated rate of \noccurrence to establish a potential level of damage \nin dollars or other assets. See also: Risk Assessment, \nRisk Management. \n Risk Assessment: Process of analyzing threats to and \nvulnerabilities of an MIS to determine the risks \n(potential for losses), and using the analysis as a basis \nfor identifying appropriate and cost-effective meas-\nures. See also: Risk Analysis, Risk Management. 
Risk analysis is a part of risk management, which is used to minimize risk by specifying security measures commensurate with the relative values of the resources to be protected, the vulnerabilities of those resources, and the identified threats against them. The method should be applied iteratively during the system life-cycle. When applied during the implementation phase or to an operational system, it can verify the effectiveness of existing safeguards and identify areas in which additional measures are needed to achieve the desired level of security. There are numerous risk analysis methodologies and some automated tools available to support them.

Risk Management: The total process of identifying, measuring, controlling, and eliminating or minimizing uncertain events that may affect system resources. Risk management encompasses the entire system life-cycle and has a direct impact on system certification. It may include risk analysis, cost/benefit analysis, safeguard selection, security test and evaluation, safeguard implementation, and system review. See also: Risk Analysis, Risk Assessment.

Router: An interconnection device that is similar to a bridge but serves packets or frames containing certain protocols. Routers link LANs at the network layer.

ROM: Read-Only Memory. See also: Nonvolatile Memory Units.

RSA: A public-key cryptosystem for both encryption and authentication based on exponentiation in modular arithmetic. The algorithm was invented in 1977 by Rivest, Shamir, and Adleman and is generally accepted as practical and secure for public-key encryption. See also: Capstone, Clipper, DES, Skipjack.

Rules of Behavior: Rules established and implemented concerning use of, security in, and the acceptable level of risk for the system. Rules will clearly delineate responsibilities and expected behavior of all individuals with access to the system. Rules should cover such matters as work at home, dial-in access, connection to the Internet, use of copyrighted works, unofficial use of Federal Government equipment, the assignment and limitation of system privileges, and individual accountability.

Safeguards: Countermeasures, specifications, or controls, consisting of actions taken to decrease an organization's existing degree of vulnerability to a given threat, or the probability that the threat will occur.

Security Incident: An MIS security incident is any event and/or condition that has the potential to impact the security and/or accreditation of an MIS and may result from intentional or unintentional actions. See also: Security Violation.

Security Plan: Document that details the security controls established and planned for a particular system.

Security Policy: The set of laws, rules, directives, and practices that regulate how an organization manages, protects, and distributes controlled information.

Security Requirements: Types and levels of protection necessary for equipment, data, information, applications, and facilities to meet security policies.

Security Safeguards (Countermeasures): The protective measures and controls that are prescribed to meet the security requirements specified for a system.
Those safeguards may include, but are not necessarily limited to: hardware and software security features; operating procedures; accountability procedures; access and distribution controls; management constraints; personnel security; and physical structures, areas, and devices. Also called safeguards or security controls.

Security Specifications: A detailed description of the security safeguards required to protect a system.

Security Violation: An event that may result in disclosure of sensitive information to unauthorized individuals, or that results in unauthorized modification or destruction of system data, loss of computer system processing capability, or loss or theft of any computer system resources. See also: Security Incident.

Sensitive Data: Any information whose loss, misuse, modification, or unauthorized access could affect the national interest or the conduct of federal programs, or the privacy to which individuals are entitled under Section 552a of Title 5, U.S. Code, but that has not been specifically authorized under criteria established by an Executive order or an act of Congress to be kept classified in the interest of national defense or foreign policy.

Sensitive Unclassified Information: Information for which disclosure, loss, misuse, alteration, or destruction could adversely affect national security or other federal government interests. Guidance Note: National security interests are those unclassified matters that relate to the national defense or to United States (US) foreign relations. Other government interests are those related to, but not limited to, a wide range of government or government-derived economic, human, financial, industrial, agricultural, technological, and law-enforcement information, and to the privacy or confidentiality of personal or commercial proprietary information provided to the U.S. government by its citizens. Examples are Unclassified Controlled Nuclear Information (UCNI), Official Use Only (OUO) information, Naval Nuclear Propulsion Information (NNPI), Export Controlled Information (ECI), In Confidence information, Privacy Act information (such as personal/medical information), proprietary information (for example, from a cooperative research and development agreement, or CRADA), State Department Limited Official Use (LOU) information, and Department of Defense For Official Use Only (FOUO) information.

Sensitivity Level: The highest classification level and classification category of information to be processed on an information system.

Separation of Duties: The dissemination of tasks and associated privileges for a specific computing process among multiple users to prevent fraud and errors.

Server: The control computer on a local area network that controls software access to workstations, printers, and other parts of the network.

Site: Usually a single physical location, but it may be one or more MIS that are the responsibility of the DSO. The system may be a standalone MIS, a remote site linked to a network, or workstations interconnected via a local area network (LAN).

Skipjack: A classified, NSA-designed encryption algorithm contained in the Clipper Chip.
It is substantially stronger than DES and was intended to provide a federally mandated encryption process that would enable law enforcement agencies to monitor and wiretap private communications. See also: Capstone, Clipper, DES, RSA.

Smart Card: A credit-card-sized device with embedded microelectronic circuitry for storing information about an individual. This is not a key or token, as used in the remote access authentication process.

SNMP: Simple Network Management Protocol.

Software: Computer instructions or data. Anything that can be stored electronically is software.

Software Copyright: The right of the copyright owner to prohibit copying and/or issue permission for a customer to employ a particular computer program.

SPAM: To crash a program by overrunning a fixed-size buffer with excessively large input data. Also, to cause a person or newsgroup to be flooded with irrelevant or inappropriate messages.

Spyware: Sites that promote, offer, or secretively install software to monitor user behavior, track personal information, record keystrokes, and/or change user computer configuration without the user's knowledge and consent, for malicious or advertising purposes. Includes sites with software that can "phone home" to transfer user information.

Standard Security Procedures: Step-by-step security instructions tailored to users and operators of MIS that process sensitive information.

Standalone System: A single-user MIS not connected to any other systems.

Symmetric Encryption: See: Conventional Encryption.

System: An organized hierarchy of components (hardware, software, data, personnel, and communications, for example) having a specified purpose and performance requirements.

System Administrator: The individual responsible for the installation and maintenance of an information system, providing effective information system utilization, required security parameters, and implementation of established requirements.

System Availability: The state that exists when required automated information services can be performed within an acceptable time period, even under adverse circumstances.

System Failure: An event or condition that results in a system failing to perform its required function.

System Integrity: The attribute of a system relating to the successful and correct operation of computing resources. See also: Integrity.

System of Records: A group of any records under the control of the Department from which information is retrieved by the name of an individual, or by some other identifying number, symbol, or other identifying particular assigned to an individual. See also: Privacy Act of 1974.

System Owner: The person, team, group, or division that has been assigned and has accepted responsibility for Laboratory computer assets.

System Recovery: Actions necessary to restore a system's operational and computational capabilities, and its security support structure, after a system failure or penetration.

System User: An individual who can receive information from, input information to, or modify information on a LANL information system without an independent review. Guidance Note: This term is equivalent to computer information system user, or computer user, found in other Laboratory documentation. System users may be both LANL workers and collaborators.
For desktop systems, a single individual may be both the system user and the system owner.

TCP/IP: Transmission Control Protocol/Internet Protocol. The Internet is based on this suite of protocols.

TCSEC: Trusted Computer System Evaluation Criteria (TCSEC), DoD 5200.28-STD, Department of Defense, 1985. Establishes uniform security requirements, administrative controls, and technical measures to protect sensitive information processed by DoD computer systems. It provides a standard for security features in commercial products and gives a metric for evaluating the degree of trust that can be placed in computer systems for the securing of sensitive information. See also: C2, Orange Book.

Technical Controls: Security methods consisting of hardware and software controls used to provide automated protection to the system or applications. Technical controls operate within the technical system and applications.

Technical Security Policy: Specific protection conditions and/or protection philosophy that express the boundaries and responsibilities of the IT product in supporting the information protection policy control objectives and countering expected threats.

Telecommunications: Any transmission, emission, or reception of signals, writing, images, sound, or other data by cable, telephone lines, radio, visual, or any electromagnetic system.

Terrorist/Militant/Extremist: Sites that contain information regarding militias, anti-government groups, terrorism, anarchy, etc.: anti-government/anti-establishment material and bomb-making/usage (should also be saved in criminal skills). Sample sites: www.michiganmilitia.com, www.militiaofmontana.com, and www.ncmilitia.org.

Test Condition: A statement defining a constraint that must be satisfied by the program under test.

Test Data: The set of specific objects and variables that must be used to demonstrate that a program produces a set of given outcomes. See also: Disaster Recovery, Test Program.

Test Plan: A document, or a section of a document, which describes the test conditions, data, and coverage of a particular test or group of tests. See also: Disaster Recovery, Test Condition, Test Data, Test Procedure (Script).

Test Procedure (Script): A set of steps necessary to carry out one test or a group of tests. These include steps for test environment initialization, test execution, and result analysis. The test procedures are carried out by test operators.

Test Program: A program which implements the test conditions when initialized with the test data and which collects the results produced by the program being tested. See also: Disaster Recovery, Test Condition, Test Data, Test Procedure (Script).

The Computer Security Plans for General Support Systems (GSS) and Major Applications (MA): Plans that detail the specific protection requirements for major applications and general support systems.

The Cyber Security Handbook: A Web-site handbook that details the cyber security requirements for system users, system administrators, and SRLMs who access electronic information.
Threat: An event, process, activity (act), substance, or quality of being, perpetrated by one or more threat agents, which, when realized, has an adverse effect on organization assets, resulting in losses attributed to direct loss, related direct loss, delays or denials, disclosure of sensitive information, modification of programs or databases, and intangible losses (goodwill, reputation, etc.).

Threat Agent: Any person or thing which acts, or has the power to act, to cause, carry, transmit, or support a threat. See also: Threat.

Token Card: A device used in conjunction with a unique PIN to generate a one-time pass code (for example, CRYPTOCard® or SecurID®).

Trapdoor: A secret, undocumented entry point into a computer program, used to grant access without normal methods of access authentication. See also: Malicious Code.

Trojan Horse: A computer program with an apparently or actually useful function that contains additional (hidden) functions that surreptitiously exploit the legitimate authorizations of the invoking process to the detriment of security. See also: Malicious Code, Threat Agent.

Trusted Computing Base (TCB): The totality of protection mechanisms within a computer system, including hardware, firmware, and software, the combination of which is responsible for enforcing a security policy. A TCB consists of one or more components that together enforce a security policy over a product or system. See also: C2, Orange Book, TCSEC.

Trusted Computing System: A computer and operating system that employs sufficient hardware and software integrity measures to allow its use for simultaneously processing a range of sensitive information, and that can be verified to implement a given security policy.

Unclassified Cyber Security Program Plan: A plan that provides a single source of unclassified computer security program information, specifies the minimum protections and controls, and references the detailed source material that pertains to the program.

Unclassified Information Systems Security Site Manager: The manager responsible for the LANL Unclassified Information Systems Security Program.

Unclassified Protected Network: A network within the LANL unclassified network that is designed to protect the resident systems from unauthorized access and is separated from the Internet by a firewall that controls external access to the network. See also: LANL Unclassified Network.

Unfriendly Termination: The removal of an employee under involuntary or adverse conditions. This may include termination for cause, RIF, involuntary transfer, resignation over "personality conflicts," and situations with pending grievances.

UPS (Uninterruptible Power Supply): A system of electrical components that provides a buffer between utility power, or another power source, and a load that requires uninterrupted, precise power. This often includes a trickle-charge battery system which permits a continued supply of electrical power during brief interruptions (blackouts, brownouts, surges, electrical noise, etc.) of normal power sources.

User: Any person who is granted access privileges to a given IT.

User Interface: The part of an application that the user works with. User interfaces can be text-driven, such as DOS, or graphical, such as Windows.
\n Verification: The process of comparing two levels of \nsystem specifications for proper correspondence. \n Virus: Code imbedded within a program that causes \na copy of itself to be inserted in one or more other \nprograms. In addition to propagation, the virus usu-\nally performs some unwanted function. Note that a \nprogram need not perform malicious actions to be a \nvirus; it need only infect other programs. See also: \nMalicious Code. \n VSAN: Virtual SAN. \n Vulnerability: A weakness, or finding that is non-\ncompliant, non-adherent to a requirement, a specifica-\ntion or a standard, or unprotected area of an otherwise \nsecure system, which leaves the system open to poten-\ntial attack or other problem. \n WAN (Wide Area Network): A network of LANs, \nwhich provides communication, services over a geo-\ngraphic area larger than served by a LAN. \n WWW: See: World Wide Web. \n World Wide Web: An association of independent infor-\nmation databases accessible via the Internet. Often \ncalled the Web, WWW, or W. \n Worm: A computer program that can replicate itself and \nsend copies from computer to computer across net-\nwork connections. Upon arrival, the worm may be \nactivated to replicate and propagate again. In addi-\ntion to propagation, the worm usually performs some \nunwanted function. See also: Malicious Code. \n Write: A fundamental operation that results only in the \nflow of information from a subject to an object. \n Yellow Network: See LANL Unclassified Network. \n" }, { "page_number": 849, "text": "This page intentionally left blank\n" }, { "page_number": 850, "text": "817\n Index \n A \n AAA . See Authentication, authorization, and \naccounting (AAA) \n Aaditya Corporation , 777 \n Abelian group , 404 \n Abstract model \n network interactions and , 189 – 191 \n cross-infrastructure cyber cascading \nattacks , 191 – 192 \n sample cascading attack , 191 \n vulnerabilities, isolating , 192 \n Abstract Syntax Notation (aka ASN.1) , 444 \n aCAT . See Advanced cellular network \nvulnerability assessment toolkit (aCAT) \n Access \n control , 48 – 49 , 59 , 476 – 478 \n EPAL , 477 \n P3P , 477 \n standards , 261 \n subsystem , 640 \n XACML , 477 \n mesh routers , 172 \n SAN \n ACL , 570 \n DIF , 570 \n partitioning , 573 – 574 \n physical , 571 \n securing management interfaces , 573 \n separation of functions , 573 \n source ID (S_ID) checking , 574 \n Access control entries (ACE) , 49 , 89 \n Access control list (ACL) , 49 , 66 , 69 , 89 , 161 , \n 162 , 257 , 374 , 570 \n Access-list , 101 , 162 \n Access point (AP) mode , 170 \n Accountability, IT security management , 261 \n Accounting, user access control , 51 \n Accurate Background, Inc. , 8 \n ACE . See Access control entries (ACE) \n ACL . See Access control lists (ACL) \n Acquisti, A. , 475 \n Active Directory ® , 766 \n ActivePorts , 20 \n Activeworx , 781 \n Ad-Aware 2008 , 781 \n Ad-Aware 2008 Defi nition File , 782 \n Additive cipher , 401 \n Address resolution protocol (ARP) , 98 \n Address space layout randomization (ASLR) , \n 698 \n Adelman, Leonard , 516 \n Ad hoc networks, wireless \n bootstrapping in , 178 \n characteristics , 171 – 172 \n mesh networks , 171 – 172 \n sensor networks , 171 \n Administrative controls, security management \nsystem , 258 \n Administrative service protocols, fi rewalls , \n 361 – 363 \n central log fi le management , 362 – 363 \n DHCP , 363 \n ICMP , 362 \n NTP , 362 \n routing protocols , 361 – 362 \n ADSL . 
See Asymmetric Digital Subscriber \nLine (ADSL) \n Advanced cellular network vulnerability \nassessment toolkit (aCAT) , 198 – 199 \n Advanced Encryption Standard (AES) , 38 , \n 173 , 215 , 404 , 516 , 765 \n with various Windows Operating Systems , \n 770 \n Advent Information Management Ltd , 777 \n AES . See Advanced Encryption Standard \n(AES) \n AES cipher suites , 766 – 767 \n AF . See Application fi rewalls (AF) \n Aggregators, instant messaging (IM) , 462 \n Agrawal, R. , 479 \n Aide , 89 \n Air Force Research Laboratory , 793 \n Aka ASN.1 . See Abstract Syntax Notation \n(aka ASN.1) \n AlAboodi, Saad Saleh , 226 , 227 \n Aladdin Knowledge Systems , 49 \n Alert.ids fi le , 156 \n Algebraic attack , 38 \n Algebraic Packet Marking (APM) scheme , \n 343 \n Algebraic structure, data encryption , 404 – 407 \n Allen, Julia , 6 \n Alphanumeric symbols , 399 \n Ambient intelligence (AmI) world \n mobile user-centric identity management \nin , 290 – 292 \n AmI scenario , 290 – 291 \n requirements for , 291 – 292 \n American National Standards Institute \n(ANSI) , 512 \n American Standard Code for Information \nInterchange (ASCII) , 24 \n AmI . See Ambient intelligence (AmI) world \n AMPEG Security Lighthouse , 777 \n Analysis, risk , 610 – 611 \n Ancheta, Jeanson James , 129 \n Anderson, T. , 129 \n Anomaly-based analysis , 166 – 167 \n Anomaly detection systems , 240 \n Anonymity, and privacy \n crowd system , 485 \n freedom network , 485 \n k- anonymity , 479 – 480 \n mix networks , 484 – 485 \n Anonymous Internet proxies , 485 \n Ansari, N. , 343 \n ANSI . See American National Standards \nInstitute (ANSI) \n Anti-malware software , 301 – 302 \n Antispam technology , 747 \n Antivirus \n protection , 747 \n software . See Anti-malware software \n Anton, A. , 504 \n APM . See Algebraic Packet Marking (APM) \nscheme \n AppArmor , 48 \n APPEL . See P3P Preference Exchange \nLanguage (APPEL) \n Apple , 11 \n Application fi rewalls (AF) , 48 \n Application Layer, TCP/IP , 298 \n Application layer fi rewalls , 163 , 241 , \n 354 – 355 \n Application penetration test , 378 \n Application-specifi c integrated chip (ASIC) , \n 141 \n ARAN . See Authenticated Routing for ad hoc \nNetworks (ARAN) \n Architecture \n cellular network , 184 \n" }, { "page_number": 851, "text": "Index\n818\n RFID system \n back-end database , 207 \n readers , 206 – 207 \n tags . See RFID tags \n Unix \n access rights , 84 \n fi le system , 82 \n kernel , 82 \n process , 84 \n users and groups , 82 , 84 \n ArcSight , 781 \n Ardagna, C. A. , 478 , 482 \n Aref, W. G. , 481 \n Ariadne, routing protocol , 176 \n Armstrong, P. , 474 \n ARP . See Address resolution protocol (ARP) \n Array-level encryption , 586 – 587 \n Arsenault, B. , 54 \n ASCII . See American Standard Code for \nInformation Interchange (ASCII) \n Ashampoo FireWall , 782 \n ASIC . See Application-specifi c integrated \nchip (ASIC) \n ASIS International , 10 \n ASN.1 Object Identifi er (OID) , 446 \n AS-REQ/REP . See Authentication service \nrequest/ response (AS-REQ/REP) \n Assets \n defi ned , 606 \n management, and IM , 463 \n primary , 611 \n supporting , 611 \n Asymmetric cryptography , 516 \n Asymmetric Digital Subscriber Line (ADSL) , \n 512 \n Asymmetric encryption , 412 \n Asymmetric keys authentication , 115 – 116 \n Asynchronous Transfer Mode (ATM) , 514 \n A-Team , 124 \n ATM . See Asynchronous Transfer Mode (ATM) \n Atsec , 777 \n AT & T , 775 – 776 \n Attackers , 167 , 296 – 297 \n Attacks \n methods . 
See also Intrusion \n exploit attack , 54 – 55 \n password cracking , 54 \n reconnaissance techniques . See \n Reconnaissance techniques \n social engineering attack , 55 \n phase , 373 \n traceback and attribution , 341 – 346 \n IP traceback , 341 – 344 \n stepping-stone attack attribution , \n 344 – 246 \n Attribution \n attack traceback and , 341 – 346 \n defi ned , 339 \n Stepping-stone attack , 244 – 246 \n Audit , 59 \n instant messaging (IM) , 464 – 465 \n risk analysis , 47 \n SAN , 572 – 573 \n systems (both manual and automated) , 747 \n trails, for IT security management , 261 \n Authenticated Routing for ad hoc Networks \n(ARAN) , 175 – 176 \n Authentication , 59 , 67 , 110 , 138 – 139 , \n 653 – 654 \n asymmetric keys , 115 – 116 \n identities , 116 \n PKI , 87 \n symmetric key , 114 – 115 \n two-factor , 87 \n user access control , 49 – 50 \n vs. authorization , 238 \n Authentication, authorization, and accounting \n(AAA) , 17 , 568 \n Authentication service request/ response \n(AS-REQ/REP) , 770 \n Authenticity, identity management , 271 \n AuthN . See Authentication \n Authority key identifi er , 445 \n Authorization , 59 , 67 , 87 – 88 \n user access control , 50 – 51 \n AuthZ . See Authorization \n Automated network defense , 697 – 698 \n Availability, of information , 256 \n Avast Virus Cleaner Tool , 782 \n AVG Anti-Virus , 781 \n AVG Internet Security , 782 \n The Aviation and Transportation Security Act \nof 2001 (PL 107-71) , 663 \n Avira AntiVir Personal-Free Antivirus , 781 \n Avira Premium Security Suite , 782 \n Azapo case , 689 \n B \n B ä cher, P. , 123 \n Backbone mesh routers , 172 \n Backdoor threat , 295 \n Back-end database, in RFID system , 207 \n Background knowledge attack, to k-\n anonymity , 480 \n Backups, IT security management , 261 \n Backup wizard , 765 \n Bandwidth usage in content fi ltering , 742 \n Banned word lists , 726 \n Base station (BS) , 185 , 187 \n Basson case , 689 \n Bastille , 91 , 92 \n Baudot code , 24 \n Bayardo, R. J. , 479 \n Bayesian fi ltering , 62 , 210 \n Bayesian fi lters , 727 \n Bayes Theorem , 615 \n BCP . See Business continuity planning (BCP) \n Behavior-based detection , 61 , 65 \n Bejtlich, Richard , 63 \n Belenky, A. , 343 \n Bell labs , 32 \n Bellovin, S. M. , 342 \n Beresford, A. R. , 481 \n Berinato, S. , 119 \n BGP . See Border Gateway Protocol (BGP) \n Bhalla, N. , 54 \n BIA . See Business impact analysis (BIA) \n Bigram , 31 \n BindView Policy Compliance , 777 \n Binkley, J. , 125 \n Biometrics , 640 \n architecture , 647 \n data capture , 648 \n data storage subsystem , 649 \n decision subsystem , 649 – 652 \n matching subsystem , 649 \n signal processing subsystem , 648 – 649 \n current ISO/IEC standards for , 647 \n defi ned , 645 \n designing of , 646 \n main operations of \n authentication , 653 – 654 \n enrollment , 652 – 653 \n identifi cation , 654 \n relevant standards , 646 – 647 \n security considerations \n birthday attacks , 656 – 657 \n comparison of selected biometric \ntechnologies , 657 – 658 \n Doddington’s Zoo , 656 \n error rates , 655 – 656 \n storage of templates , 658 – 659 \n Birthday attacks , 656 – 657 \n Bit-fl ipping attack , 109 – 110 \n Bizeul, D. 
, 123 \n BlackBerry , 397 \n Black-box test , 371 – 372 \n Black hats, and instant messaging (IM) , 455 \n Black-hole attack , 105 \n Blacklist and whitelist determination , 740 \n Blended malware , 295 \n Blind signature , 709 \n Blind test , 367 \n Block-based IP storage , 594 \n" }, { "page_number": 852, "text": "Index\n819\n Block ciphers , 35 – 36 , 106 – 107 \n Bloom, Burton H. , 343 \n Bloom fi lters , 343 – 344 \n Blue Security Inc. , 694 \n Blum, A. , 345 \n Bonatti, P. , 478 \n Boneh, D. , 451 \n Boolean probabilities , 199 – 200 \n Boot/root fl oppies , 82 \n Boot-sector viruses , 684 \n Bootstrapping , 177 \n in ad hoc networks , 178 \n in sensor networks , 178 \n Border Gateway Protocol (BGP) , 100 \n The Border Security and Visa Entry Reform \nAct of 2002 , 664 \n Bot-herder . See Botmaster \n BotHunter , 125 \n Botmaster , 119 , 127 \n C & C traffi c-laundering techniques , 128 , \n 129 \n locating and identifying , 128 \n traceback , 128 \n beyond internet , 130 – 132 \n challenges , 129 – 130 \n Botnets , 42 , 57 , 119 , 230 \n business model , 123 – 124 \n defenses \n botmaster, locating and identifying , 128 \n C & C channels, encryption to protect , \n 126 – 128 \n C & C servers, detection and \nneutralization , 125 – 126 \n C & C traffi c, detection of , 125 \n removing individual bots , 124 – 125 \n infection sequence, centralized IRC-based , \n 122 \n origins of , 120 \n protocols , 120 – 122 \n topologies , 120 \n centralized , 121 \n P2P , 121 – 122 \n tracking , 346 \n Bots , 42 , 57 , 119 . See also Botnets \n life cycle , 122 – 123 \n BotSniffer , 126 \n Bounced messages, computer forensics , \n 322 – 324 \n Bowen, Pauline , 248 \n Boyd, Chris , 41 \n Bridge CA , 443 – 444 \n Bridging , 95 \n Brocade Secure Fabric OS , 779 \n Brodie, C. , 503 \n BS . See Base station (BS) \n Buffer overfl ow , 54 – 55 \n Burch, H. , 342 \n Business, and privacy , 475 – 476 \n Business communications, security \n additional guidelines for , 242 \n mobile IT systems, rules , 242 \n open networks , 242 \n protection resources, handling , 242 \n self-protection, rules , 241 \n Business continuity planning (BCP) , 235 \n Business continuity strategy \n IT security management and , 263 \n Business impact analysis (BIA) , 143 , 233 \n Business reputation management , 704 \n Butler, Jamie , 55 \n Byzantine failures , 100 \n C \n CA . See Certifi cate authorities (CA) \n Caeser cipher . See Shift cipher \n California Offi ce of Information Security and \nPrivacy Protection (OISPP) , 670 \n Callas, Jon , 450 \n Call delivery service , 185 – 186 \n Call forwarding service (CFS) \n cross-infrastructure cyber cascading attacks \non , 191 – 192 \n email-based , 184 \n Callio Secura 17799 , 618 – 619 \n Call walking , 554 \n Cameron, Kim , 272 \n Cameron’s principles , 272 \n Canadian Standards Association Privacy \nPrinciples (CSAPP) , 488 – 490 \n Caralli, Richard A. , 225 \n Cardholder unique identifi er (CHUID) , 640 \n Carroll, B. , 59 \n Carving, fi le , 318 – 320 \n Casper , 481 – 482 \n Cassandra , 7 \n Casual surfi ng mistake , 740 \n CAT . See Cellular network vulnerability \nassessment toolkit (CAT) \n Category blocking , 726 – 727 \n CBC . See Cipher block chaining (CBC) \n CCleaner , 781 \n CCMP . See Cipher Block Chaining Message \nAuthentication Code Protocol (CCMP) \n CCTA Risk Analysis and Management \nMethodology (CRAMM) , 616 – 617 \n CDMA . 
See Code division multiple access \n(CDMA) \n Cell phones , 170 \n penetration test , 377 – 378 \n Cellular networks , 169 \n architecture , 184 \n attack taxonomy \n abstract model . See Abstract model \n three-dimensional , 192 – 193 \n call delivery service , 185 – 186 \n cellular telephone networks , 170 \n security of \n in core network , 187 – 188 \n core network organization , 185 \n Internet connectivity and , 188 \n PSTN connectivity and , 188 – 189 \n in radio access network , 186 – 187 \n vulnerability analysis , 193 \n aCAT , 198 – 199 \n CAT , 195 – 198 \n eCAT , 199 – 201 \n wireless LANs , 170 – 171 \n Cellular network vulnerability assessment \ntoolkit (CAT) \n attack graph , 195 \n attack scenario , 197 – 198 \n edges , 197 \n nodes , 196 – 197 \n trees , 197 \n effect detection rules, cascading , 195 \n Cellular network vulnerability assessment \ntoolkit for evaluation (eCAT) , 199 – 201 \n Cellular telephone networks , 170 \n Center for Education and Research in \nInformation Assurance and Security \n(CERIAS) , 7 \n Center for Internet Security (CIS) , 598 \n CenterTrack , 342 \n Centralized management solution , 275 \n vs. federation identity management , \n 275 – 276 \n Central log fi le management , 362 – 363 \n Central Scans \n pros and cons of , 387 \n vs. local scans , 387 – 388 \n Centrex systems , 508 – 509 \n CERIAS . See Center for Education and \nResearch in Information Assurance and \nSecurity (CERIAS) \n CERT . See Computer Emergency Response \nTeam (CERT) \n Cert (ID A , V) , 115 \n Certifi cate \n PGP , 449 \n X.509 , 440 – 442 , 444 – 446 \n delta CRL , 441 \n OSCP , 441 – 442 \n validation , 439 – 440 \n X.509 V1 format , 445 \n" }, { "page_number": 853, "text": "Index\n820\n Certifi cate (Continued) \n X.509 V2 format , 445 \n X.509 V3 format , 445 \n Certifi cate authorities (CA) , 115 , 436 , 701 \n Certifi cate Practice Statement (CPS) , 447 \n Certifi cate Revocation List (CRL) , 440 \n data fi elds in , 441 \n delta , 441 \n format of revocation record in , 441 \n Certifi cation process, to security management \nsystem , 255 – 256 \n Certifi ed Information Systems Security \nProfessional (CISSP) certifi cation , 10 \n CFAA . See Computer Fraud and Abuse Act \n(CFAA) \n CFB . See Cipher feedback (CFB) \n CF6 Luxembourg S.A. , 777 \n CFS . See Call forwarding service (CFS) \n Cha, A. E. , 123 \n Change management \n intranet security , 142 – 143 \n SAN , 571 \n Chaum, D. , 484 \n Checkpoint , 239 \n Checks parameter , 442 \n Chemical, radiological, and biological \nhazards , 632 \n Chen, S. , 130 \n Cheswick, B. , 342 \n Chiang, K. , 121 \n Chief information offi cer/director of \ninformation technology \n role in security policies , 256 \n Chief information security offi cer (CISO) , 152 \n Chief privacy offi cer (CPO) , 502 \n Children’s Internet Protection Act (CIPA) , \n 724 , 725 – 726 , 735 – 736 \n Chi-square test , 33 \n Chmod command, access rights and , 84 \n ChoicePoint , 473 , 474 \n Chosen-ciphertext attack , 416 \n Chow, C. Y. 
, 481 \n Chroot jail , 85 – 86 \n Cipher block chaining (CBC) , 36 , 412 \n Cipher Block Chaining Message \nAuthentication Code Protocol (CCMP) , \n 245 \n Cipher Block Chaining mode , 108 – 109 \n Cipher feedback (CFB) , 412 \n Ciphers , 14 , 15 \n block , 35 – 36 \n cracking , 33 – 34 \n defi nition of , 24 \n Kasiski/Kerckhoff method , 30 – 31 \n one-time pad , 32 – 33 \n polyalphabetic , 29 – 30 \n shift , 26 – 29 \n stream , 31 – 32 \n substitution , 25 – 26 \n suites \n AES , 766 – 767 \n ECC , 767 – 768 \n Verman , 31 – 32 \n XOR , 34 – 35 \n Ciphertext , 24 , 106 , 577 \n Circle of trust, Liberty Alliance , 277 \n Cisco , 159 , 161 , 162 \n CISO . See Chief information security offi cer \n(CISO) \n CISSP . See Certifi ed Information Systems \nSecurity Professional (CISSP) \n CISSP 10 domains, of information security , \n 225 \n CBK , 226 \n Citicus ONE-security risk management , 777 \n Claburn, T. , 124 \n Clarke, Arthur C. , 423 \n Classical cryptography , 399 – 402 \n Clear-to-send (CTS) packets , 171 \n Click fraud , 124 \n Client-based proxies , 737 – 738 \n Cloud computing , 225 \n COBRA application, for risk management , \n 619 \n Code division multiple access (CDMA) , 170 \n CodeRed , 684 \n Coldsite , 144 \n Commercial business and content fi ltering , 725 \n Commercial uses, applied computer forensics , \n 330 \n Common Body of Knowledge (CBK) , 226 \n Common Object Resource Broker \nArchitecture (CORBA) , 355 , 360 \n Common vulnerabilities and exposures \n(CVE) , 54 \n Common vulnerability scoring system \n(CVSS) , 54 \n Communications \n applied computer forensics , 331 \n channel \n downlink , 423 \n uplink , 423 \n risk management , 614 \n Comodo Firewall Pro , 782 \n Company’s information system (IS) , 754 \n Competency, eDiscovery , 311 \n Compliance, instant messaging (IM) , 464 \n Computer Crime Report , 339 \n Computer Emergency Response Team \n(CERT) , 150 , 685 \n Computer forensics , 748 \n applied , 329 – 332 \n computer programming, knowledge \nof , 331 \n education/certifi cation in , 330 \n experience in , 329 \n job description, technologist , 329 – 330 \n job description management , 330 \n professional practitioner, background \nof , 330 \n publishing articles , 331 – 332 \n testimonial , 329 \n tracking information , 329 \n in court system , 310 – 311 , 334 – 337 \n correcting mistakes , 336 – 337 \n defendants/plaintiffs/prosecutors , \n 334 – 335 \n direct and cross-examination , 335 \n pretrial motions , 335 \n surrebuttal , 335 \n testifying , 335 – 336 \n data analysis , 308 – 310 \n database reconstruction , 310 \n ethics and green-home-plate-gallery-\nview , 309 – 310 \n defi ned , 307 \n email headers/email receipts/bounced \nmessages , 322 – 324 \n expert testifying , 332 – 334 \n certainty without doubt , 334 \n degrees of certainty , 332 – 333 \n fi rst principles , 325 \n hacking XP password , 325 – 328 \n internet history , 312 \n intrusion detection and , 305 – 306 \n network analysis , 328 – 329 \n practitioner , 333 \n steganography tools , 324 – 325 \n TRO and labor disputes , 312 – 325 \n creating images by software and \nhardware write blockers , 313 – 314 \n divorce , 313 \n fi le carving , 318 – 320 \n fi le system analyses , 314 – 315 \n live capture of relevant fi les , 314 \n NTFS , 315 \n password recovery , 317 – 318 \n patent infringement , 313 \n RAID , 314 \n role of forensic examiner in \ninvestigations and fi le recovery , \n 315 – 317 \n timestamps , 320 – 324 \n Computer Fraud and Abuse Act (CFAA) , \n 264 – 267 \n" }, 
{ "page_number": 854, "text": "Index\n821\n Computer Policy Guide , 777 \n Computer Professionals for Social \nResponsibility (CPSR) , 385 \n Computer programming, applied computer \nforensics , 331 \n Confi dentiality, of information , 256 \n Confi guration, wireless remote access points \n(AP) , 796 \n Congruence , 400 \n defi ned , 401 \n Connections , 97 \n Consumer n-correspondence , 498 \n Contactless smart cards, ISO standards for , \n 207 \n Content-based image fi ltering (CBIF) , \n 727 – 728 \n Content blocking methods \n banned word lists , 726 \n Bayesian fi lters , 727 \n category blocking , 726 – 727 \n content-based image fi ltering (CBIF) , \n 727 – 728 \n safe search integration to search engines \nwith content labeling , 727 \n URL block method , 726 \n Content fi ltering , 723 \n bandwidth usage in , 742 \n commercial business , 725 \n fi nancial organizations , 725 \n healthcare organizations , 725 \n on IM , 463 \n libraries , 725 – 726 \n other Governments , 725 \n parents , 726 \n performance issues , 742 \n problems with , 723 – 724 \n reporting in , 742 \n scalability and usability , 741 – 742 \n in schools , 725 \n U.S. Government , 725 \n Content-fi ltering control \n categories , 732 – 734 \n issues and problems with , 737 – 743 \n legal issues , 735 – 737 \n related products , 743 \n Content-fi ltering control, techniques and \ntechnology for \n Internet gateway-based products/unifi ed \nthreat appliances , 728 – 732 \n Content-fi ltering systems, precision and recall \nin , 742 – 743 \n Content labeling, safe search integration to \nsearch engines with , 727 \n Content monitoring and fi ltering (CMF) , 748 \n Context , 285 \n Conversation eavesdropping , 555 – 556 \n Cooke, E. , 120 \n Coppersmith, Don , 37 \n CORBA . See Common Object Resource \nBroker Architecture (CORBA) \n Core impact , 389 \n Core network , 184 \n organization , 185 \n security in , 187 – 188 \n service nodes , 185 \n Corporate intranets, Wi-Fi intrusions \nprevention in , 139 – 140 \n Corporate or facilities security , 629 \n Correcting mistakes, computer forensics , \n 336 – 337 \n cross-examination , 336 – 337 \n direct testimony , 336 \n CoSoSys SRL , 777 \n Counter (CTR) modes , 412 \n CounterMeasures (product), for risk \nmanagement , 619 \n Court system, computer forensics , 310 – 311 , \n 334 – 337 \n correcting mistakes , 336 – 337 \n defendants/plaintiffs/prosecutors , \n334 – 335 \n direct and cross-examination , 335 \n pretrial motions , 335 \n surrebuttal , 335 \n testifying , 335 – 336 \n CPO . See Chief privacy offi cer (CPO) \n CPS . See Certifi cate Practice Statement (CPS) \n CPSR . See Computer Professionals for Social \nResponsibility (CPSR) \n Crackers , 517 \n defi ne , 40 \n vs. hackers , 40 – 41 \n CRAMM . See CCTA Risk Analysis and \nManagement Methodology (CRAMM) \n Cranor, L. , 475 \n CRC algorithm . See Cyclic redundancy check \n(CRC) \n Credential Check, Inc. , 8 \n Credential Security Service Provider (Cred \nSSP) , 765 – 766 \n Cred SSP, Credential Security Service \nProvider (Cred SSP) \n Cremonini, M. , 482 \n Crimeware , 519 \n implication , 546 – 548 \n Crispo, B. , 214 \n Critical needs analysis. network forensics , \n 346 – 347 \n CRL . 
See Certifi cate Revocation List (CRL) \n CRLSign , 446 \n Cross-examination, computer forensics , 335 \n for testifying expert , 336 – 337 \n Cross-infrastructure cyber cascading attacks , \n 191 – 192 \n Cross-network services , 184 \n Crossover error rate (CER) , 651 \n Cross-Site Scripting (XSS) , 557 \n Crowd system , 485 \n Cryptanalysis of RSA \n discrete logarithm problem , 417 \n factorization attack , 416 – 417 \n Cryptogram , 24 \n Cryptographic algorithms, PKI \n digital signatures , 433 – 434 \n public key encryption , 434 – 435 \n Cryptographic hash functions , 419 – 420 \n Cryptographic keys , 640 \n Cryptographic protocols \n authentication , 398 \n confi dentiality , 398 \n integrity , 398 \n nonrepudiation , 398 \n Cryptographic techniques , 503 \n Cryptography , 106 , 301 , 397 – 398 \n assymetric , 516 \n classical , 399 – 402 \n congruence , 400 \n congruence relation defi ned , 401 \n Euclidean algorithm , 399 \n fundamental theorem of arithmetic , \n 400 – 401 \n inverses , 400 \n modular arithmetic , 399 – 400 \n residue class , 400 \n substitution cipher , 401 – 402 \n transposition cipher , 402 \n computer age , 36 – 38 \n data protection with , 238 \n defi nition of , 23 – 24 \n DES . See Data encryption standard (DES) \n devices \n Enigma , 24 – 25 \n Lorenz cipher, the , 24 \n mathematical prelude to , 398 – 399 \n complexity , 398 – 399 \n functions , 398 \n mapping , 398 \n probability , 398 \n modern \n one-time pad cipher , 32 – 33 \n Vernam cipher (stream cipher) , 31 – 32 \n process of , 24 \n RSA , 38 \n S-boxes , 37 \n statistical tests for , 34 \n" }, { "page_number": 855, "text": "Index\n822\n Cryptology , 24 \n CSAPP . See Canadian Standards Association \nPrivacy Principles (CSAPP) \n CSI/FBI Computer Crime and Security \nSurveys , 339 \n 2008 CSI/FBI Security Report , 231 \n CSMA/CA protocol , 170 \n CTR . See Counter (CTR) modes \n CTS packets . See Clear-to-send (CTS) \npackets \n Culnan, M. , 474 \n CVE . See Common vulnerabilities and \nexposures (CVE) \n CVSS . See Common vulnerability scoring \nsystem (CVSS) \n Cyclic group, defi ned , 405 \n Cyclic redundancy check (CRC) algorithm , \n 98 \n D \n Dagon, D. , 121 , 125 , 126 \n Damiani, E. , 482 \n DAP . See Directory Access Protocol (DAP) \n Data acquisition, computer forensics , 313 \n Data analysis, computer forensics , 308 – 310 \n database reconstruction , 310 \n ethics and green-home-plate-gallery-view , \n 309 – 310 \n Data architecture, TCP/IP , 298 – 300 \n Data at rest , 758 \n Database penetration test , 378 \n Database reconstruction, computer forensics , \n 310 \n Data capturing, computer forensics , 313 \n Data classifi cation, SAN , 571 – 572 \n Data encapsulation, TCP/IP , 298 – 300 \n DataEncipherment , 446 \n Data encryption . 
See also Encryption \n algebraic structure , 404 – 408 \n cryptography , 397 – 398 \n classical , 399 – 402 \n mathematical prelude to , 398 – 399 \n modern symmetric ciphers , 402 – 404 \n Data Encryption Standard (DES) , 36 , 516 \n implementation , 38 \n operational theory , 37 – 38 \n Data Execution Prevention (DEP) , 48 \n DataFort , 779 \n Datagram , 96 \n Data/information owners \n role in IT security management , 262 \n Data in motion , 757 \n Data integrity field (DIF) , 570 \n Data in use , 758 – 760 \n Data loss prevention (DLP) \n application , 756 – 757 \n data at rest , 758 \n data in motion , 757 \n data in use , 758 – 760 \n defined , 748 \n and IM , 464 \n issues addressed by , 755 – 756 \n precision versus recall , 756 \n precursors of , 747 – 748 \n starting , 753 – 754 \n Data Privacy Legislation and Standards, current , 748 – 752 \n Data Protection Directive , 488 \n Data storage subsystem , 649 \n Data stores , 748 \n Data transport, OpenID , 282 \n DDoS attack . See Distributed denial-of-service (DDoS) \n Dean, D. , 343 \n De Cannière, Christophe , 515 \n De Capitani di Vimercati, S. , 482 \n DecipherOnly , 446 \n Decision subsystem , 649 – 652 \n Decru Dataform Security Appliances , 779 \n Defendants, computer forensics , 334 – 335 \n Defense in depth , 58 – 59 , 388 \n mind map , 23 \n principle of , 233 \n SAN security , 571 , 599 \n Delegated Path Discovery (DPD) , 442 \n Delegated Path Validation (DPV) , 442 \n DeleGate proxy , 126 , 127 \n Delio, M. , 123 \n Delta Risk LLC , 777 \n Demilitarized zones (DMZ) , 60 , 149 , 153 , 160 , 357 \n Demonstrative evidence, eDiscovery , 311 \n Denial-of-service (DoS) attacks , 8 , 42 , 56 , 102 , 151 , 188 , 361 , 681 – 683 , 694 . See also Distributed denial-of-service (DDoS) attacks \n information security and , 230 \n load-based , 554 \n malformed request , 554 \n penetration test , 377 \n RFID and , 210 \n DEP . See Data Execution Prevention (DEP) \n Department of Homeland Security (DHS) , 775 \n subcomponents , 669 \n Dependency, defined , 185 \n Deployment options, for encryption , 582 – 588 \n application level , 582 – 583 \n device level , 586 – 588 \n host level , 584 – 585 \n DER . See Determined Encoding Rules (DER) \n DES . See Data Encryption Standard (DES) \n Destination-sequenced distance vector (DSDV) routing , 175 \n Determined Encoding Rules (DER) , 444 \n Deterministic Packet Marking (DPM) scheme , 343 \n Developing countries response to IW , 689 . See also Information warfare (IW) \n Device level, encryption , 586 – 588 \n array-level , 586 – 587 \n tape encryption , 587 – 588 \n De Vigenère, Blaise , 29 \n tableau of , 30 \n DeWitt, D. J. , 479 \n DHCP . See Dynamic Host Configuration Protocol (DHCP) \n DIF . See Data integrity field (DIF) \n Differential cryptanalysis , 37 \n Diffie-Hellman algorithm , 117 , 179 \n problem , 418 \n Diffusion , 402 \n Digital forensics . 
See Computer forensics \n Digital identity \n connected vertexes by , 271 \n defi ned , 270 \n management , 271 \n Digital signature , 433 – 434 \n for message authentication , 420 \n Digital signature transponder , 215 \n Digital video recording (DVR) , 145 \n Direct examination, computer forensics , \n335 \n Directory Access Protocol (DAP) , 274 \n Directory servers , 740 \n Direct testimony, computer forensics , 336 \n Disaster recovery (DR) , 143 – 145 , 699 – 700 \n Disaster Recovery Institute International \n(DRII) , 263 \n Disaster recovery planning (DRP) , 235 – 236 \n Discrete exponentiation , 417 \n Discrete logarithm , 417 \n Discretionary Access Control (DAC) , 238 \n Disposal of media, IT security management , \n 261 \n Disposal of printed matter, IT security \nmanagement , 262 \n Distributed denial-of-service (DDoS) attacks , \n 119 , 124 , 151 , 678 , 683 – 684 , 694 \n information security and , 230 \n Distributions, Linux , 80 , 82 , 83 \n Dittrich, David , 57 \n Divorce, computer forensics in , 313 \n DIY vulnerability assessment , 393 \n" }, { "page_number": 856, "text": "Index\n823\n DLP . See Data loss prevention (DLP) \n DLP Applications , 756 \n a case study , 757 \n vendors in , 762 \n DMZ . See Demilitarized zones (DMZ) \n DNS . See Domain Name Service (DNS) \n DNSBL . See DNS blackhole lists (DNSBL) \n DNS blackhole lists (DNSBL) , 125 \n Documentary evidence, eDiscovery , 311 \n Document Inspector, Microsoft Offi ce 2007 , \n 15 \n Document management systems , 748 \n Doddington’s Zoo , 656 \n Dodge, Adam , 472 \n Dolev-Yao adversary model \n delay and rushing , 104 \n eavesdropping , 101 – 102 \n forgeries , 102 – 103 \n message deletion , 105 \n reorder , 104 – 105 \n replay , 103 – 104 \n Domain names , 99 \n Domain Name Service (DNS) , 99 , 160 , 363 , \n 699 \n DoS . See Denial-of-service (DoS) attacks \n DOS rules fi le , 155 \n Dotted quad , 298 \n Douceur’s Sybil attack , 708 \n Downlink encryption , 429 – 430 \n DPD . See Delegated Path Discovery (DPD) \n DPM . See Deterministic Packet Marking \n(DPM) scheme \n DPV . See Delegated Path Validation (DPV) \n DR . See Disaster recovery (DR) \n Dreyer, L. C. J. , 503 \n DRII . See Disaster Recovery Institute \nInternational (DRII) \n Drive-by download , 58 \n DropSend , 4 \n DSDV routing . See Destination-sequenced \ndistance vector (DSDV) routing \n DSR . See Dynamic source routing protocol \n(DSR) \n DSS PCI Compliance , 384 \n Dual-channel authentication , 289 \n Dual-homed host , 358 \n Duckham, M. , 482 \n Dust , 632 – 633 \n DVR . See Digital video recording (DVR) \n Dynamic Host Confi guration Protocol \n(DHCP) , 98 – 99 , 363 \n DynamicPolicy-Effi cient Policy Management , \n 777 \n Dynamic source routing protocol (DSR) , \n176 \n E \n E. I. Du Pont De Nemours and Company , \n 745 – 746 \n EAP . See Extensible Authentication Protocol \n(EAP) \n Earth orbit , 430 \n Eavesdropping , 101 – 102 \n defending against , 106 \n controlling output , 108 \n keys independence , 107 – 108 \n key size , 108 \n operating mode , 108 – 110 \n VoIP , 555 – 556 , 558 – 559 \n eBay , 708 , 711 – 713 \n reputation service categories , 712 – 713 \n EBCD . See Emergency Boot CD (EBCD) \n EBIOS . See Expression des Besoins et \nIdentifi cation des Objectifs de S é curit é \n(EBIOS) \n ECAT . See Cellular network vulnerability \nassessment toolkit for evaluation (eCAT) \n ECB . See Electronic codebook (ECB) \n ECC . 
See Elliptic curve cryptography (ECC) \n EC-council LPT methodology \n application penetration test , 378 \n cell phone penetration test , 377 – 378 \n database penetration test , 378 \n denial-of-service penetration test , 377 \n external penetration test , 377 \n fi rewall penetration test , 377 \n IDS penetration test , 377 \n information gathering , 376 – 377 \n internal network penetration test , 377 \n password-cracking penetration test , 377 \n PDA penetration test , 377 – 378 \n physical security penetration test , 378 \n router penetration test , 377 \n social engineering penetration test , 377 \n stolen laptop penetration test , 377 – 378 \n VoIP penetration test , 378 \n VPN penetration test , 378 \n vulnerability analysis , 377 \n wireless network penetration test , 377 \n Echelon system , 686 \n Edge systems , 516 \n EDI . See Electronic Document Interchange \n(EDI) \n EDiscovery, preserving digital evidence , 311 \n Educational Security Incidents (ESI) Year in \nReview – 2007 , 472 , 473 \n Education/certifi cation, in applied computer \nforensics , 330 \n Egelman, S. , 475 \n E-Government Act of 2002 (PL 107-347) , \n 666 – 667 \n Electricity Sector Information Sharing and \nAnalysis Center (ESISAC) , 675 \n Electromagnetic interference (EMI) as threat , \n 634 \n Electronically stored information (ESI) , 311 \n Electronic article surveillance (EAS) tag , \n 206 , 208 \n Electronic Code Book (ECB) , 109 , 412 \n Electronic Communications and Transactions \n(ECT) Act , 689 \n Electronic Communications Privacy Act \n(ECPA) , 735 \n Electronic Document Interchange (EDI) , 446 \n Elliptic curve \n cryptosystems , 417 – 419 \n Diffi e-Hellman algorithm , 419 \n security , 419 \n Elliptic curve cryptography (ECC) , 766 \n cipher suites , 767 – 768 \n Ellis, Scott R. , 329 \n Email \n computer forensics , 322 – 324 \n extraction, hacking XP password , 327 \n and identity theft \n authentic message , 525 – 527 , 528 , 529 \n authentic payment notifi cation , \n 522 – 523 \n narrative attacks , 548 \n phishing message , 525 , 527 \n securing , 238 – 239 \n security risk , 257 \n use of , 753 – 754 \n Emergency Boot CD (EBCD) , 326 \n EnCase , 309 \n EncipherOnly , 446 \n Encryption , 24 , 56 , 106 , 130 , 138 – 139 , 516 \n Cipher Block Chaining mode , 108 – 109 \n defi ned , 301 \n SAN , 600 – 601 \n algorithms , 578 – 579 \n confi guration management , 580 \n deployments , 582 – 588 \n key management , 579 \n modeling threats , 580 – 581 \n process , 577 – 578 \n risk assessment , 580 \n specifi c use cases , 581 – 582 \n use consideration , 582 \n End-of-fi le (EOF) , 319 \n End-to-end identity, with SBC , 563 – 664 \n End-to-End Security (EndSec) protocol , 188 \n End-user license agreement (EULA) , 57 \n End users \n and information security , 256 \n role in protecting information assets , 262 \n" }, { "page_number": 857, "text": "Index\n824\n Engels, D. 
, 216 \n Enhanced Border Security and Visa Entry \nReform Act of 2002 (PL 107-173) , \n 663 – 664 \n Enigma, cryptographic devices , 24 – 25 \n Enrollment , 652 – 653 \n Ensor network, wireless \n bootstrapping in , 178 \n SPINS, security protocols , 173 \n μ TESLA , 174 – 175 \n SNEP , 174 \n Enterprise instant messaging , 461 – 462 \n Enterprise Privacy Authorization Language \n(EPAL) , 477 \n Enterprise@Risk: 2007 Privacy & Data \nProtection , 472 , 473 \n Environmental conditions and data capture \nsubsystem , 648 \n Environmental threats to physical security \nprevention and mitigation measures \n fi re and smoke , 634 – 635 \n inappropriate temperature and humidity , \n 634 \n other environmental threats , 635 \n water damage , 635 \n Environmental threats to service of \ninformation systems and data \n chemical, radiological, and biological \nhazards , 632 \n dust , 632 – 633 \n fi re and smoke , 632 \n inappropriate temperature and humidity , \n 631 – 632 \n infestation , 633 \n water damage , 632 \n EOF . See End-of-fi le (EOF) \n EPAL . See Enterprise Privacy Authorization \nLanguage (EPAL) \n EPCglobal , 207 , 213 \n EPC standard , 207 – 208 \n Equal error rate (EER) , 651 \n E-Rate Program , 735 \n ERR message . See Error state by using an \nerror (ERR) message \n Error rates , 655 – 656 \n Error state by using an error (ERR) message , \n 177 \n ESET NOD32 Antivirus , 781 \n ESI . See Electronically stored information \n(ESI) \n ESSID . See Extended service set identifi er \n(ESSI) \n Ethereal , 166 \n Ethernet , 354 \n EtherSnoop light , 167 \n E th roots problem , 417 \n Etoh, H. , 344 \n Euclidean algorithm , 399 \n EULA . See End-user license agreement \n(EULA) \n Evers, J. , 120 \n Exec() , 69 \n Executive management, in protecting \ninformation assets , 262 \n Expert testifying, computer forensics , \n 332 – 334 \n certainty without doubt , 334 \n degrees of certainty , 332 – 334 \n Expert testimony, computer forensics , \n 335 – 336 \n Exploit attacks, intrusions , 54 – 55 \n Exponential key exchange , 516 \n Expression des Besoins et Identifi cation des \nObjectifs de S é curit é (EBIOS) , 617 \n Extended copy protection (XCP) , 57 \n Extended Key Usage, in SCVP , 442 \n Extended service set identifi er (ESSID) , 139 \n EXtensible Access Control Markup Language \n(XACML) , 477 \n Extensible Authentication Protocol (EAP) , \n 173 \n workfl ow of , 140 \n Extensible Resource Description Sequence . \n See XRDS (Extensible Resource \nDescription Sequence) \n EXtensible Resource Identifi er . See XRI \n(EXtensible Resource Identifi er) \n External penetration test , 377 \n Extranet , 139 \n Extraplanetary link encryption , 428 – 429 \n Extrusion prevention system (EPS) , 748 \n F \n Fabric, SAN switches , 593 \n Factorization attack , 417 – 418 \n Failure to enroll rate (FER) , 653 \n Fair Information Practices , 503 \n False acceptance , 650 \n False match , 650 \n False nonmatch rate (FNMR) , 650 \n False rejection , 650 \n FastSLAM algorithm , 210 \n FAT . See File Allocation Table (FAT) \n FBI . See Federal Bureau of Investigation \n(FBI) \n FCAP . See Fibre-Channel Authentication \nProtocol (FCAP) \n FCPAP . See Fibre-Channel Password \nAuthentication Protocol (FCPAP) \n FCS . See Frame check sequence (FCS) \n FC-SP . See Fibre-Channel Security Protocol \n(FC-SP) \n Feamster, N. 
, 125 \n Federal Bureau of Investigation (FBI) , 57 \n Federal Information Security Management Act (FISMA) , 248 , 255 – 256 \n IT security management , 259 – 260 \n security management system , 255 – 256 \n specifications in , 259 \n Federal Trade Commission , 473 \n Federated identity management , 277 – 278 \n standards , 277 \n vs. centralized management solution , 275 – 276 \n Feistel cipher , 404 \n Feistel function , 37 \n Feng, J. , 503 \n Fermat’s Little theorem , 413 – 414 \n Fibre-Channel Authentication Protocol (FCAP) , 570 \n Fibre Channel over TCP/IP (FCIP) , 594 \n Fibre-Channel Password Authentication Protocol (FCPAP) , 570 \n Fibre Channel Protocol (FCP) , 594 \n Fibre-Channel Security Protocol (FC-SP) , 570 \n Fibre Channel Storage (FCS) , 594 \n Field, defined , 405 \n File Allocation Table (FAT) , 314 – 315 \n File carving , 318 – 320 \n end-of-file (EOF) , 319 \n GREP , 320 \n File recovery \n forensic examiner role in , 315 – 317 \n hacking XP password , 327 \n FilesAnywhere , 4 \n File system analyses, computer forensics , 314 – 315 \n File Transfer Protocol (FTP) , 359 \n Financial organizations and content filtering , 725 \n Financial Services Modernization Act of 1999 , 263 \n Finite fields , 405 – 406 \n Finite groups, defined , 404 \n FIPS 201-1 , 640 \n system model , 641 – 642 \n Fire damage , 632 \n Firewalls , 60 – 61 , 141 , 158 – 159 , 240 , 747 \n administrative service protocols , 361 – 363 \n application layer , 241 \n application-layer , 163 \n configuration , 358 – 359 \n defined , 349 \n design , 161 \n hardware implementations , 355 \n highly available , 366 – 367 \n host , 355 \n host-based , 239 \n installation , 358 – 359 \n interconnection of , 366 – 367 \n internal IP services protection , 363 – 364 \n intrusion , 44 , 47 \n load balancing arrays , 365 – 366 \n management , 367 \n network , 349 – 350 , 355 \n for video application , 360 – 361 \n for voice , 360 – 361 \n network topology , 356 – 358 \n NIDS complements , 163 \n packet filtering , 162 – 163 \n penetration test , 377 \n placement , 356 – 358 \n policy optimization , 352 – 353 \n remote access configuration , 364 \n secure external services provisioning , 360 \n security policies , 350 – 351 \n security policy , 159 \n selection , 355 – 356 \n sf router, configuration script for , 160 \n simple mathematical model , 351 – 352 \n software implementations , 355 \n stateful inspection , 163 \n supporting outgoing services , 359 – 360 \n types \n application layer , 354 – 355 \n packet filter , 354 \n stateful packet , 354 \n Firewire , 386 \n FIRM . See Fundamental Information Risk Management (FIRM) \n First-match firewall policy anomalies , 352 \n First principles, computer forensics , 325 \n FISMA . See Federal Information Security Management Act (FISMA) \n Fitzpatrick, Brad , 281 \n Fleissig, Adrian , 34 \n Floerkemeier, C. , 214 \n Fluhrer-Mantin-Shamir (FMS) attack , 246 \n FlyTrap , 345 \n Folder Lock , 782 \n Fong, M. , 125 \n FoolProof Security , 777 \n Forensics \n computer . See Computer forensics \n network \n attack traceback and attribution , 341 – 346 \n critical needs analysis , 346 \n critical needs analysis and , 346 \n principles of , 340 – 341 \n research directions for , 346 – 347 \n research directions in , 346 – 347 \n scientific overview , 339 – 340 \n Forking, in SIP , 552 \n HERFP , 560 – 561 \n Forristal, J. 
, 640 \n Fortinet , 728 – 729 \n Forwarding loops , 105 \n Forwarding table , 96 \n Foster, J. , 54 \n Fport , 20 \n Fragmentation , 56 \n Frame check sequence (FCS) , 96 \n Frames , 95 \n Frankel Bernard, Sheila , 243 – 245 \n Franklin, M. , 451 \n Free Culture , 471 \n Freedom network , 485 \n Freeware tools, for LAN , 166 – 167 \n Freiling, F. , 126 \n Frequency analysis , 27 \n Frequency hopping spread spectrum wireless \nnetwork security , 793 \n Froomkin, A. M. , 475 \n FTP . See File Transfer Protocol (FTP) \n Fundamental Information Risk Management \n(FIRM) , 617 \n Fundamental theorem of arithmetic , 400 – 401 \n G \n Galois fi eld GF (2 ) , 406 \n fi nite fi eld , 407 \n with generator element , 406 –407 \n Game theory, and instant messaging (IM) , \n 455 – 457 \n Gateway mesh router , 172 \n Gateway Mobile Switching Center (GMSC) , \n 185 , 186 , 191 , 196 \n GCHQ . See Government Communications \nHeadquarters (GCHQ) \n Gedik, B. , 481 \n General expression searching (GREP) , 320 \n Generational gaps, and IM , 456 – 457 \n Generic Security Service application \nprogramming interface (GSS-API) , 770 \n “ Get out of jail free ” card , 379 \n GFI LANguard , 389 \n GFI LANguard S.E.L.M. (HIDS) , 153 \n Ghinita, G. , 482 \n GID . See Group identifi er (GID) \n Gilder’s Law , 454 \n GLBA . See Gramm-Leach-Bliley Act of 1999 \n(GLBA) \n Global positioning system (GPS) , 425 \n Global system for mobile communication \n(GSM) , 170 \n Gmail , 5 , 11 – 12 \n Gnu Privacy Guard (GPG) , 449 \n Goebel, J. , 125 \n Goodrich, M. T. , 129 \n Google Hacking , 370 \n Google Web Accelerator , 737 – 738 \n The GORB , 719 \n Government Communications Headquarters \n(GCHQ) , 38 \n Governor’s Offi ce of Homeland Security \n(OHS) , 670 \n GPG . See Gnu Privacy Guard (GPG) \n Gramm-Leach-Bliley Act of 1999 (GLBA) , \n 3 , 263 \n Graphical user interface (GUI) , 20 \n Gray-box analysis , 126 , 371 \n Gray-hole attack , 105 \n Great Firewall of China , 725 \n Greenberg, A. , 120 \n Green-home-plate-gallery-view, computer \nforensics , 309 – 310 \n GREP . See General expression searching \n(GREP) \n Gresecurity (GRSec/PAX) , 90 \n Grizzard, J. , 121 \n Grossklags, J. , 475 \n Group, defi ned , 404 \n Group identifi er (GID) , 69 , 82 \n GRSec/PAX . See Gresecurity (GRSec/PAX) \n Grunwald, D. , 481 \n Gruteser, M. , 481 \n GSM . See Global system for mobile \ncommunication (GSM) \n Gspace , 5 , 11 – 12 \n GSS-API . See Generic Security Service \napplication programming interface \n(GSS-API) \n Gu, G. , 125 , 126 \n Guan, Y. , 344 \n GUI . See Graphical user interface (GUI) \n Gullutto, V. , 54 \n Gutmann, Peter , 450 \n H \n Hackers , 517 \n defi ne , 40 \n vs. crackers , 40 – 41 \n Hacking . See Information warfare (IW) \n Hacking XP password, computer forensics , \n 325 – 328 \n email , 327 \n" }, { "page_number": 859, "text": "Index\n826\n Hacking ( Continued ) \n internet history , 327 – 328 \n LM hashes and rainbow tables , 325 – 326 \n memory analysis and Trojan defense , 326 \n net user password hack , 325 \n password reset disk , 326 \n recovering lost and deleted fi les , 327 \n user artifact analysis , 326 – 327 \n Hackworth, A. , 120 \n Hailstorm , 386 \n Hann, Il-H. , 475 \n Hardening , 239 \n SAN devices , 598 \n Unix and Linux , 84 – 90 \n Hardware \n fi rewall implementations , 355 \n write blockers , 313 – 314 \n Harley, David , 56 \n Harmful to minors, defi ned , 736 \n Harris, Shon , 232 \n Harwood, Matthew , 775 \n Hasan, R. 
, 473 \n Hash, Joan , 248 \n Hash Message Authentication Code (HMAC) , \n 515 \n HBA . See Host Bus Adapter (HBA) \n HBF . See Hierarchical Bloom fi lter (HBF) \n He, Q. , 504 \n Healthcare organizations and content fi ltering , \n 725 \n Health Insurance Portability and Accountability \nAct (HIPAA) , 3 , 150 , 263 , 725 \n Herders , 42 \n Herzog, Pete , 375 \n Heuristic-based analysis , 166 \n Heuristics , 695 \n HIBE . See Hierarchical Identity-Based \nEncryption (HIBE) \n Hierarchical Bloom fi lter (HBF) , 344 \n Hierarchical Identity-Based Encryption \n(HIBE) , 451 \n Hifn 4300 HIPP III Storage Security \nProcessor , 779 \n Higgins Trust Framework , 285 – 286 \n High Technology Crime Investigation \nAssociation (HTCIA) , 10 \n HIPAA . See Health Insurance Portability and \nAccountability Act (HIPAA) \n HIPS . See Host-based intrusion prevention \nsystems (HIPS) \n HMAC . See Hash Message Authentication \nCode (HMAC) \n Hoefl in, D. , 125 \n Hoglund, Greg , 55 \n Holz, T. , 119 , 123 , 125 , 126 \n Homeland security \n E-Government Act of 2002 (PL 107-347) , \n 666 – 667 \n Enhanced Border Security and Visa Entry \nReform Act of 2002 (PL 107-173) , \n 663 – 664 \n USA PATRIOT Act of 2001 , 661 – 663 \n Homeland Security Act of 2002 (PL 107-\n296) , 665 – 666 \n Homeland Security Presidential Directives \n(HSPD) , 667 – 669 , 674 \n California Offi ce of Information Security \nand Privacy Protection (OISPP) , 670 \n Department of Homeland Security \nSubcomponents , 669 \n Governor’s Offi ce of Homeland Security \n(OHS) , 670 \n private sector organizations for information \nsharing , 670 – 671 \n State and Federal Organizations , 669 – 670 \n Home Location Register (HLR) , 185 , 186 \n Homogeneity attack, to k- anonymity , 480 \n Honeynet project , 385 \n Honeynets , 305 \n Honeypots , 62 – 63 , 305 \n Host-based intrusion prevention systems \n(HIPS) , 304 \n Host-based monitoring , 64 \n Host Bus Adapter (HBA) , 573 , 593 \n Host fi rewalls , 355 \n Host hardening \n access rights , 88 \n ACL , 89 \n administrative user accounts , 89 \n groups , 89 \n GRSec/PAX , 90 \n intrusion detection \n audit trails , 89 \n monitoring fi le changes , 89 \n SELinux , 90 \n Host level encryption , 584 – 585 \n Hotsite , 144 \n Hotspot Shield , 781 \n H.323 protocol , 551 \n HP StorageWorks Secure Fabric OS , 779 \n HP Web Inspect , 386 \n HTCIA . See High Technology Crime \nInvestigation Association (HTCIA) \n HTML, and instant messaging (IM) , 462 \n HTTP . See HyperText Transfer Protocol (HTTP) \n HTTP web-based proxies , 739 \n Hui, K. L. , 475 , 476 \n Human-caused physical threats to physical \nsecurity prevention and mitigation \nmeasures , 635 – 636 \n Human-caused threats to service of \ninformation systems and data \n misuse , 634 \n theft , 634 \n unauthorized physical access , 634 \n vandalism , 634 \n Humidity , 631 – 632 \n Hutton, M. , 81 \n Hypercube and Octopus (H & O) algorithm , \n 179 – 180 \n HyperText Transfer Protocol (HTTP) , 60 , \n 153 , 160 \n Hypervisor , 698 – 699 \n I \n IA . See Information assurance (IA) \n IAB . See Internet Architecture Board (IAB) \n Ianelli, N. , 120 \n IAS servers \n adding access points as RADIUS clients \nto , 795 \n adding access points to fi rst , 795 \n replicating RADIUS client confi guration \nto , 798 \n scripting addition of access point , \n795 – 796 \n IBE . See Identity-Based Encryption (IBE) \n IBM , 285 \n IC3 . See Internet Crime Complaint Center (IC3) \n ICMP . 
See Internet Control Message Protocol \n(ICMP) \n ICMP traceback (iTrace) , 342 \n Identifi cation , 654 \n Identity 2.0 , 278 – 286 \n evaluating technology , 287 \n Higgins trust framework , 285 – 286 \n ID-WSF , 280 – 281 \n InfoCard , 283 – 284 \n initiatives , 278 – 286 \n LID , 279 \n for mobile users , 286 – 292 \n mobile identity , 287 – 290 \n Mobile Web 2.0 , 286 – 287 \n OpenID 2.0 , 281 – 282 \n OpenID Stack , 282 \n SAML , 279 – 280 \n Shibboleth project , 280 \n SXIP 2.0 , 284 – 285 \n URL-based , 278 \n XRI/XDI , 279 \n Identity, model of , 270 \n Identity-Based Encryption (IBE) , 450 \n Identity Federation Framework (ID-FF) , 281 \n Identity management \n" }, { "page_number": 860, "text": "Index\n827\n authentication , 270 – 271 \n core facets of , 271 \n current technology requirements , 274 – 286 \n evolution of identity management , \n 274 – 278 \n Identity 2.0 , 278 – 286 \n metadirectories for , 276 \n model of identity , 270 \n overview , 270 – 272 \n requirements for , 269 – 274 \n digital identity , 270 \n privacy requirement , 272 \n usability requirement , 273 – 274 \n user-centricity , 272 – 273 \n research directions for , 292 \n Silo model for , 275 \n simple centralized model , 276 \n virtual directories for , 276 – 277 \n Identity management 1.0 , 274 – 275 \n Identity privacy, in location privacy , 481 \n Identity Provider (IdP) , 270 \n Identity Services Interface Specifi cation \n(ID-SIS) , 281 \n Identity theft \n crimewire , 519 \n implications , 546 – 548 \n experimental design , 520 – 522 \n analysis , 535 , 538 , 541 , 543 – 546 \n emails . See Emails, identity theft \n web pages . See Web pages \n overview , 519 – 520 \n reduction in , 271 \n Identity Web Services Framework (ID-WSF) , \n 280 – 281 \n ID-FF . See Identity Federation Framework \n(ID-FF) \n IDisk , 11 – 12 \n IdP . See Identity Provider (IdP) \n IdP-centric model , 273 \n IDS . See Intrusion detection system (IDS) \n ID-SIS . See Identity Services Interface \nSpecifi cation (ID-SIS) \n ID-WSF . See Identity Web Services \nFramework (ID-WSF) \n IEC . See International Electrotechnical \nCommission (IEC) \n IEEE . See Institute of Electrical and \nElectronic Engineers (IEEE) \n IETF . See Internet Engineering Task Force \n(IETF) \n IETF RFC 2440 , 449 \n IETF RFC 3511 , 355 \n IM . See Instant messaging (IM) \n Impersonation , 556 \n I-names , 282 \n Incident-handling process , 152 \n Incident response \n instructions, for end users , 91 \n Red Team/Blue Team exercises , 91 \n roles identifi cation , 91 \n security management systems , 258 \n Incident response (IR) plan , 233 – 235 \n Inetd . 
See Internet daemon (Inetd) \n Infection propagation (IP) rules , 199 \n Inferential statistics , 33 \n Infestation , 633 \n Infi nite groups, defi ned , 404 \n InfoCard , 283 – 284 \n Information \n assets , 630 \n gathering , 376 – 377 \n integrity , 146 – 147 \n security principles , 256 \n Information assurance (IA) , 388 \n Information leak detection and prevention \n(ILDP) , 748 \n Information Management Technologies , 777 \n Information ownership, IT security \nmanagement , 262 \n Information Risk Scorecard , 617 \n Information Security and IT Security Austria , \n 777 \n Information Security Forum , 260 \n Information Security Forum’s (ISF) Standard \nof Good Practice , 617 – 618 \n Information security management \n application security , 247 \n business communications security , \n 241 – 242 \n CISSP 10 domains of , 225 \n CBK , 226 \n common attacks , 228 \n botnets , 230 \n DoS and DDoS , 230 \n industrial espionage , 229 – 230 \n malware , 229 \n social engineering , 229 \n spam, phishing, and hoaxes , 230 \n data security \n access control models , 238 – 239 \n classifi cation models , 237 – 238 \n incidence response and forensic \ninvestigations , 251 \n mission-critical systems, protecting . See \n Mission-critical systems, protecting \n monitoring and effectiveness , 249 \n mechanisms , 250 – 251 \n physical security , 236 – 237 \n policies for , 247 – 248 \n scope of , 225 \n security breaches, impact of , 231 \n security effectiveness, validating , 251 – 252 \n SETA program , 248 – 249 \n standards , 259 – 260 \n systems and network security \n host-based , 239 – 240 \n intrusion detection , 240 \n prevention detection , 240 – 241 \n threats , 227 – 228 \n process , 229 \n web security , 246 – 247 \n wireless security , 242 \n access control , 243 \n availability , 244 \n confi dentiality , 243 – 244 \n controls, enhancing , 244 – 246 \n data integrity , 244 \n Information Security Management System \n27000 Family of Standards , 255 \n Information Shield, Inc. , 777 \n Information system auditor, in information \nsecurity policies and procedures , 262 \n Information system hardware , 629 \n Information Systems Audit and Control \nAssociation (ISACA) , 10 , 600 \n Information Systems Security Association \n(ISSA) , 10 , 600 \n Information technology (IT) , 9 \n personnel in building security controls , \n 262 – 263 \n security management \n required processes for , 263 – 267 \n rules and regulations for , 263 – 267 \n security policies , 261 – 263 \n security procedures , 261 – 263 \n Information warfare (IW) \n defi ned , 678 \n holistic view of , 689 – 690 \n legal aspects of , 686 – 689 \n model , 677 – 678 \n offensive strategies , 680 – 685 \n preventive strategies , 685 – 686 \n reality of , 678 – 680 \n Info World , 385 \n Infragard , 10 \n Infrastructure security , 629 \n Ingemarsson, Tang, and Wong (ING) \nalgorithm , 179 \n InhibitPolicyMapping , 442 \n Inoue, S. 
, 213 \n Insider threat, to information , 293 – 294 \n Instant messaging (IM) , 130 \n aggregators , 462 \n application , 461 – 462 \n asset management , 463 \n" }, { "page_number": 861, "text": "Index\n828\nInstant messaging (Continued)\n basic consideration , 453 \n built-in security , 461 \n and chat-monitoring services , 748 \n compliance , 464 \n content fi ltering on , 463 \n defensive strategies , 462 – 463 \n DLP , 464 \n features , 454 \n game theory and , 455 – 457 \n and HTML , 462 \n infrastructure for , 457 \n knowledge of business transactions and , \n 459 – 460 \n malicious threat , 458 – 459 \n man-in-the-middle attacks , 459 \n mobile technologies and , 462 \n overview , 453 – 454 \n process , 464 – 465 \n real-time transactions and , 457 \n regulatory concerns , 461 \n security policy , 463 – 464 \n SIEM , 464 \n social engineering , 459 \n technological evolution and , 454 – 455 \n unintentional threats , 460 – 461 \n vulnerabilities , 459 \n Institute of Electrical and Electronic \nEngineers (IEEE) , 79 , 243 , 511 – 512 \n 802.11 standard \n RSN, defi ned , 245 \n WEP , 172 \n wireless LANs , 170 \n WPA , 173 \n Intangible assets , 137 \n Integrated Services Digital Network (ISDN) , \n 507 – 508 \n Integrity, of information , 256 \n Integrity check value (ICV) , 244 \n Intelinet Spyware Remover , 782 \n Intention-Driven iTrace , 342 \n Interface , 93 \n IntermediateCerts , 442 \n Internal IP services protection , 363 – 364 \n Internal network penetration test , 377 \n International Electrotechnical Commission \n(IEC) , 255 , 260 , 512 \n International law \n liability under , 686 – 687 \n remedies under , 687 – 689 \n International Organization for Standardization \n(ISO) , 255 , 260 , 512 \n best-practice areas of , 261 \n for contactless smart cards , 207 \n for RFID systems , 207 \n International Telecommunications Union \n(ITU) , 184 , 361 \n International Telecommunications Union \nTelecommunications Standardization \nSector (ITU-T) , 436 \n Internet \n architecture of , 93 \n bridging , 95 \n communication, architecture of \n primitives , 94 \n communication layers \n MAC , 95 – 96 \n network , 96 – 97 \n PHY , 95 \n sockets , 98 \n transport , 97 – 98 \n computer forensics , 312 \n connectivity , 188 \n core network and , 184 \n defense against attacks , 105 \n layer session defense . See Session-\noriented defenses \n session startup defenses . See Session \nestablishment \n hacking XP password and , 327 – 328 \n IP addresses , 96 \n links . 
See Links, communication \n network topology , 96 \n routers , 96 \n threat model , 100 \n Dolev-Yao adversary model , 101 – 105 \n use, security risk , 258 \n Internet Age , 150 \n Internet Architecture Board (IAB) , 260 \n Internet Cloud-Based Solutions , 731 – 732 \n Internet Control Message Protocol (ICMP) , \n 99 , 362 \n Internet Crime Complaint Center (IC3) , 347 \n Internet daemon (inetd) , 85 \n Internet Engineering Task Force (IETF) , 260 , \n 511 , 512 \n Internet FraudWatch , 347 \n Internet gateway-based products/unifi ed threat \nappliances , 728 – 732 \n Internet protocol (IP) , 96 \n Internet Protocol Security (IPSec) , 187 \n Internet Relay Chat (IRC) , 55 , 294 \n Internet Security Protocol (IPsec) , 109 , 113 , \n 360 , 512 \n issues with , 509 – 510 \n Internet service providers (ISP) , 97 , 123 , 725 \n and content fi ltering , 725 \n Interoperability, SXIP 2.0 , 284 \n Intranet security , 133 – 135 \n access control, NAC and , 136 – 137 \n audits , 137 – 138 \n authentication and encryption , 138 – 139 \n change management , 142 – 143 \n DR , 143 – 145 \n information and system integrity , 146 – 147 \n network protection , 141 – 142 \n personnel security , 146 \n physical and environmental protection , \n 145 – 146 \n risk assessment , 148 \n security assessment , 147 – 148 \n user training , 142 \n wireless security , 139 – 140 \n Intrusion , 39 – 40 , 294 – 297 \n abuse of privileges , 293 – 294 \n bots , 42 \n crackers and , 40 – 41 \n defense-in-depth strategy , 58 – 59 \n defense structure against , 43 \n network requirements , 44 – 45 \n network security best practices , 45 \n detection , 63 \n behavior-based , 65 \n signature-based , 64 – 65 \n directed attacks , 53 – 56 \n fi rewalls and , 44 , 47 \n hackers and , 40 – 41 \n malicious software , 56 – 58 \n monitoring , 63 \n host-based , 64 \n IPS , 65 \n traffi c , 64 \n motive , 41 \n physical theft , 293 \n prevention tools \n ACS , 48 – 49 \n AF , 48 \n fi rewalls , 47 \n IPS , 47 – 48 \n UTM , 49 \n preventive measures \n access control , 59 \n antispyware software , 61 – 62 \n antivirus software , 61 – 62 \n closing ports , 60 \n fi rewalls , 60 – 61 \n honeypots , 62 – 63 \n NAC , 63 \n patching , 60 \n penetration testing , 60 \n spam fi ltering , 62 \n vulnerability testing , 59 – 60 \n reactive measures \n quarantine , 65 – 66 \n traceback , 66 \n" }, { "page_number": 862, "text": "Index\n829\n reconnaissance techniques . See \n Reconnaissance techniques \n risk analysis . See Risk analysis \n security policies to avoid , 45 – 46 \n statistics , 41 \n survey of detection and prevention \ntechnologies , 300 – 301 \n symptoms , 43 \n TCP/IP . See Transmission Control \nProtocol/internet Protocol (TCP/IP) \n tools , 41 – 42 \n Intrusion detection system (IDS) , 17 , 44 , 56 , \n 88 , 121 , 141 , 240 , 293 , 355 , 596 , 697 \n anti-malware software , 301 – 302 \n behavior-based , 65 \n data capturing and storing , 164 \n defi ned , 153 \n digital forensics , 305 – 306 \n host-based , 64 \n network-based , 64 , 65 , 154 , 156 , 302 – 303 \n features of , 156 – 158 \n penetration test , 377 \n session data analysis , 304 – 305 \n signature-based , 64 – 65 \n survey , 300 – 301 \n TCP/IP . 
See Transmission Control \nProtocol/Internet Protocol (TCP/IP) \n Intrusion prevention system (IPS) , 47 – 48 , 65 , \n 88 , 141 , 293 , 355 , 559 – 560 , 747 \n fi rewalls , 240 \n application layer , 241 \n host-based , 304 \n network-based , 303 – 304 \n packet fi ltering , 240 – 241 \n proxies , 241 \n SIM , 304 \n survey , 300 – 301 \n Intrusion protection systems (IPS) , 697 \n Inverses , 400 \n INVITE , 560 \n IP . See Internet protocol (IP) \n “ Ip access-group 101 in ” , 162 \n IP addresses , 96 \n IP address spoofi ng , 157 \n IP header , 97 \n IPS . See Intrusion prevention system (IPS) \n IPsec . See Internet Security Protocol (IPsec) \n IP traceback , 341 – 344 \n active probing , 342 \n iTrace , 342 \n log-based traceback , 343 – 344 \n packet marking , 342 – 343 \n IRC . See Internet Relay Chat (IRC) \n IRC bots , 230 \n Irwin, K. , 504 \n ISACA . See Information Systems Audit and \nControl Association (ISACA) \n ISCSI , 594 \n ISDN . See Integrated Services Digital \nNetwork (ISDN) \n ISDN User part (ISUP) , 185 \n ISMM framework , 226 – 227 \n ISO . See International Organization for \nStandardization (ISO) \n ISO/IEC 27000 Family of Standards . See \n Information Security Management \nSystem 27000 Family of Standards \n ISO 17799:2005 security model , 226 – 227 \n ISP . See Internet service provider (ISP) \n ISP-Based Solutions , 731 \n ISSA . See Information Systems Security \nAssociation (ISSA) \n IT . See Information technology (IT) \n IT-Grundschutz , 618 \n ITrace . See ICMP traceback (iTrace) \n IT Security Essentials Guide , 777 \n IT security governance planning , 263 \n IT security management . See Information \ntechnology security management \n ITU . See International Telecommunications \nUnion (ITU) \n ITU-T . See International Telecommunications \nUnion Telecommunications \nStandardization Sector (ITU-T) \n IW . See Information warfare (IW) \n J \n Jahanian, F. , 120 \n Jain, N. , 504 \n Jajodia, S. , 130 \n Jamming , 102 \n Java-based WLIrc , 131 \n Jensen, C. , 504 \n JmIrc , 131 \n Job description, applied computer forensics , \n 329 – 330 \n management , 330 \n Job satisfaction, and IM , 456 \n Jones, C. E. , 130 \n Joy rider attacker , 296 \n JTC 1/SC 37 technical committee , 647 \n Juels, A. , 214 \n K \n Kalnis, P. , 482 \n Kang, B. , 121 \n K- anonymity, privacy , 479 – 480 \n background knowledge attack to , 480 \n homogeneity attack to , 480 \n Karasaridis, A. , 125 \n Karat, C.-M. , 503 \n Karat, J. , 503 \n Karlin, A. , 129 \n Kasiski/Kerckhoff method , 30 – 31 \n Kaspersky Anti-Virus , 781 – 782 \n Kaspersky Anti-Virus Defi nition Complete \nUpdate , 782 \n Kaspersky Internet Security , 782 \n Kasten Chase Assurency , 779 \n Kaufman, C. , 450 \n Kent, S. T. , 130 \n Kerberos authentication protocol , 766 \n Kerberos enhancements , 769 – 770 \n AES , 769 – 770 \n RODC , 770 \n Kerckhoff, Auguste , 30 \n Kernel-level root kit , 295 \n Kernel space, vs. userland , 68 \n KeyAgreement , 446 \n KeyCertSign , 446 \n KeyEncipherment , 446 \n Key identifi er , 107 \n Keyloggers , 57 \n KeyScrambler Personal , 782 \n Key stream , 107 \n Keystroke loggers , 42 \n KeyUsage , 442 \n Kiwi Syslog Daemon , 17 \n Klaus, Christopher , 389 \n Know Your Enemy: Honeynets , 385 \n Koerner, B. I. , 123 \n Koning, Ralph , 122 \n K ö tter, M. , 123 \n Krause, Micki , 226 \n Kroll Ontrack , 310 \n Kulik, L. , 482 \n Kutz, G. 
, 54 \n L \n Labor disputes, computer forensics , 312 – 325 \n creating images by software and hardware \nwrite blockers , 313 – 314 \n divorce , 313 \n fi le carving , 318 – 320 \n fi le system analyses , 314 – 315 \n live capture of relevant fi les , 314 \n NTFS , 315 \n password recovery , 317 – 318 \n patent infringement , 313 \n RAID , 314 \n role of forensic examiner in investigations \nand fi le recovery , 315 \n timestamps , 320 – 324 \n" }, { "page_number": 863, "text": "Index\n830\n Lagrange’s theorem , 414 \n Lampson, Butler , 448 \n LAN . See Local area network (LAN) \n Language support, content-fi ltering gateway \nand , 741 \n Lanman (LM) hashes, hacking XP password , \n 325 – 326 \n Laptops, security risk , 258 \n Latent fi ngerprints , 655 \n Law enforcement agencies and biometrics , \n 646 \n Law enforcement deployment teams \n(LEDTs) , 775 \n Layer 2 Forwarding Protocol (L2F) , 513 \n Layers, communication \n MAC , 95 – 96 \n network , 96 – 97 \n PHY , 95 \n sockets , 98 \n transport , 97 – 98 \n Layer 2 Tunneling Protocol (L2TP) , 512 – 513 \n Layout, of network , 45 \n Layton Technology Inc. , 778 \n Leased lines , 509 \n Lee, T. S. , 475 \n Lee, W. , 125 , 126 \n LEF . See Logical evidence fi le (LEF) \n LeFevre, K. , 479 \n Legal issues in content fi ltering , 735 – 737 \n Lemos, R. , 121 \n Lessig, L. , 471 \n Li, J. , 130 , 344 \n Li, L. , 130 \n Liability issues, penetration tests , 378 – 379 \n Liberty Alliance’s work , 280 – 281 \n circle of trust , 277 \n Libraries and content fi ltering , 725 – 726 \n LID . See Light-Weight Identity (LID) \n Light-Weight Identity (LID) , 279 \n LinkedIn.com , 716 , 717 – 718 \n Links, communication \n physical , 95 \n virtual , 95 \n Link state update (LSU) packet , 177 \n Linksys , 154 , 159 , 160 , 161 \n Linux \n as community , 80 \n distributions , 80 , 82 , 83 \n hardening \n host . See Host hardening \n kernel , 80 \n Open Source movement and , 80 \n proactive defense for \n incident response preparation . See \n Incident response \n vulnerability assessment . See \n Vulnerability, assessment \n standard base specifi cations , 82 \n Unix and , 80 \n Linux Journal , 385 \n Linux Rootkit (LRK) , 295 \n Litchfi eld, D. , 55 \n Liu, L. , 481 \n Live capture, of relevant fi les , 314 \n Lloyd, L. 
, 121 \n Load balancing \n advantages , 366 \n defi ned , 365 \n disadvantages , 366 \n interconnection of , 366 – 367 \n operation , 366 \n procedure for , 365 – 366 \n in real life , 365 \n Load-based DoS , 554 – 555 \n call data fl oods , 554 – 555 \n control packet fl oods , 554 – 555 \n distributed , 555 \n Local area network (LAN ) , 354 \n access list details , 162 \n demilitarized zone (DMZ) , 149 , 153 , 160 \n DOS rules fi le , 155 \n fi rewalls , 158 – 159 \n application-layer , 163 \n design , 161 \n NIDS complements , 163 \n packet fi ltering , 162 – 163 \n security policy , 159 \n sf router, confi guration script for , 160 \n stateful inspection , 163 \n types of , 162 \n freeware tools , 166 – 167 \n IDS, defi ned , 153 \n incident-handling process , 152 \n levels of trust , 149 \n NAT confi guration , 160 \n network access controls \n design through , 152 – 153 \n establishment , 150 \n network-based IDS (NIDS) , 154 \n features of , 156 – 158 \n objectives , 149 \n policies for , 151 – 152 \n resources , 151 \n risk assessment , 151 \n signature algorithms \n anomaly-based analysis , 166 – 167 \n heuristic-based analysis , 166 \n pattern matching , 164 – 165 \n protocol decode-based analysis , \n 165 – 166 \n stateful pattern matching , 165 \n signature analysis , 164 \n statistical analysis , 164 \n system activities, monitor and analyze , \n 163 – 164 \n TCP SYN (half-open) scanning , 155 – 156 \n threats, identifi cation , 151 \n disruptive type , 150 \n unauthorized access type , 150 \n UDP attacks , 154 \n 802.11 wireless LANs , 170 – 171 \n Local Scans \n pros and cons of , 387 \n vs. central scans , 387 – 388 \n Local Security Authority (LSA) , 772 \n Location privacy \n adversaries , 480 \n categories , 481 \n concept , 480 \n Log-based traceback , 243 – 244 \n Logging, SAN , 601 – 603 \n log reports , 603 \n Logical evidence fi le (LEF) , 314 \n Logical security , 629 \n Logical vulnerabilities , 369 \n Login page , 528 – 535 \n bogus URL , 532 , 534 \n content alignment , 529 , 531 – 532 \n security , 532 , 535 , 536 – 538 , 539 \n LogMeIn Rescue , 511 \n Long, Johnny , 370 \n Lorenz cipher, The , 24 \n Love-bug , 684 \n LRK . See Linux Rootkit (LRK) \n LSA . See Local Security Authority (LSA) \n Lsat , 91 \n LSU packet . See Link state update (LSU) \npacket \n L2TP . See Layer 2 Tunneling Protocol (L2TP) \n Lu, H. J. , 82 \n Lundqvist, A. , 83 \n Lynis , 91 \n Lyon, Gordon , 385 \n M \n MAC . See Medium Access Control (MAC); \nMessage authentication code (MAC) \n Machanavajjhala, A. , 479 \n Macintosh operating system (MacOS) , 16 , 48 \n MacOS . See Macintosh operating system \n(MacOS) \n Macro viruses , 684 \n Malicious software \n bots , 57 \n" }, { "page_number": 864, "text": "Index\n831\n keyloggers , 57 \n RAT , 57 \n rootkits , 57 \n spyware , 57 \n stealth , 56 – 57 \n web based attacks , 57 – 58 \n Malicious threat, instant messaging (IM) , \n 458 – 459 \n Malloc() , 69 \n Malware , 229 . See also Malicious software \n infection , 294 – 295 \n Malwarebytes ’ Anti-Malware , 782 \n Manager’s responsibility, IT security \nmanagement , 262 \n Mandatory Access Control (MAC) , 238 \n Mandatory locking, fi le and record , 70 \n Man-in-the-middle (MITM) attacks , 126 – 127 , \n 290 , 459 , 556 \n Massachusetts Institute of Technology (MIT) , \n 38 \n Master File Table (MFT) , 315 \n Matching privacy policy , 498 – 499 \n contents, unexpected outcomes , 495 – 496 \n collector fi eld , 495 \n disclose-to fi eld , 495 – 496 \n retention time . 
See Retention time, of \nprivate information \n valid fi eld , 495 \n downgrading , 494 – 495 \n upgrading , 494 \n Matching subsystem , 649 \n Materiality, eDiscovery , 311 \n Mathematical prelude, cryptography , \n398 – 399 \n Matsuda, S. , 343 \n Mattord, H. J. , 233 , 235 , 251 \n Mauborgne, Joseph , 32 \n Maximum Transfer Unit (MTU) , 362 \n Maxwell, Christopher , 129 \n MBSA . See Microsoft Baseline Security \nAnalyzer (MBSA) \n McAfee , 147 , 239 \n McAfee’s Anonymizer , 737 – 738 \n McAfee VirusScan Plus , 782 \n McClure, S. , 54 \n McData SANtegrity Security Suite Software , \n 778 – 779 \n McNealy, Scott , 469 \n McPherson, D. , 120 \n Mead, Margaret , 470 \n Medium Access Control (MAC) , 359 , 517 \n address fi ltering , 243 \n address spoofi ng , 157 \n layer , 95 – 96 \n Megaprime , 778 \n MEHARI . See M é thode Harmonis é e \nd’Analyse de Risques Informatiques \n(MEHARI) \n Melissa , 56 \n Mell, Peter , 240 \n Mercenary attacker , 296 \n Mesh PKI , 443 – 444 \n Message \n deletion \n black-hole attack , 105 \n gray-hole attack , 105 \n fl ooding , 102 \n integrity , 420 \n Hash function in signing message , 420 \n Message authentication code (MAC) , 110 , \n 174 – 175 , 177 , 420 \n Message Digest , 5 , 515 \n Message Integrity Code (MIC) , 173 \n Metadirectories, for identity management , \n 276 \n Metasploit, exploits , 55 \n Metcalfe’s Law , 454 \n M é thode Harmonis é e d’Analyse de Risques \nInformatiques (MEHARI) , 618 \n MFT . See Master File Table (MFT) \n MIC . See Message Integrity Code (MIC) \n Microsoft Baseline Security Analyzer \n(MBSA) , 389 – 390 \n Microsoft Operations Manager (MOM) , 390 \n Microsoft Passport , 275 \n Microsoft Point-to-Point Encryption Protocol \n(MPPE) , 514 \n Microsoft Technet Library , 14 \n Microsoft Update (MU) , 389 \n Microsoft Word 2003, security options for , \n 14 – 15 \n Micro timed, effi cient, streaming, loss-\ntolerant authentication protocol ( μ \nTESLA , 174 – 175 \n MId . See Mobile identity (MId) \n Mill, John Stuart , 470 \n Misconceptions , 8 – 9 \n Mission-critical components , 151 \n Mission-critical systems, protecting \n contingency planning \n business continuity planning (BCP) , \n 235 \n business impact analysis , 233 \n disaster recovery planning (DRP) , \n 235 – 236 \n incident response (IR) plan , 233 – 235 \n defense in depth , 233 \n information assurance , 231 \n information risk management , 231 \n administrative controls , 232 \n physical controls , 232 \n risk analysis , 232 – 233 \n technical controls , 232 \n MIT . See Massachusetts Institute of \nTechnology (MIT) \n MIT Auto-ID Center, RFID tags classifi cation \nand , 206 , 207 \n Mitchell, J. , 125 \n MITM attacks . 
See Man-in-the-middle \n(MITM) attacks \n Mix networks , 484 – 485 \n Mix zones , 481 \n Mobile Application Part (MAP) protocol , \n 185 , 187 \n Mobile Application Part Security (MAPSec) , \n 187 , 195 \n Mobile identity (MId) , 287 – 290 \n PDA as solution to strong authentication , \n 288 – 289 \n types of strong authentication through \nmobile PDA , 289 – 290 \n Mobile IT system, rules for , 242 \n Mobile Switching Center (MSC) , 185 , 186 , \n 191 \n Mobile system , 629 \n Mobile technologies, and IM , 462 \n Mobile user-centric identity management, in \nAmI world , 290 – 292 \n AmI scenario , 290 – 291 \n principles of , 291 – 292 \n requirements for , 291 – 292 \n Mobile users, Identity 2.0 for , 286 – 292 \n mobile identity , 287 \n evolution , 287 – 290 \n user-centric identity management in \nAmI , 290 – 292 \n Mobile Web 2.0 , 286 – 287 \n Mobile Web 2.0 , 286 – 287 \n Model of identity , 270 \n Modern block ciphers \n CBC , 412 \n ECB , 412 \n Modern encryption algorithms , 404 \n Modern symmetric ciphers , 402 – 404 \n Modular arithmetic , 399 – 400 \n Modular polynomial arithmetic , 406 \n Mokbel, M. F. , 481 \n MOM . See Microsoft Operations Manager \n(MOM) \n Monitoring, case study in , 759 \n Monrose, F. , 120 \n Moore, Gordon , 509 \n Moore’s Law , 454 \n and IPSec , 509 – 510 \n Morris, Robert, Jr. , 56 \n" }, { "page_number": 865, "text": "Index\n832\n Motion Picture Association of America \n(MPAA) , 724 \n MPLS . See MultiProtocol Label Switching \n(MPLS) \n MPPE . See Microsoft Point-to-Point \nEncryption Protocol (MPPE) \n MPVPN . See Multi Path Virtual Private \nNetwork (MPVPN) \n MSBlaster worms , 695 – 697 \n MTU . See Maximum Transfer Unit (MTU) \n MU . See Microsoft Update (MU) \n Multi Path Virtual Private Network (MPVPN) , \n 514 \n MultiProtocol Label Switching (MPLS) , 514 \n MyChild , 720 \n MyEdge , 720 \n MyPrivacy , 720 \n MyReputation , 720 \n N \n NAC . See Network access control (NAC) \n Nachi worms , 695 – 697 \n Naraine, R. , 128 \n NAS . See Network attached storage (NAS) \n NAT . See Network Address Translation (NAT) \n NATing operations , 355 \n National Association of Professional \nBackground Screeners , 8 \n National Bureau of Standards (NBS) , 36 \n National Commission on Terrorist Attacks \nUpon the United States (The 9/11 \nCommission) , 671 – 674 \n National Conference of State Legislatures , 3 \n National Electric Reliability Council , 675 \n National Institute of Standards and \nTechnology (NIST) , 5 , 37 , 259 – 260 , \n 374 , 420 , 766 \n National Security Agency (NSA) , 37 , 388 , \n 598 \n National Security Letter , 736 \n National Vulnerability Database , 5 \n Nation-state backed attacker , 296 – 297 \n Natural disasters , 630 \n characteristics of , 631 \n Naymz , 717 – 718 \n Naymz Reputation Repair , 718 \n NBS . 
See National Bureau of Standards (NBS) \n Near well-formed (NWF), privacy policy \n defi ned , 496 \n obtaining , 497 \n rules for specifying , 496 – 497 \n Neighbor Lookup Protocol (NLP) , 177 \n Nessus , 91 , 386 , 388 – 389 \n NetFlow , 304 \n Netstat , 19 – 20 \n Netstat command, Unix , 85 \n Net user password hack , 325 \n Network access control (NAC) , 63 \n access control and , 136 – 137 \n design through , 152 – 153 \n functions , 150 \n Network Address Translation (NAT) , \n 159 – 160 , 349 , 510 \n confi guration , 160 \n internal IP services protection , 363 \n Network attached storage (NAS) , 593 \n Network-based IDS (NIDS) , 154 , 156 , \n 302 – 303 \n complements fi rewalls , 163 \n features of , 156 – 158 \n Network-based IPS (NIPS) , 303 – 304 \n Network File System (NFS) , 359 , 594 \n Network information system (NIS) , 72 \n Network interface cards (NIC) , 358 \n Network Time Protocol (NTP) , 362 \n NFS . See Network File System (NFS); \nNetwork fi le system (NFS) \n NIC . See Network interface cards (NIC) \n Nichols, S. , 129 \n NIDS . See Network-based IDS (NIDS) \n NIPS . See Network-based intrusion \nprevention systems (NIPS) \n NIS . See Network information system (NIS) \n NIST . See National Institute of Standards and \nTechnology (NIST) \n Niuniu76 worms , 684 \n NLP . See Neighbor Lookup Protocol (NLP) \n Nmap , 91 , 154 , 166 \n SYN attack and , 156 \n Nmap security scanner , 385 \n command-line inteface , 386 \n Nonrepudiation , 398 , 420 , 446 , 454 , 740 \n Norton , 360 , 782 \n Norton AntiVirus 2009 , 781 \n Norton AntiVirus 2009 Defi nitions Update , \n 782 \n Norton Internet Security , 782 \n NSA . See National Security Agency (NSA) \n NTFS , 315 \n NTP . See Network Time Protocol (NTP) \n Nugache , 122 \n Null policy , 498 \n Number theory, asymmetric-key encryption \n cardinality of primes , 412 – 413 \n coprimes , 412 \n discrete logarithm , 414 \n Fermat’s little theorem , 413 – 414 \n primitive roots , 414 – 416 \n Nunnery, C. , 121 \n O \n OCR . See Optical character recognition \n(OCR) \n OCSP . See Online Certifi cate Status Protocol \n(OCSP) \n OCTAVE . See Operationally Critical Threat, \nAsset, and Vulnerability Evaluation \n(OCTAVE) \n Odlyzko, A. M. , 475 \n OFB . See Output feedback (OFB) \n Offi ce of Personnel Management (OPM) , 248 \n OID . See ASN.1 Object Identifi er (OID) \n Okamoto, T. , 217 \n Olivier, M. S. 
, 503 \n Once in- a-lifetime (1L) pseudonyms , 709 \n One-time pad cipher , 32 – 33 \n One-time-passcode (OTP) , 50 \n Onion routers , 346 \n Onion routing , 483 – 484 \n Online Armor Personal Firewall , 782 \n Online Certifi cate Status Protocol (OCSP) , \n 440 , 441 – 442 , 771 \n Online fi xed content , 594 \n Online fraudster detection , 347 \n Online Privacy Alliance , 470 \n Open group, The , 79 \n OpenID 2.0 , 281 – 282 \n OpenID Stack , 282 \n Open networks, rules for , 242 \n OpenPGP , 449 \n Open protocol standards, TCP/IP , 297 \n Open proxies , 739 \n Open Short Path First (OSPF) , 362 \n Open Source movement , 80 \n Open Source Security Testing Methodology \nManual (OSSTMM) , 375 , 600 \n Open Systems Interconnection (OSI) model , \n 512 \n Open Systems Interconnect (OSI) , 353 \n Open Web Application Security Project \n(OWASP) , 375 , 599 – 600 \n Operating system (OS) , 43 \n design fl aws , 392 \n MacOS , 16 , 48 \n Operationally Critical Threat, Asset, and \nVulnerability Evaluation (OCTAVE) , \n 7 , 618 \n Opinity , 713 – 714 \n Opinity OpenID support , 713 \n Optical character recognition (OCR) , 309 \n Oracle9i , 503 \n Organizational security . See Secure \norganization \n Organization personal data management , 271 \n OS . See Operating system (OS) \n" }, { "page_number": 866, "text": "Index\n833\n OSCP . See Online Certifi cate Status Protocol \n(OSCP) \n OSI . See Open Systems Interconnect (OSI) \n OSI model . See Open Systems \nInterconnection (OSI) model \n Osipov, V. , 54 \n OSPF . See Open Short Path First (OSPF) \n OSSTMM . See Open Source Security Testing \nMethodology Manual (OSSTMM) \n OTP . See One-time-passcode (OTP) \n Output feedback (OFB) , 412 \n Overblocking and underblocking , 740 \n Override authorization methods , 740 \n Overvoltage, effect on IS equipment , 634 \n OWASP . See Open Web Application Security \nProject (OWASP) \n Owens, Eydt Les , 243 – 245 \n P \n Packets \n fi lter , 354 \n link state updates (LSUs) , 177 \n marking , 342 – 343 \n reply packet (REP) , 177 \n request and CTS , 171 \n route discovery (RDP) , 177 \n ROUTE REQUEST , 176 \n sniffi ng, tools , 240 \n Packet fi ltering , 240 – 241 \n fi rewalls , 60 – 61 , 162 – 163 \n H.323 , 361 \n Pair-wise Master Key (PMK) , 173 \n Palo Alto Research Center (PARC) , 36 \n PAM . See Pluggable authentication \nmechanism (PAM) \n Pappu, R. , 214 \n PAR . See Positive Acknowledgment with \nRetransmission (PAR) \n PARC . See Palo Alto Research Center \n(PARC) \n ParentalControl Bar , 782 \n Parents and content fi ltering , 726 \n Partial distributed threshold CA scheme , 180 \n Partitioning, host access , 573 – 574 \n PAS . See Payload attribution system (PAS) \n Passwords , 516 – 517 \n crackers , 42 \n cracking , 54 \n management fl aws , 392 \n recovery , 317 – 318 \n reset disk , 326 \n SAN , 571 \n Password-cracking penetration test , 377 \n Password Dragon , 782 \n Patch(es) , 60 , 90 \n management , 239 \n Patent infringement, computer forensics in , \n 313 \n Patridge, C. , 130 \n Pattern matching method , 164 – 165 \n Paxson, V. , 344 \n Payload , 56 \n Payload attribution system (PAS) , 344 \n Payment Card Industry Data Security \nStandard (PCI DSS) , 16 \n P-box , 403 – 404 \n PC . See Personal computers (PC) \n PCI DSS . See Payment Card Industry Data \nSecurity Standard (PCI DSS) \n PC software , 730 – 731 \n PDA . 
See Personal authentication device \n(PDA); Personal digital assistant (PDA) \n Peacomm , 122 \n PeerGuardian , 782 \n Peer-to-peer (P2P) \n botnet , 42 , 120 , 121 – 122 \n risk assessment , 258 \n tracing illegal content distributor in , 347 \n Penetration tests , 18 – 19 , 60 \n consultants for , 379 – 380 \n defi ned , 369 – 370 , 374 – 375 \n differ from actual hack , 370 – 371 \n eligibility criteria for professionals , \n 381 – 382 \n hiring a tester , 380 – 381 \n legal consequences of , 379 \n liability issues , 378 – 379 \n methodology , 375 – 378 \n phases of , 373 – 374 \n risks , 378 \n for SAN , 599 – 600 \n skill sets , 380 \n training , 380 \n types of , 371 – 372 \n external test , 371 \n internal test , 371 \n vs. vulnerability assessment , 384 – 385 \n Perimeter networks , 357 \n Perlman, R. , 450 \n Permutations , 108 , 402 \n Perrig, A. , 173 \n Personal authentication device (PDA) , 271 \n as solution to strong authentication , 288 – 289 \n types of strong authentication through \nmobile , 289 – 290 \n full-option mobile solution , 290 \n SMS-based one-time password (OTP) , \n 289 – 290 \n soft token application , 290 \n Personal branding , 704 \n Personal computers (PC) , 4 \n Personal digital assistants (PDA) , 134 \n penetration testing of , 377 – 378 \n Personal identifi cation number (PIN) , 138 , \n 640 , 771 \n Personal Information Protection and \nElectronic Documents Act (PIPEDA) , \n 264 , 488 , 503 \n Personally identifi able information (PII) , \n272 \n defi ned , 294 \n Personal privacy policies , 502 – 504 . See also \n Privacy Management Model \n content , 488 – 490 \n CSAPP , 488 – 489 \n legislations , 488 \n specifi cations , 490 \n overview , 487 – 488 \n semiautomated derivation , 490 – 494 \n unexpected negative outcomes prevention , \n 496 – 497 \n well-formed policies , 494 – 496 \n Personnel and information systems , 629 \n PET . See Privacy-enhancing technologies \n(PET) \n PGP . See Pretty Good Privacy (PGP) \n PGP PKI systems , 449 \n Pharming , 124 \n Phishers, tracing , 347 \n Phishing , 55 , 519 . See also Identity theft \n and IM , 459 \n PHY layer , 95 \n Physical access, SAN , 571 \n Physical and logical security, integration of , \n 639 – 643 \n Physical controls, security management \nsystem , 258 \n Physical facility , 629 \n Physical layer, TCP/IP , 299 \n Physical links, communication , 95 \n Physical security , 629 \n breaches, recovery from , 636 \n penetration test , 378 \n policy, a corporate example , 637 – 639 \n Physical theft, of information , 293 \n Physical vulnerabilities , 369 \n PII . See Personally identifi able information \n(PII) \n PIN . See Personal identifi cation number (PIN) \n Pings , 53 \n PIPEDA . See Personal Information Protection \nand Electronic Documents Act \n(PIPEDA) \n Pirean Limited Homepage , 778 \n" }, { "page_number": 867, "text": "Index\n834\n PIV card , 640 \n issuance and management subsystem , 640 \n PKI . See Public Key Infrastructure (PKI) \n PKINIT . See Public Key Cryptography for \nInitial Authentication in Kerberos \n(PKINIT) \n Placement, fi rewalls , 356 – 358 \n Plaintext , 577 \n Plaintiffs, computer forensics , 334 – 335 \n Plan-Do-Check-Act (PDAC) iterative process , \n 255 \n Platform for Privacy Preferences Project \n(P3P) , 477 \n Plath privacy, in location privacy , 481 \n Platt, elements of IS security by , 629 \n Pluggable authentication mechanism (PAM) , \n 72 \n PMK . See Pair-wise Master Key (PMK) \n Png, I. P. L. 
, 475 \n Point-to-Point Tunneling Protocol (PPTP) , \n 513 – 514 \n Policy Manager - Cisco Systems , 778 \n Policy optimization , 352 – 353 \n combining rules , 353 \n policy reordering , 352 – 353 \n Policy reordering , 352 – 353 \n Pollak, William , 6 \n Polyalphabetic cipher , 29 – 30 \n Porras, P. , 125 \n Portable Operating System Interface (POSIX) \nstandards , 68 \n Portable systems , 630 \n PortPeeker , 166 \n Ports , 19 – 20 \n closing , 60 \n knocking , 42 \n numbers , 97 \n scanners , 42 \n scanning , 54 \n Port-scanning tools , 166 \n Position privacy, in location privacy , 481 \n Positive Acknowledgment with \nRetransmission (PAR) , 298 \n POSIX standards . See Portable Operating \nSystem Interface \n Post-attack phase , 373 – 374 \n Potts, C. , 504 \n Pownce , 4 \n P2P . See Peer-to-Peer (P2P) \n P3P . See Platform for Privacy Preferences \nProject (P3P) \n PPCS . See Privacy Policy Compliance System \n(PPCS) \n PPM . See Probabilistic packet marking (PPM) \nscheme \n P3P Preference Exchange Language \n(APPEL) , 477 \n P2P systems . See Peer-to-peer (P2P) systems \n Pre-attack phase , 373 \n Preauthentication , 178 \n Precision and recall \n in content-fi ltering systems , 742 – 743 \n Precision versus recall , 756 \n Premises security , 629 \n Pre-shared key (PSK) , 173 \n Pretrial motions, computer forensics , 335 \n Pretty Good Privacy (PGP) , 436 , 448 \n certifi cate formats , 449 \n scheme , 180 \n Previous logon information , 773 – 774 \n Prf . See Pseudo-random function \n PRIME . See Privacy and Identity \nManagement for Europe (PRIME) \n Primitives, for internet communication , 94 \n Privacy \n access control system , 476 – 478 \n anonymity \n crowd system , 485 \n freedom network , 485 \n k -anonymity , 479 – 480 \n mix networks , 484 – 485 \n and business , 475 – 476 \n data protection , 478 – 480 \n debate , 469 – 471 \n onion routing , 483 – 484 \n PET . See Privacy-enhancing technologies \n(PET) \n threats , 471 – 474 \n value of , 474 – 475 \n Privacy and Identity Management for Europe \n(PRIME) , 478 \n Privacy-enhancing technologies (PET) \n location privacy \n adversaries , 480 \n categories , 481 \n concept , 480 \n Privacy management model , 497 – 502 \n negotiation (consumer & service provider) , \n 499 – 502 \n policy compliance , 502 \n use of policies , 497 – 499 \n Privacy Policy Compliance System (PPCS) , \n 502 \n Privacy requirement, identity management , \n 272 \n Privacy Rights ClearingHouse , 473 \n Private key infrastructure (PKI) , 87 \n based Smartcard , 71 \n Private Sector Organizations for Information \nSharing , 670 – 674 \n PRIVE , 482 \n Proactive security vs. reactive security , 392 \n Probabilistic packet marking (PPM) scheme , \n 129 , 342 \n Process killing , 739 \n Product ciphers , 404 \n Professional practitioner, applied computer \nforensics , 330 \n Program-type viruses , 684 \n Prolateral Consulting , 778 \n Prolify , 778 \n Promiscuous mode , 102 \n Prosecutors, computer forensics , 334 – 335 \n Proteus (product), for risk management , 619 \n Protocol decode-based analysis , 165 – 166 \n Protocols , 328 \n ARP , 98 \n botnets , 120 – 122 \n defi nition of , 93 \n DHCP , 98 – 99 \n ICMP , 99 \n translation , 130 \n tunneling , 130 \n Provider n-correspondence , 498 \n Proxies , 241 \n Proxy fi rewalls , 61 \n Proxy gateway-based content control , 732 \n Proxy servers , 747 . See also Application-layer \nfi rewalls \n Pseudonym , 709 – 710 \n throttling , 214 \n Pseudo-random function (prf) , 116 \n Psiphon , 737 \n PSK . 
See Pre-shared key (PSK) \n PSS Systems , 778 \n Psychological weapons , 680 \n Public Health Security, Bioterrorism \nPreparedness & Response Act of 2002 \n(PL 107-188) , 664 – 665 \n Public-key cryptography , 412 – 416 \n Public-key cryptography, RFID system and \n authentication with , 217 \n identity-based , 217 – 219 \n Public Key Cryptography for Initial \nAuthentication in Kerberos (PKINIT) , \n 771 \n Public key encryption , 434 – 435 \n Public key infrastructure (PKI) , 364 , 701 \n alternative key management models , \n 450 – 451 \n architecture , 450 \n Callas’s self-assembling , 450 \n cryptographic algorithms , 433 – 435 \n overview , 435 – 436 \n plug and play , 450 \n" }, { "page_number": 868, "text": "Index\n835\n policy description , 447 \n standards organizations , 448 – 449 \n user-centric , 450 \n Public/private key schemes , 115 \n Public switched telephone network (PSTN) , \n 184 \n connectivity, security implications , \n 188 – 189 \n Publishing articles, in applied computer \nforensics , 331 – 332 \n Q \n QoS . See Quality-of-service (QoS) operations \n Quality-of-service (QoS) operations , 355 \n QualysGuard , 389 \n Quantum cryptography , 397 \n Quarantine , 65 – 66 \n QueriedCerts , 442 \n R \n RA . See Registration authority (RA) \n RA2 art of risk (tool), for risk management , \n 619 \n Radio access network , 185 \n security in , 186 – 187 \n Radiofrequency identifi cation (RFID) systems \n applications , 208 – 209 \n challenges \n comparison of , 212 \n counterfeiting , 209 \n denial of service (DoS) , 210 \n insert attacks , 211 \n physical attacks , 211 \n replay attacks , 211 \n repudiation , 211 \n sniffi ng , 209 \n social issues , 212 \n spoofi ng , 210 – 211 \n tracking , 209 – 210 \n viruses , 211 – 212 \n protections \n basic system , 212 – 215 \n public-key cryptography , 217 – 219 \n symmetric-key cryptography , 215 – 217 \n standards , 207 – 208 \n system architecture \n back-end database , 207 \n readers , 206 – 207 \n tags . See RFID tags \n Radiofrequency identifi er (RFID) devices , \n 288 \n RADIUS . See Remote Authentication Dial-In \nUser Service (RADIUS) \n RADIUS clients \n adding access points to IAS server , 795 \n confi guration, replicating , 798 \n RAID . See Redundant Array of Independent \n(or Inexpensive) Disks (RAID) \n Rainbow tables, hacking XP password , \n 325 – 326 \n RA2 Information Collection Device , 19 \n Rajab, M. , 120 \n Ramachandran, A. , 125 \n Ramakrishnan, R. , 479 \n RAM analysis, hacking XP password , 326 \n Random binary stream cipher . See XOR \ncipher \n Randomized Hash Locks protocol , 216 \n Rapleaf , 714 – 715 \n RAT . See Remote access trojans (RAT) \n Raymond, Eric S. , 80 \n RBN . See Russian business network (RBN) \n RDP . See Route discovery packet (RDP) \n Reachability, identity management , 271 \n Reactive security \n vs. proactive security , 392 \n Read access control , 109 \n Read-Only Domain Controller (RODC) , 770 \n Ready Business program , 775 \n Real evidence, eDiscovery , 311 \n Real-time transactions, and instant messaging \n(IM) , 457 \n Real-time transport protocol (RTP) , 361 , 552 \n Rebuttal, computer forensics , 335 \n Receiver operating characteristic (ROC) , 651 \n Recent fi les, hacking XP password , 327 \n Rechberger, C. 
, 515 \n Recommendation Policy (RP) , 710 \n Recommender Search Policy (RSP) , 710 – 711 \n Reconnaissance, of VoIP , 553 – 554 \n call walking , 554 \n Reconnaissance techniques \n pings , 53 \n port scanning , 54 \n traceroute , 53 \n vulnerability scanning tests , 54 \n Recording Industry Association of America \n(RIAA) , 724 \n Recovery, risk analysis , 47 \n Recovery point objective (RPO) , 143 \n Recovery time objective (RTO) , 143 – 144 \n Redundant Array of Independent (or \nInexpensive) Disks (RAID) , 314 \n Reeves, D. S. , 130 , 344 \n Registration authority (RA) , 437 \n Regular expression (RegEx) , 757 \n Regulations, for IT security management , \n 263 – 267 \n Reiter, M. , 485 \n Reloaded, Counter Hack , 53 \n Remote access confi guration , 364 \n Remote access trojans (RATs) , 57 \n Remote administration tools , 42 . See also \n Remote access trojans \n Remote Authentication Dial-In User Service \n(RADIUS) , 568 \n Remote control software, security risk , 257 \n Remote PC control applications , 739 – 740 \n REP . See Reply packet (REP) \n Replay attacks , 556 \n Reply packet (REP) , 177 \n Reputation , 702 \n applied to computating world , 704 – 708 \n category , 713 \n management , 701 \n primitives , 703 \n state-of-the-art, computation , 708 – 711 \n ReputationDefender , 720 \n Request for comments (RFCs) , 165 \n Request for proposal (RFP) , 381 \n Research In Motion (RIM) , 397 \n Residual risks , 614 \n Residue class , 400 \n Resources \n conferences \n Airscanner: Wireless Security Boot \nCamp , 785 \n AusCERT , 785 \n DallasCon , 785 \n FIRST Conference , 785 \n Infosecurity , 785 \n International Computer Security Audit \nand Control , 785 \n ITsecurityEvents , 785 \n Network and Distributed System \nSecurity Symposium , 785 \n NISC , 785 \n Training Co., The , 785 \n VP4S-06: Video Processing for \nSecurity , 785 \n consumer information \n AnonIC , 785 \n Business.gov: Information and \nComputer Security Guide , 785 \n Consumer Guide to Internet Safety, \nPrivacy and Security , 785 \n EFS File Encryption Tutorial , 785 \n GRC Security Now , 786 \n Home Network Security , 786 \n Internet Security Guide , 786 \n Online Security Tips for Consumers of \nFinancial Services , 786 \n Outlook Express Security Tutorial , \n786 \n" }, { "page_number": 869, "text": "Index\n836\n Overview of E-Mail and Internet \nMonitoring in the Workplace, An , \n 786 \n Privacy Initiatives , 786 \n Protect Your Privacy and E-mail on the \nInternet , 786 \n Spyware watch , 786 \n Staysafe.org , 786 \n Susi , 786 \n Wired Safety , 786 \n content fi ltering links \n Content Filtering vs. Blocking , 791 \n GateFilter Plug-in , 791 \n GFI Mail Essentials , 791 \n InterScan eManager , 791 \n NetIQ (MailMarshal) , 791 \n Postfi x Add-on Software , 791 \n Qmail-Content fi ltering , 791 \n SonicWALL’s Content Filtering \nSubscription Service , 791 \n SurfControl , 791 \n Tumbleweed , 791 \n WebSense , 791 \n directories \n E-Evidence Information Center , 786 \n Itzalist , 786 \n Laughing Bit, The , 786 \n Safe World , 786 \n SecureRoot , 786 \n help and tutorials \n How to fi nd security holes , 786 \n Ronald L. 
Rivest’s Cryptography and \nSecurity , 786 \n SANS Institute-The Internet Guide To \nPopular Resources On Computer \nSecurity , 786 \n logging \n Building a Logging Infrastructure , 791 \n IETF Security Issues in Network Event \nLogging , 791 \n mailing lists \n Alert Security Mailing List , 786 \n Computer Forensics Training Mailing \nList , 786 \n FreeBSD Resources , 786 – 787 \n InfoSec News , 787 \n ISO17799 & ISO27001 News , 787 \n IWS INFOCON Mailing List , 787 \n Risks Digest , 787 \n SCADA Security List , 787 \n SecuriTeam Mailing Lists , 787 \n Security Clipper , 787 \n news and media \n Computer Security News-Topix , 787 \n Computer Security Now , 787 \n Enterprise Security Today , 787 \n Hagai Bar-El-Information Security \nConsulting , 787 \n Help Net Security , 787 \n Investigative Research into \nInfrastructure Assurance Group , 787 \n O’Reilly Security Center , 787 \n SecureLab , 787 \n SecuriTeam , 787 \n Security Focus , 787 \n Security Geeks , 787 \n Security Tracker , 787 \n Xatrix Security , 787 \n organizations \n Association for Automatic \nIdentifi cation and Mobility , 787 \n Association for Information Security , \n 787 \n First , 787 \n Information Systems Audit and Control \nAssociation , 787 \n IntoIT , 788 \n North Texas Chapter ISSA , 788 \n RCMP Technical Security Branch , 788 \n Shmoo Group, The , 788 \n Switch-CERT , 788 \n products and tools \n AlphaShield , 788 \n Bangkok Systems & Software , 788 \n Beijing Rising International Software \nCo.,Ltd , 788 \n Beyond If Solutions , 788 \n BootLocker Security Software , 788 \n Calyx Suite , 788 \n CipherLinx , 788 \n ControlGuard , 788 \n CT Holdings, Inc. , 788 \n Cyber-Defense , 788 \n Data Circle , 788 \n Digital Pathways Services Ltd UK , 788 \n Diversinet Corp. , 788 \n DLA Security Systems, Inc. , 788 \n DSH , 788 \n eLearning Corner , 788 \n Enclave Data Solutions , 788 \n eye4you , 788 \n Faronics , 788 \n Forensic Computers , 788 \n GFI Software Ltd. , 788 \n Global Protective Management.com , \n 788 – 789 \n GuardianEdge Technologies, Inc. , 789 \n Hotfi x Reporter , 789 \n IPLocks Inc. , 789 \n iSecurityShop , 789 \n Juzt-Innovations Ltd. , 789 \n KAATAN Software , 789 \n Kilross Network Protection Ltd. , 789 \n Lexias Incorporated , 789 \n Lexura Solutions Inc. , 789 \n Locum Software Services Limited , 789 \n Lumigent Technologies , 789 \n Marshal , 789 \n n-Crypt , 789 \n Networking Technologies Inc. , 789 \n New Media Security , 789 \n NoticeBored , 789 \n Noweco , 789 \n Oakley Networks Inc. , 789 \n Pacom Systems , 789 \n Paktronix Systems: Network Security , \n 789 \n PC Lockdown , 789 \n Porcupine.org , 789 \n Powertech , 789 \n Protocom Development Systems , 789 \n Sandstorm Enterprises , 789 \n SecurDesk , 789 \n Secure Directory File Transfer System , \n 789 – 790 \n Secure your PC , 790 \n Security Awareness, Inc. , 790 \n SecurityFriday Co. Ltd. , 790 \n Security Offi cers Management and \nAnalysis Project , 790 \n SeQureIT , 790 \n Service Strategies Inc. , 790 \n Silanis Technology , 790 \n Simpliciti , 790 \n Smart PC Tools , 790 \n Softcat plc (UK) , 790 \n Softek Limited , 790 \n Softnet Security , 790 \n Tech Assist, Inc. , 790 \n Tropical Software , 790 \n UpdateEXPERT , 790 \n Visionsoft , 790 \n Wave Systems Corp. , 790 \n WhiteCanyon Security Software , 790 \n Wick Hill Group , 790 \n Winability Software Corporation , 790 \n xDefenders Inc. , 790 \n ZEPKO , 790 \n research \n Centre for Applied Cryptographic \nResearch , 790 \n Cryptography Research, Inc. 
, 790 \n Dartmouth College Institute for \nSecurity Technology Studies (ISTS) , \n 791 \n" }, { "page_number": 870, "text": "Index\n837\n Penn State S2 Group , 791 \n SANS Institute, The , 791 \n SUNY Stony Brook Secure Systems \nLab , 791 \n ResponseFlags , 442 \n Restore of passwords , 765 \n Restore of usernames , 765 \n Restore wizard , 765 \n Retention time, of private information , 489 , \n 490 , 495 \n rules for specifying , 497 \n Retina , 389 \n Return on investment (ROI) , 151 \n RevInfos , 442 \n Rexroad, B. , 125 \n RFC . See Request for comments (RFCs) \n RFC 4492 , 767 \n RFID . See Radiofrequency identifi er (RFID) \ndevices \n RFID Guardian , 214 \n RFID reader (transceiver) , 206 – 207 , 209 . \n See also Radiofrequency identifi cation \n(RFID) systems \n RFID systems . See Radiofrequency \nidentifi cation (RFID) systems \n RFID tags . See also Radiofrequency \nidentifi cation (RFID) systems \n active , 205 – 206 \n counterfeiting of , 209 \n passive , 206 \n RFID Guardian , 214 \n semiactive , 206 \n watchdog , 214 \n RFID virus , 212 \n RFP . See Request for proposal (RFP) \n RGCS . See Robert Gold Comm Systems, Inc. \n(RGCS) \n Richardson, Robert , 229 – 231 \n Rieback, M. R. , 214 \n Rijndael . See Advanced encryption standard \n Rijndael algorithm , 404 \n in AES implimentation \n mathematical preliminaries , 408 \n state , 408–412 \n RIM . See Research In Motion (RIM) \n Rings \n defi ned , 405 \n RIP . See Routing Information Protocol (RIP) \n Risk \n assessment . See Risk assessment \n concept of , 606 \n defi ned , 606 \n measuring , 606 – 609 \n Risk analysis , 610 – 612 \n audits , 47 \n estimation , 610 – 611 \n evaluation , 612 \n identifi cation , 611 \n recovery , 47 \n security management system , 257 – 258 \n vulnerability testing , 46 – 47 \n Risk assessment , 148 , 610 – 612 \n Risk management \n assessment process . See Risk assessment \n communication process , 614 \n context establishment , 609 – 610 \n criticism , 615 – 616 \n framework , 7 \n laws & regulations , 620 – 623 \n methods , 616 – 620 \n monitoring & review , 614 \n standards , 623 – 625 \n and system development life cycle , \n 614 – 615 \n treatment process , 612 – 614 \n avoidance of risk , 613 – 614 \n reducing risk , 612 – 613 \n risk transfer , 613 \n RiskWatch , 619 – 620 \n Rivest, R. L. , 214 , 216 \n Rivest, Ron , 448 , 516 \n Rivest, Shamir, and Adleman (RSA) , 38 \n Robert Gold Comm Systems, Inc. (RGCS) , \n 793 \n Roberts, P. F. , 124 , 129 \n Robots . See Bots \n Robust Security Network Associations \n(RSNAs) , 245 \n Robust security networks (RSNs) , 244 – 245 \n RODC . See Read-Only Domain Controller \n(RODC) \n Role-based access control (RBAC) , 238 \n Root , 69 \n Rootkits , 55 , 57 \n Route discovery packet (RDP) , 177 \n ROUTE REQUEST packet , 176 \n Router penetration test , 377 \n Router(s) , 96 \n mesh , 172 \n sf, confi guration script for , 160 \n Routing , 99 – 100 \n protocols , 361 – 362 \n Routing information protocol (RIP) , 362 \n RPO . See Recovery point objective (RPO) \n RSA . See Rivest, Shamir, and Adleman \n(RSA) \n RSA cryptosystem , 415 – 416 \n RSA digital signature , 420 \n RSA Envision , 781 \n RSA protocol , 516 \n RTO . See Recovery time objective (RTO) \n RTP . See Realtime Transport Protocol (RTP) \n Rubin, A , 485 \n Rule-match policies , 351 \n Ruskwig security portal , 778 \n Russian business network (RBN) , 123 \n S \n SafeWord , 49 \n SageFire , 716 \n SAINT . See Security Administrator’s \nIntegrated Network Tool (SAINT) \n Samarati, P. 
, 478 , 479 , 482 \n SAML . See Security Assertion Markup \nLanguage (SAML) \n Sample cascading attack , 191 \n SAN . See Storage area networking (SAN) \n Sanchez, A. , 130 \n Sandboxing , 698 – 699 \n SAN implementation and deployment \ncompanies , 778 \n SANS (SysAdmin, Audit, Network, Security) \nInstitute, The , 9 \n SARA . See Simple to Apply Risk Analysis \n(SARA) \n Sara , 389 \n Sarbanes-Oxley Act , 3 \n and risk management , 622 – 623 \n Sarbanes-Oxley Act of 2002 (SOX) , 263 \n Sarma, S. E. , 213 , 216 \n SATAN . See System Administrator Tool for \nAnalyzing Networks (SATAN) \n Satellite encryption \n future of , 430 – 431 \n implimentation , 426 – 430 \n downlink encryption , 429 – 430 \n extraplanetary link encryption , 428 – 429 \n general issues , 426 – 428 \n uplink encryption , 428 \n need for , 423 – 425 \n policy , 425 – 426 \n Savage, S. , 129 , 342 \n SBA . See Security by Analysis (SBA) \n S-boxes , 37 , 403 \n Scambray, J. , 54 , 58 \n Scanner \n architecture , 387 \n cornerstones , 390 \n network , 166 \n network countermeasures , 390 – 391 \n performance , 390 \n verifi cation , 390 \n Scanners , 42 \n Scan rules , 157 \n" }, { "page_number": 871, "text": "Index\n838\n Scarfone, K. , 240 , 243 – 245 \n Schannel , 766 \n Schiller, Craig , 57 \n Schneier, B. , 38 , 87 \n Schoof, R. , 122 \n Schools and content fi ltering , 725 \n Schrodinger cat experiment , 35 \n Screened Subnet , 154 , 159 \n Scripting, addition of access points to IAS \nserver , 795 – 796 \n Script kiddy , 296 \n SCVP . See Server-based Certifi cate Validity \nProtocol (SCVP) \n SDSI . See Simple Distributed Security \nInfrastructure (SDSI) \n SEAD . See Secure effi cient ad hoc distance \n(SEAD) \n Secoda risk management , 778 \n Secret-key cryptography . See Symmetric-key \ncryptography \n Secretstuff , 15 \n Secunia Personal Software Inspector , 781 , \n 782 \n Secure effi cient ad hoc distance (SEAD) , \n 175 – 176 \n Secure Fabric OS ™ , 779 \n Secure hash algorithm (SHA) , 420 , 515 – 516 \n Secure link state routing protocol (SLSP) , \n177 \n Secure network encryption protocol (SNEP) , \n 174 \n Secure organization \n obstacles \n creation cost , 5 – 6 \n cybercrime , 5 \n data accessibility , 4 – 5 \n data sharing , 4 \n development of computers , 4 \n effects on productivity , 3 \n security products, improper training \nof , 5 \n unfamiliarity with computer \nfunctioning , 3 – 4 \n unsophisticated computer users , 4 \n steps to build \n built-in security features, of operating \nsystem and applications , 14 – 16 \n employees training , 12 – 14 \n evaluating threats , 6 – 7 \n fundamental security mechanisms , \n 19 – 20 \n implementing policies, to avoid data \nleakage , 10 – 12 \n misconceptions, avoidance of , 8 – 9 \n security training, for IT staff , 9 – 10 \n systems monitoring , 16 – 17 \n third party analysis, of security , 17 – 19 \n tracking of system updates , 20 – 21 \n Secure public web-based proxies , 739 \n Secure RTP (SRTP) , 558 – 559 \n Secure shell protocol (SSH) , 510 , 514 \n Secure shell (SSH) system , 73 – 74 \n Secure socket layer (SSL) , 87 , 437 \n VPN , 514 \n Security Administrator’s Integrated Network \nTool (SAINT) , 389 , 391 \n Security assertion markup language (SAML) , \n 279 – 280 \n Security assessment , 147 – 148 \n Security by analysis (SBA), for risk \nmanagement , 620 \n Security controls, in security management \nsystems , 257 \n Security Employee Training and Awareness \n(SETA) program , 248 – 249 \n Security enhanced linux (SELinux) , 90 \n 
Security events management (SEM) , 142 \n Security fi rst , 520 \n Security information and event management \n(SIEM) , 250 , 464 \n Security information management (SIM) \nsystem , 304 \n Security management systems , 255 – 258 \n incident response , 258 \n network access , 257 \n risk assessment , 257 – 258 \n roles and responsibilities of personnel in , \n 256 \n security controls , 257 \n security policies , 256 – 257 \n standards , 255 – 256 \n training requirements for , 256 \n Security offi cer \n role in IT security management , 262 \n Security organization structure \n role in IT security management , 262 – 263 \n Security policies , 45 – 46 \n fi rewalls , 159 \n IT security management , 261 – 263 \n important issues , 261 – 262 \n security management systems , 256 – 257 \n Security Policies & Baseline Standards , 778 \n Security procedures for IT security \nmanagement , 261 – 263 \n Security tokens , 283 \n Seeded tests , 367 \n Self-healing session key distribution (S-\nHEAL) scheme , 181 \n Self-organized key management scheme \n(PGP-A) , 180 – 181 \n SELinux . See Security enhanced linux \n(SELinux) \n SEM . See Security events management \n(SEM) \n Semiautomated derivation, of personal \nprivacy policy , 490 – 494 \n retrieval from a community of peers , \n 493 – 494 \n third-party surveys , 491 – 492 \n Sensor network, wireless , 171 \n Server-based Certifi cate Validity Protocol \n(SCVP) , 439 \n in X.509 certifi cates , 442 – 443 \n Service-level agreement (SLA) , 144 \n Service nodes , 185 \n Service providers (SP) , 270 \n Service set identifi er (SSID) , 139 , 243 , 798 \n Session \n defi ned , 106 \n disruption, SIP , 556 – 557 \n key, establishment , 114 , 116 – 117 \n Session border controllers (SBC), SIP , 559 \n end-to-end identity with , 563 – 564 \n Session establishment , 113 \n goals of \n key secrecy , 117 \n mutual authentication , 117 \n session state consistency . See State \nconsistency \n Session initiation protocol (SIP) , 361 , \n 551 – 553 . See also Voice over Internet \nProtocol (VoIP) \n denial-of-service (DoS) , 554 – 555 \n forking in , 552 \n HERFP , 560 – 561 \n identity , 559 \n peer to peer (P2P) , 561 – 563 \n Session-oriented defenses \n defend against eavesdropping , 106 \n controlling output , 108 \n keys independence , 107 – 108 \n key size , 108 \n operating mode , 108 – 110 \n defend against forgeries and replays , 110 \n authentication keys independence , \n 111 – 112 \n forgery detection schemes , 112 \n key size , 112 \n message authentication code size , \n 112 – 113 \n Set-ID bit , 70 \n SHA . See Secure Hash Algorithm (SHA) \n SHA-1 . See Secure Hash Algorithm (SHA) \n Shamir, A. , 217 , 450 , 516 \n Shanmugasundaram, K. , 344 \n" }, { "page_number": 872, "text": "Index\n839\n Sharma, V. , 121 \n Shema, Mike , 58 \n Shibboleth project , 280 \n Shift cipher , 26 – 29 \n Short message service (SMS) , 131 \n Shostack, A. , 474 \n SIEM . See Security information and event \nmanagement (SIEM) \n Signaling System No. 
7 (SS7) , 184 , 187 , 189 \n Signal processing subsystem , 648 – 649 \n Signature \n analysis , 164 \n fi les , 154 \n scheme , 115 \n Signature algorithms \n anomaly-based analysis , 166 – 167 \n heuristic-based analysis , 166 \n pattern matching , 164 – 165 \n protocol decode-based analysis , 165 – 166 \n stateful pattern matching , 165 \n Signature-based detection, intrusions , 64 – 65 \n Silo model, for identity management , 275 \n Sima, Caleb , 58 \n Simple distributed security infrastructure \n(SDSI) , 448 \n Simple eXtensible identity protocol (SXIP) \n2.0 , 284 – 285 \n features of , 284 \n Simple mail transport protocol (SMTP) , 153 , \n 160 , 298 , 300 \n Simple mathematical model, fi rewalls \n for packets , 351 – 352 \n for policies , 351 – 352 \n for rules , 351 – 352 \n Simple network management protocol \n(SNMP) , 798 \n Simple public key infrastructure (SPKI) , 436 , \n 448 . See also Public key infrastructure \n(PKI) \n Simple to apply risk analysis (SARA) , \n 617 – 618 \n Simplifi ed process for risk identifi cation \n(SPRINT) , 618 \n SIM system . See Security information \nmanagement (SIM) system \n Singh, S. , 125 \n Single-channel authentication , 289 \n Single sign-on (SSO) , 271 , 277 , 766 \n for terminal services logon , 765 – 766 \n Single Unix specifi cations , 79 – 80 \n Singular security , 778 \n Sinit , 121 \n SIP . See Session initiation protocol (SIP) \n Skiadopoulos, S. Priv è , 482 \n Skoudis, Ed , 53 , 56 \n SLA . See Service-level agreement (SLA) \n Slade, D. , 56 \n SLAP . See Switch link authentication protocol \n(SLAP) \n SLSP . See Secure link state routing protocol \n(SLSP) \n Smart card authentication changes , 770 – 773 \n certifi cate revocation support , 771 \n logon of , 771 \n OCSP support for PKINIT , 771 \n registry settings , 773 \n terminal server redirection , 772 \n Smart cards , 639 \n ISO standards for contactless , 207 \n S/MIME standards \n for secure email , 436 \n Smoke damage , 632 \n SMS . See Short message service (SMS); \nSystems management server (SMS) \n SMS-based one-time password (OTP) , \n 289 – 290 \n SMS double-channel authentication , 289 \n SMTP . See Simple mail transport protocol \n(SMTP) \n SNEP . See Secure network encryption \nprotocol (SNEP) \n Sniffers \n packet , 42 \n program , 154 \n wireless , 41 \n Sniffer, network \n Ethereal , 166 \n EtherSnoop light , 167 \n SNMP . See Simple Network Management \nProtocol (SNMP) \n Snoeren, A. , 130 \n Snort , 65 , 153 , 154 , 156 , 167 \n Social engineering , 459 , 519 \n attack , 55 \n penetration test , 377 \n SPIT , 557 – 558 \n Social security number (SSN) , 478 \n Sockets layer , 98 \n Soft token application, PDA , 290 \n Software \n antispyware , 61 – 62 \n antivirus , 61 – 62 \n bugs , 392 \n fi rewall implementations , 355 \n installation, hacking XP password , 327 \n write blockers , 313 \n Sorkin, A. , 37 \n Source ID (S_ID) checking, SAN , 574 \n Source Path Isolation Engine (SPIE) , \n 343 – 344 , 344 \n SOX . See Sarbanes-Oxley Act of 2002 (SOX) \n Spam , 123 – 124 \n fi ltering , 62 \n tools , 740 \n Spam instant messaging (SPIM) , 458 \n Spam over internet telephony (SPIT) , \n 557 – 558 \n Sparks, S. , 121 \n Specifi cation and description language (SDL) \nspecifi cations , 194 – 195 \n SPIE . See Source path isolation engine (SPIE) \n SPIM . See Spam instant messaging (SPIM) \n SPINS, security protocols for sensor \nnetworks , 173 \n μ TESLA , 174 – 175 \n SNEP , 174 \n SPIT . See Spam over internet telephony \n(SPIT) \n SPKI . 
See Simple public key infrastructure \n(SPKI) \n Spoofi ng attacks, RFID systems and , 210 – 211 \n SPRINT . See Simplifi ed process for risk \nidentifi cation (SPRINT) \n Spry Control , 778 \n Sputnik , 1 , 423 \n Spybot-Search & Destroy , 781 , 782 \n Spyware , 57 \n SpywareBlaster , 781 \n Spyware Doctor , 781 \n SQL . See Structured query language (SQL) \n SRTP . See Secure RTP (SRTP) \n SSH . See Secure shell protocol (SSH) \n SSID . See Service set identifi er (SSID) \n SSL . See Secure socket layer (SSL) \n SSL-VPN . See Secure socket layer (SSL) \nVPN \n SSO . See Single sign-on (SSO) \n Stacheldraht , 683 \n Stajano, F. , 481 \n Standard base specifi cations, Linux , 82 \n Standards \n BITS Financial Services Roundtable , 783 \n Common Criteria , 783 \n ISO 27001 certifi cates , 783 \n ISO 27000 Directory, The , 783 \n ISO/IEC 27002 explained , 783 \n ISO/IEC 27001 frequently asked questions , \n 783 \n ISO 27001 Security , 783 \n ISO 27000 Toolkit , 783 \n NIST Special Publication 800-53 , 783 \n Overview of Information Security \nStandards , 783 \n Praxiom Research Group Ltd. , 783 \n for risk management , 623 – 625 \n" }, { "page_number": 873, "text": "Index\n840\nStandards (Continued)\n Security Practitioner, The , 783 \n Veridion , 783 \n Standards organizations, PKI \n IETF OpenPGP , 448 – 449 \n IETF PKIX , 448 \n SDSI/SPKI , 448 \n Start menu, hacking XP password , 327 \n State, in AES implimentation \n MixColumns transformation , 409 – 411 \n round keys in reverse order , 411, 413 \n S-box , 408 – 409 \n shift rows , 409 \n sub key addition , 410 \n State and federal organizations , 669 – 670 \n State consistency , 114 , 117 \n Stateful fi rewalls , 61 \n Stateful inspection fi rewalls , 163 \n Stateful packet fi rewalls , 354 \n Stateful pattern matching method , 165 \n State Security Breach Notifi cation Laws , 264 \n Stealing the Network: How to Own a \nContinent , 385 \n StegAlyzerAS , 324 \n Steganography , 324 – 325 , 740 \n Steiner, P. , 470 \n Stepping-stone attack attribution , 244 – 246 \n Stepping stones , 129 – 130 \n Stevens, W. Richard. , 86 \n Stewart, J. , 121 \n Sticky bit , 70 \n Stinson, E. , 125 \n Stolen laptop penetration test , 377 – 378 \n Stone, R. , 342 \n Storage area networking (SAN) \n cost of , 594 \n defi ned , 591 \n implementation , 591 – 592 \n benefi ts , 592 \n importance , 592 – 593 \n logical threat & protection , 596 – 603 \n confi gurations , 597 – 598 \n encryption , 600 – 601 \n IDS/IPS , 596 \n logging , 601 – 603 \n network traffi c , 596 – 597 \n penetration testing , 599 – 600 \n system hardening , 598 \n vulnerability scanning , 598 \n physical threat , 594 – 595 \n switches , 593 – 594 \n Storage area networks (SAN) \n AAA , 568 \n access . See Access, SAN \n denial-of-service attacks , 577 \n encryption . See Encryption, SAN \n e-port attack , 576 – 577 \n host attacks , 575 – 576 \n management control attacks , 575 \n man-in-the-middle attacks , 576 \n organizational structure , 567 – 568 \n and organizational structure , 567 – 569 \n physical attacks , 575 \n session hijacking attacks , 577 \n WWN spoofi ng , 576 \n Storage area networks (SANs) , 778 – 779 \n Storm worm , 57 , 123 \n Strayer, W. T. , 130 , 344 , 346 \n Stream ciphers , 31 – 32 , 107 \n Structured query language (SQL) , 88 \n injection , 55 \n injection attack , 137 , 138 \n Stuffl ebeam, W. 
, 504 \n SubByte transformation , 409 \n Subgroup \n defined , 405 \n Subject alternative name , 446 \n Subject key identifier , 445 \n Subnet mask/masking , 298 \n Substitution cipher , 25 – 26 , 401 – 402 \n Sudo(1) mechanism , 76 \n Sung, M. , 130 \n Superusers , 69 \n Supporting facilities to information systems , \n 629 \n Supporting outgoing services, firewalls , \n 359 – 360 \n forms of state , 359 – 360 \n payload inspection , 360 \n Surf Control , 730 \n Surrebuttal, computer forensics , 335 \n Su(1) utility , 74 \n Switch link authentication protocol (SLAP) , 570 \n Swivel , 4 \n SXIP 2.0 . See Simple eXtensible identity \nprotocol (SXIP) 2.0 \n Sybil attacks , 711 \n Symantec , 147 , 239 \n Symmetric encryption , 516 \n Symmetric key authentication , 114 – 115 \n Symmetric-key cryptography, RFID system \nand , 215 – 217 \n approaches , 216 – 217 \n authentication and privacy with , 215 – 216 \n SYN attack , 156 , 167 \n Syslog , 88 \n Syslog NG , 781 \n System administrator tool for analyzing \nnetworks (SATAN) , 391 \n scanner , 389 \n System integrity , 146 – 147 \n validation , 306 \n Systems administrator \n role in IT security management , 263 \n Systems management server (SMS) , \n389 – 390 \n Syverson, P. , 474 \n Syzdlo, M. , 214 \n Szor, Peter , 56 , 61 \n T \n Tanenbaum, A. , 214 \n Tangible assets , 137 \n Tape \n encryption , 587 – 588 \n library , 593 – 594 \n Tap technology , 510 \n Tchakountio, F. , 130 \n TCP . See Transmission Control Protocol \n(TCP) \n TCP/IP . See Transmission Control Protocol/\ninternet Protocol (TCP/IP) \n TCP SYN (half-open) scanning , 155 – 156 \n Tcpwrappers , 85 \n TDM . See Time division multiplexing (TDM) \n Technical controls, security management \nsystem , 258 \n Technical threats to service of information \nsystems and data \n electrical power , 633 – 634 \n electromagnetic interference (EMI) , 634 \n Technical weapons , 681 \n Technological evolution, and IM , 454 – 455 \n Telemetry , 429 \n Temperature \n inappropriate , 631 – 632 \n thresholds for damage to computing \nresources , 631 \n Template storage in biometric system , 658 \n Temporal key integrity protocol (TKIP) , 173 , \n 245 – 246 \n Temporary restraining order (TRO), computer \nforensics , 312 – 325 \n creating images by software and hardware \nwrite blockers , 313 – 314 \n data capturing/acquisition , 313 \n divorce , 313 \n file carving , 318 – 320 \n file system analyses , 314 – 315 \n live capture of relevant files , 314 \n NTFS , 315 \n password recovery , 317 – 318 \n patent infringement , 313 \n RAID , 314 \n" }, { "page_number": 874, "text": "Index\n841\n role of forensic examiner in investigations \nand file recovery , 315 \n timestamps , 320 – 324 \n Terrorism and sovereignty , 686 \n Terzis, A. , 120 \n μ TESLA . See Micro timed, efficient, \nstreaming, loss-tolerant authentication \nprotocol (μTESLA) \n Testimonial, applied computer forensics , 329 \n Texas state law , 736 – 737 \n TFN2K , 683 \n TFTP configuration file sniffing , 555 \n TGS-REQ/REP . 
See Ticket-granting service \nrequest/response (TGS-REQ/REP) \n Third Generation Partnership Project (3GPP) , \n 194 \n Third-party surveys , 491 – 492 \n 32-bit tag-specifi c PIN code , 213 \n Threat \n classifi cation , 611 \n defi ned , 606 \n Threat assessment , 636 – 637 \n planning and implementation , 637 \n Threats evaluation, secure organization \n based on business , 7 \n based on infrastructure model , 6 – 7 \n global threats , 7 \n industry-specifi c , 7 \n Three-dimensional attack taxonomy \n attack type , 193 \n physical access to network , 192 – 193 \n vulnerability exploited , 193 \n Threshold value , 650 \n Thumbnails , 309 \n Ticket-granting service request/response \n(TGS-REQ/REP) , 770 \n Time bombs , 685 \n Time division multiplexing (TDM) , 170 \n Time nesting , 439 \n Time-of-day policy changing , 740 \n Times-of-arrival (TOA) measurements, of \nfrequency-hopping radio system , 793 \n Timestamps, computer forensics , 320 – 324 \n altering the Entry Modifi ed , 322 \n Date Created , 321 \n experimental evidence , 321 – 322 \n working , 320 – 321 \n Time-to-live (TTL) , 53 , 105 , 362 \n Tipton, Harold F. , 226 \n TJX Companies, Inc, The , 133 – 134 \n TKIP . See Temporal Key Integrity Protocol \n(TKIP) \n TLS . See Transport layer security (TLS) \n TLS/SSL cryptographic enhancements \n AES cipher suites , 766 – 767 \n default cipher suite preference , 769 \n ECC cipher suites , 767 – 768 \n previous cipher suites , 769 \n Schannel CNG provider model , 768 – 769 \n TLS/SSL protocols \n for secure internet , 436 \n Tokens , 50 \n TOPO , 344 \n Topologies, botnets , 120 \n centralized , 121 \n P2P , 121 – 122 \n Topology, network , 96 \n TorPark , 737 – 738 \n Torvalds, Linus , 80 \n Total cost of ownership (TCO) , 151 \n Traceback in anonymous systems , 346 \n Traceroute , 53 \n Tracing illegal content distributor, in P2P \nsystems , 347 \n Tracing phishers , 347 \n Traffi c analysis , 555 \n Traffi c monitoring , 64 , 65 \n Training requirements, for security \nmanagement system , 256 \n Transaction Capabilities Application Part \n(TCAP) , 185 \n Transient Electromagnetic Pulse Emanation \nStandard (TEMPEST) attacks , 211 \n Transition Security Network (TSN) , 245 \n Transmission Control Protocol/internet \nProtocol (TCP/IP) , 40 \n Application Layer , 298 \n data architecture and data encapsulation , \n 298 – 300 \n features of , 297 \n introduction , 297 \n model , 354 \n Network Layer , 298 – 299 \n Physical Layer , 299 \n traffi c over Frame Relay , 364 \n Transport Layer , 298 \n Transmission Control Protocol (TCP) , 60 , \n 153 , 159 , 510 – 511 \n Transportation Security Administration \n(TSA) , 3 \n Transport layer , 97 – 98 \n TCP/IP , 298 \n Transport Layer Security (TLS) , 514 – 515 , \n 767 \n Transposition cipher , 402 \n Trend Micro AntiVirus plus AntiSpyware , 782 \n Trend Micro HijackThis , 781 \n Tribe FloodNet , 683 \n Trinoo , 683 \n Triple DES , 38 \n Tripwire , 89 \n Trojans , 122 , 681 \n defense, hacking XP password , 326 \n threat , 295 \n Trufi na.com , 717 – 718 \n Trust \n context , 704 \n defi ned , 703 \n transfer , 709 – 711 \n value , 709 \n TrustAnchors parameters , 442 \n Trusted Computer Security Evaluation \nCriteria (TCSEC) , 226 \n Trustfuse.com , 714 \n Trustplus , 716 – 717 \n TrustPlus Rating User Interface , 716 \n Trustworthiness , 709 \n TSA . See Transportation Security \nAdministration (TSA) \n Tsai, J. , 475 \n Tsichlas v Touch Media , 689 \n Tsudik, G. , 216 \n TTL . See Time to Live (TTL) \n Tung, L. 
, 123 \n Tunnel , 510 \n Tunneling , 56 \n Turner, Dean , 53 , 54 \n Two-factor authentication , 59 , 87 \n Two-router confi guration , 357 – 358 \n Tygar, Doug , 450 \n U \n Ubiquitous systems , 694 \n attacking with antivirus tools , 694 – 695 \n UDP . See User Datagram Protocol (UDP) \n UDPFlood , 167 \n UID . See User identifi er (UserID) \n Ulimit interface , 86 \n Ultrasurf proxy client , 738 \n Unallocated clusters , 14 \n Unauthorized access, by outsider , 294 \n Unchecked user input , 392 – 393 \n Undervoltage, effect on IS equipment , 634 \n Unicity distance , 29 \n Unifi ed threat management (UTM) , 44 , 49 \n Uniform Resource Identifi ers (URI) , 446 \n SIP , 551 \n Uninterrupible power supplies (UPSs) , 144 \n Unix \n accessing standard fi les in \n mandatory locking , 70 \n rights for , 69 – 70 \n Set-ID bit , 70 \n sticky bit , 70 \n" }, { "page_number": 875, "text": "Index\n842\nUnix (Continued)\n account access control \n local fi les , 71 – 72 \n NIS , 72 \n PAM and , 72 \n authentication mechanisms for , 71 \n defi ned , 79 \n fi le systems security \n access permissions , 77 \n locate SetID fi les , 77 \n read-only partitions , 76 \n hardening \n host . See Host hardening \n systems management , 90 \n history of , 79 \n kernel space , 68 \n Linux and , 80 \n login process , 71 \n need to update , 67 – 68 \n netstat command in , 85 \n network authentication mechanisms , 73 \n noninteractive services , 72 \n organizational considerations \n separation of duties , 92 \n unannounced forced vacations , 92 \n patches and , 68 \n permissions on directories \n execute , 70 \n read and write , 70 \n SetID , 71 \n proactive defense for \n incident response preparation . See \n Incident response \n vulnerability assessment . See \n Vulnerability, assessment \n process , 84 \n process, kernel structure of , 68 , 69 \n root user access, controlling \n confi guring secure terminals , 74 \n gaining privileges with su , 74 \n sudo(1) mechanism , 76 \n using groups instead of root , 74 \n Single Unix Specifi cation , 79 – 80 \n SSH , 73 – 74 \n standards , 68 \n syslog , 88 \n system architecture \n access rights , 84 \n fi le system , 82 \n kernel , 82 \n users and groups , 82 , 84 \n system security, aims of \n authentication , 67 \n authorization , 67 \n availability , 67 \n integrity , 67 \n traditional systems \n kernel space vs. userland , 68 \n user space security , 69 \n tree , 81 \n user account protection , 71 \n userland , 68 \n Unix scanners , 391 \n Uplink encryption , 428 \n UPN . See User principal name (UPN) \n Upscoop.com , 714 \n UPSs . See Uninterrupible power supplies \n(UPSs) \n URI . See Uniform Resource Identifi ers (URI) \n URL-based identity 2.0 , 278 \n URL block method , 726 \n U.S. Department of Defense , 425 \n U.S. government \n and content fi ltering , 725 \n goals of , 775 \n U.S. Government Federal Information \nSecurity Management Act of 2002 \n(FISMA) , 755 \n Usability requirement, identity management , \n 273 – 274 \n USA PATRIOT Act , 264 \n scope and penalties , 267 \n USA PATRIOT Act of 2001 (PL 107-56) , \n 661 – 663 \n USA PATRIOT Act Titles , 662 – 663 \n USB \n devices , 389 \n storage devices, security risk , 257 \n User access control \n accounting , 51 \n authentication , 49 – 50 \n authorization , 50 – 51 \n tokens . 
See Tokens \n updating , 51 \n User agents (UA), in SIP , 551 \n User artifact analysis, hacking XP password , \n 326 – 327 \n User-centricity, identity management , 272 – 273 \n User-controlled management , 272 \n User Datagram Protocol (UDP) , 60 , 298 , \n 510 , 511 \n attacks , 154 \n protocols , 159 \n User identifier (UID) , 69 , 82 \n Userland vs. kernel space , 68 \n User-level root kit threat , 295 \n UserPolicySet , 442 \n User principal name (UPN) , 770 \n UTM system . See Unified threat management \n(UTM) \n V \n VA . See Validation authority (VA) \n Validation authority (VA) , 437 \n ValidationPolicy parameter , 442 \n ValidationTime , 442 \n Validity Screening Solutions , 8 \n VantagePoint Security , 778 \n VA tools . See Vulnerability assessment \n Vendor-neutral program , 9 – 10 \n Venn diagram , 35 \n Venyo , 715 – 716 \n Vernam cipher . See Stream cipher \n Vindex , 715 \n Virtual directories, for identity management , \n 276 – 277 \n Virtual links, communication , 95 \n Virtual memory (VM) , 48 \n Virtual private networks (VPN) , 47 , 360 \n asymmetric cryptography , 516 \n authentication \n hashing , 515 \n HMAC , 515 \n MD5 , 515 \n SHA-1 , 515 – 516 \n edge devices , 516 \n hackers and crackers , 517 \n history , 508 – 511 \n IEEE , 511 – 512 \n IETF , 511 , 512 \n overview , 507 – 508 \n passwords , 516 – 517 \n penetration test , 378 \n symmetric encryption , 516 \n types , 512 – 515 \n Viruses , 56 , 684 \n threat , 294 – 295 \n Virustotal.com , 695 \n Vishing , 557 \n Vision training and consultancy , 778 \n Visitor Location Register (VLR) , 185 , 186 \n Vista , 698 \n Date Created timestamp , 322 \n VM . See Virtual memory (VM) \n VMWare , 699 \n Voice over Internet Protocol (VoIP) , 48 , \n 346 – 347 , 349 , 360 \n basics , 551 – 553 \n denial-of-service (DoS) attack . See Denial-\nof-service (DoS) \n future trends , 560 – 564 \n IPS , 559 – 560 \n penetration test , 378 \n preventive measures , 558 – 559 \n privacy loss , 555 – 557 \n rate limiting , 560 \n" }, { "page_number": 876, "text": "Index\n843\n reconnaissance of , 553 – 554 \n social engineering , 557 – 558 \n taxonomy of threats , 553 \n XSS attacks , 557 \n VoIP . See Voice over Internet Protocol (VoIP) \n VPN . See Virtual private networks (VPN) \n Vulnerability(ies) , 54 \n classification , 611 \n defined , 606 \n IM , 459 \n mitigation cycle , 385 \n scanning tests , 54 \n of spread-spectrum wireless networks , 793 \n Vulnerability analysis , 377 \n cellular networks , 193 \n aCAT , 198 – 199 \n CAT , 195 – 198 \n eCAT , 199 – 201 \n Vulnerability assessment (VA) \n defense in depth strategy , 388 \n disclosure date , 391 – 392 \n DIY , 393 \n host-based , 91 \n network-centric , 91 \n network mapping tool , 385 – 386 \n scanner selection , 386 – 387 \n theoretical goal , 385 \n tools , 148 , 388 – 390 \n core impact , 389 \n GFI LANguard , 389 \n MBSA , 389 – 390 \n nessus , 388 – 389 \n QualysGuard , 389 \n retina , 389 \n SAINT , 389 \n sara , 389 \n X-Scan , 389 \n vs. penetration tests , 384 – 385 \n Vulnerability causes , 392 – 393 \n operating system design flaws , 392 \n password management flaws , 392 \n software bugs , 392 \n unchecked user input , 392 – 393 \n Vulnerability-scanning, appliances , 137 \n Vulnerability testing , 59 – 60 \n risk analysis , 46 – 47 \n W \n Walker, Owen Thor , 129 \n Wang, P. , 121 \n Wang, X. 
, 130 , 344 , 345 \n Wang and Reeves active watermark scheme , \n 344 \n WantBack parameter , 442 \n Warn and allow methods , 740 \n Watchdog tag , 214 \n Water damage , 632 \n W3C . See World Wide Web Consortium \n(W3C) \n W3C Platform for Privacy Preferences , 504 \n Web based attacks , 57 – 58 \n Web browser, diversity and , 698 \n Web fi ltering , 747 \n Web-fi ltering tools , 147 \n Web of Trust (WoT) , 448 \n Web pages, identity theft \n bad URL , 535 \n login page . See Login page \n vulnerability to update mechanism , 547 \n WebSense and Bluecoat , 730 \n Web server , 354 \n security policy for , 356 \n Weis, S. A. , 212 , 216 \n WEP . See Wired Equivalent Privacy (WEP) \n Westin, Alan F. , 272 \n Wetherall, D. , 129 \n White-box test , 371 \n Whitman, M. E. , 233 , 235 , 251 \n Whitten, Alma , 450 \n Wicherski, G. , 123 , 126 \n Widgets, Inc. , 384 \n Wi-Fi Protected Access (WPA) , 173 \n Wildfi res , 632 \n Wilson, M. , 248 \n Wilson, William R. , 225 \n Windows CardSpace , 283 – 284 \n Windows Server ® 2003 , 766 \n Windows 2008 Server, table of contents \nfor , 14 \n Windows Server Update Services \n(WSUS) , 389 \n Windows Vista ® \n confi guration for authentication services , \n 765 – 774 \n backup wizard , 765 \n Cred SSP , 765 – 766 \n Kerberos enhancements , 769 – 770 \n previous logon information , 773 – 774 \n restore wizard , 765 \n smartcard authentication changes , \n 770 – 773 \n TLS/SSL cryptographic enhancements , \n 766 – 769 \n Windows XP \n with smart card logon , 772 \n WinHex , 14 , 15 \n WinPcap , 166 \n Wired Equivalent Privacy (WEP) , 139 , \n 172 – 173 \n Wireless ad hoc networks \n bootstrapping in , 178 \n characteristics , 171 \n mesh networks , 171 – 172 \n sensor networks , 171 \n Wireless local area network (WLAN) , \n 170 – 171 , 242 \n access control , 243 \n availability , 244 \n confi dentiality , 243 – 244 \n data integrity , 244 \n security controls, enhancing , 244 – 246 \n Wireless mesh networks (WMN) , \n 171 – 172 \n Wireless network \n ad hoc networks \n characteristics , 171 \n mesh networks , 171 – 172 \n sensor networks , 171 \n bootstrapping . See Bootstrapping \n cellular networks , 169 \n cellular telephone networks , 170 \n LAN , 170 – 171 \n key management schemes , 178 \n classifi cation of , 179 \n D-H algorithm , 179 \n H & O algorithm , 179 – 180 \n ING algorithm , 179 \n partial distributed threshold CA \nscheme , 180 \n self-healing session key distribution , \n 181 \n self-organized key management scheme \n(PGP-A) , 180 – 181 \n penetration test , 377 \n protocols \n SPINS , 173 – 175 \n WEP , 172 – 173 \n WPA and WPA2 , 173 \n secure routing \n Adriane , 176 \n ARAN , 176 – 177 \n DSDV routing , 175 \n SEAD , 175 – 176 \n SLSP , 177 \n Wireless remote access points (AP) \n adding to fi rst IAS server , 795 \n adding to IAS server , 795 \n as RADIUS clients , 795 \n scripting of , 795 – 796 \n additional settings to secure , 797 – 798 \n confi guration , 796 \n enabling secure WLAN authentication on , \n 796 – 797 \n Wireless security , 139 – 140 \n" }, { "page_number": 877, "text": "Index\n844\n Wireless sensor network , 171 \n bootstrapping in , 178 \n SPINS, security protocols , 173 \n μ TESLA , 174 – 175 \n SNEP , 174 \n Witness testimony, eDiscovery , 311 \n WLAN authentication, enabling secure \n on access points , 796 – 797 \n WMN . See Wireless mesh networks (WMN) \n Wong, R. 
, 288 \n Workforce, and IM , 455 – 456 \n effi ciency of , 456 – 457 \n generational differences , 456 – 457 \n Work procedures and processes, IT security \nmanagement , 262 \n World Wide Web Consortium (W3C) , 449 – 450 \n Wormhole , 104 \n Worms , 56 , 684 \n threat of , 295 , 695 – 697 \n WoT . See Web of Trust (WoT) \n WPA . See Wi-Fi Protected Access (WPA) \n WPA2 , 173 \n Write access control , 109 \n Write blockers, creating forensic images \nusing software and hardware , 313 – 314 \n WSUS . See Windows Server Update Services \n(WSUS) \n WWN spoofi ng, SAN , 576 \n X \n XACML . See EXtensible Access Control \nMarkup Language (XACML) \n Xbasics , 778 \n X.509 certifi cates , 640 \n XCP . See Extended copy protection (XCP) \n XDI , 279 \n XeroBank , 737 \n Xin, L. , 345 \n Xinetd , 85 \n Xing.com , 716 \n X-KISS . See XML Key Information Service \nSpecifi cation (X-KISS) \n XKMS . See XML Key Management \nSpecifi cation (XKMS) \n X-KRSS . See XML Key Registration Service \nSpecifi cation (X-KRSS) \n XML Key Information Service Specifi cation \n(X-KISS) , 449 \n XML Key Management Specifi cation \n(XKMS) , 449 \n XML Key Registration Service Specifi cation \n(X-KRSS) , 449 \n X.509 model \n bridge certifi cation systems , 443 – 444 \n certifi cate format , 444 – 446 \n certifi cate model , 436 – 437 \n certifi cate policy extensions , 446 – 447 \n certifi cate revocation , 440 – 442 \n certifi cate validation , 439 – 440 \n history of , 436 \n implementation architectures , 437 – 439 \n modifi ed architechture , 450 \n policy extensions , 446 \n SCVP , 442 – 443 \n XOR \n chaining , 35 – 36 \n cipher , 34 – 35 \n XP, timestamp , 321 – 322 \n XRDS (Extensible Resource Description \nSequence) , 282 \n X.509 Revocation Protocols , 438 \n XRI (EXtensible Resource Identifi er) , 279 \n X-Scan , 389 \n XSS . See Cross-Site Scripting (XSS) \n Xu, J. , 130 \n X.509 V3 extensions \n authority key identifi er , 445 \n key usages , 446 \n subject alternative name , 446 \n subject key identifi er , 445 \n X.509 V1 format , 445 \n X.509 V2 format , 445 \n X.509 V3 format , 445 \n Y \n Yadis , 282 \n Yasuura, H. , 213 \n YA-TRAP (Yet Another Trivial RFID \nAuthentication Protocol) , 216 \n Yegneswaran, V. , 125 \n Yoda, K. , 344 \n Yoda and Etoh study on stepping-stone \nconnections , 344 \n Yonan, James , 507 \n Yu, T. , 504 \n Yurcik, W. , 473 \n Z \n Zarfoss, J. , 120 \n Zdziarski, J. , 62 \n Zenmap , 385 \n graphical user interface , 386 \n “ 0-day ” , 295 – 296 \n Zhang, L. , 344 , 345 \n Zhang, Y. , 344 \n Zimmermann, Philip , 448 \n ZoneAlarm Firewall , 781 \n ZoneAlarm Internet Security Suite , 782 \n ZoomInfo , 716 \n Zou, C. , 121 , 126 \n" } ] }