{ "pages": [ { "page_number": 1, "text": "" }, { "page_number": 2, "text": "Computer Communications and Networks\n" }, { "page_number": 3, "text": "The Computer Communications and Networks series is a range of textbooks, \nmonographs and handbooks. It sets out to provide students, researchers and \n nonspecialists alike with a sure grounding in current knowledge, together with \ncomprehensible access to the latest developments in computer communications and \nnetworking.\nEmphasis is placed on clear and explanatory styles that support a tutorial approach \nso that even the most complex of topics is presented in a lucid and intelligible \n manner.\nFor other titles published in this series, go to http://www.springer.com/\n" }, { "page_number": 4, "text": "Joseph Migga Kizza\nA Guide to Computer \nNetwork Security\n1 23\n" }, { "page_number": 5, "text": "CCN Series ISSN 1617-7975 \nISBN 978-1-84800-916-5 \ne-ISBN 978-1-84800-917-2\nDOI 10.1007/978-1-84800-917-2\nLibrary of Congress Control Number: 2008942999\n© Springer-Verlag London Limited 2009\nAll rights reserved. This work may not be translated or copied in whole or in part without the written \npermission of the publisher (Springer Science +Business Media, LLC, 233 Spring Street, New York, NY \n10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connec-\ntion with any form of information storage and retrieval, electronic adaptation, computer software, or by \nsimilar or dissimilar methodology now known or hereafter developed is forbidden.\nThe use in this publication of trade names, trademarks, service marks and similar terms, even if they are \nnot identifi ed as such, is not to be taken as an expression of opinion as to whether or not they are subject \nto proprietary rights.\nPrinted on acid-free paper\nspringer.com\nJoseph Migga Kizza, PhD\nUniversity of Tennessee-Chattanooga\nDepartment of Computer Science\n615 McCallie Ave.\nChattanooga TN 37403\n326 Grote Hall\nUSA\njoseph-kizza@utc.edu\nSeries Editor\nProfessor A.J. Sammes, BSc, MPhil, PhD, FBCS, CEng\nCISM Group, Cranfi eld University,\nRMCS, Shrivenham, Swindon SN6 8LA,UK\n" }, { "page_number": 6, "text": "To the Trio: Immaculate, Josephine,\nand Florence\n" }, { "page_number": 7, "text": " \nvii\nPreface \nIf we are to believe in Moore’s law, then every passing day brings new and advanced \nchanges to the technology arena. We are as amazed by miniaturization of computing \ndevices as we are amused by their speed of computation. Everything seems to be \nin fl ux and moving fast. We are also fast moving towards ubiquitous computing. To \nachieve this kind of computing landscape, new ease and seamless computing user \ninterfaces have to be developed. Believe me, if you mature and have ever program \nany digital device, you are, like me, looking forward to this brave new computing \nlandscape with anticipation.\nHowever, if history is any guide to use, we in information security, and indeed \nevery computing device user young and old, must brace themselves for a future full \nof problems. As we enter into this world of fast, small and concealable ubiquitous \ncomputing devices, we are entering fertile territory for dubious, mischievous, and \nmalicious people. We need to be on guard because, as expected, help will be slow \ncoming because fi rst, well trained and experienced personnel will still be diffi cult \nto get and those that will be found will likely be very expensive as the case is today. 
Second, the security protocols and best practices will, as is the case today, keep changing at a fast rate, which may require network administrators to change them constantly. Third, as is the case today, it will be extremely difficult to keep abreast of the many new vulnerabilities and the patches for them. In other words, the computing landscape will change for sure on one side and remain the same on the other.\nFor these reasons, we need to remain vigilant with better, if not advanced, computer and information security protocols and best practices, because the frequency of computer network attacks and the vulnerability of computer network systems will likely not abate; rather, they are likely to increase as before.\nMore effort in developing adaptive and scalable security protocols and best practices, together with massive awareness, is therefore needed to meet this growing challenge and bring the public to a level where they can be active and safe participants in the brave new world of computing.\nThis guide is a comprehensive volume touching not only on every major topic in computing and information security and assurance, but also introducing new computing technologies like wireless sensor networks, a wave of the future, where\n" }, { "page_number": 8, "text": "security is likely to be a major issue. It is intended to bring massive education and awareness of security issues and concerns in cyberspace in general and the computing world in particular, their benefits to society, the security problems and the dangers likely to be encountered by the users, and to be a pathfinder as it initiates a dialog towards developing better algorithms, protocols, and best practices that will enhance the security of computing systems in the anticipated brave new world. It does this comprehensively in four parts and twenty-two chapters. Part I gives the reader an understanding of the working of computer networks and of their security situation. Part II builds on this knowledge and exposes the reader to the prevailing security situation, based on a constant security threat. It surveys several security threats. Part III, the largest, forms the core of the guide and presents to the reader most of the best practices and solutions that are currently in use. Part IV is for projects.
In addition to the algorithms, protocols, and solutions, several products and services are given for each security item under discussion.\nIn summary, the guide attempts to achieve the following objectives:\n1. Educate the public about cyberspace security in general terms and computer systems security in particular, with reference to the Internet;\n2. Alert the public to the magnitude of computer network vulnerabilities, weaknesses, and loopholes inherent in the computer network infrastructure;\n3. Bring to the public's attention effective security solutions and best practices, expert opinions on those solutions, and the possibility of ad hoc solutions;\n4. Look at the roles legislation, regulation, and enforcement play in computer network security efforts;\n5. Finally, initiate a debate on developing effective and comprehensive algorithms, protocols, and best practices for information security.\nSince the guide covers a wide variety of security topics, algorithms, solutions, and best practices, it is intended to be both a teaching and a reference tool for all interested in learning about computer network security issues and the techniques available to prevent information systems attacks. The depth and thoroughness of the discussion and analysis of most computer network security issues, together with the security algorithms and solutions given, make the guide a unique reference source of ideas for computer network security personnel, network security policy makers, and those reading for leisure. In addition, the guide provokes the reader by raising valid legislative, legal, social, and ethical security issues, including the increasingly diminishing line between individual privacy and the need for collective and individual security.\nThe guide targets college students in computer science, information science, technology studies, library sciences, engineering, and, to a lesser extent, students in the arts and sciences who are interested in information technology. In addition, students in information management sciences will find the guide particularly helpful. Practitioners, especially those working in information-intensive areas, will likewise find the guide a good reference source. It will also be valuable to those interested in any aspect of information security and assurance and those simply wanting to become cyberspace literate.\n" }, { "page_number": 9, "text": "Book Resources\nThere are two types of exercises at the end of each chapter: easy and quickly workable exercises whose responses can be easily spotted from the preceding text, and more thought-provoking advanced exercises whose responses may require research outside the content of this book. Also, Chapter 22 is devoted to lab exercises. There are three types of lab exercises: weekly or bi-weekly assignments that can be done easily by either reading or using readily available software and hardware tools; slightly harder semester-long projects that may require extensive time, collaboration, and some research to finish successfully; and hard, open research projects that require a lot of thinking, take a lot of time, and require extensive research.\nWe have tried as much as possible, throughout the guide, to use open source software tools.
This has two consequences: one, it makes the guide affordable, keeping in mind the escalating prices of proprietary software; and two, it makes the content and related software tools last longer, because the content and the corresponding exercises and labs are not based on one particular proprietary software tool that can go off the market at any time.\nInstructor Support Materials\nAs you consider using this book, you may need to know that we have developed materials to help you with your course. The help materials for both instructors and students cover the following areas:\n• Syllabus. There is a suggested syllabus for the instructor.\n• Instructor PowerPoint slides. These are detailed enough to help the instructor, especially those teaching the course for the first time.\n• Answers to selected exercises at the end of each chapter.\n• Laboratory. Since network security is a hands-on course, students need to spend a considerable amount of time on scheduled laboratory exercises. The last chapter of the book contains several laboratory exercises and projects. The book resource center contains several more, plus updates.\n• Instructor manual. This will guide the instructor in the day-to-day job of getting materials ready for the class.\n• Student laboratory materials. Under this section, we will be continuously posting the latest laboratory exercises, software, and challenge projects.\nThese materials can be found at the publisher's website at http://www.springeronline.com and at the author's site at http://www.utc.edu/Faculty/Joseph-Kizza/\nChattanooga, Tennessee, USA\nJoseph Migga Kizza\nOctober 2008\n" }, { "page_number": 10, "text": "Contents\nPart I Understanding Computer Network Security\n1 Computer Network Fundamentals ................................................................3\n1.1 Introduction ..............................................................................................3\n1.2 Computer Network Models ......................................................................4\n1.3 Computer Network Types ........................................................................5\n1.3.1 Local Area Networks (LANs) .......................................................5\n1.3.2 Wide Area Networks (WANs) ......................................................6\n1.3.3 Metropolitan Area Networks (MANs) ..........................................6\n1.4 Data Communication Media Technology.................................................7\n1.4.1 Transmission Technology .............................................................7\n1.4.2 Transmission Media ....................................................................10\n1.5 Network Topology ..................................................................................13\n1.5.1 Mesh ...........................................................................................13\n1.5.2 Tree .............................................................................................13\n1.5.3 Bus ..............................................................................................14\n1.5.4 Star ..............................................................................................15\n1.5.5 Ring ............................................................................................15\n1.6 Network Connectivity and Protocols .....................................................16\n1.6.1 Open System Interconnection (OSI) Protocol Suite ...................18\n1.6.2 Transport 
Control Protocol/Internet Protocol \n(TCP/IP) Model ..........................................................................19\n1.7 Network Services ...................................................................................22\n1.7.1 Connection Services ...................................................................22\n1.7.2 Network Switching Services ......................................................24\n1.8 Network Connecting Devices.................................................................26\n1.8.1 LAN Connecting Devices ...........................................................26\n1.8.2 Internetworking Devices .............................................................30\n1.9 Network Technologies ............................................................................34\n1.9.1 LAN Technologies ......................................................................35\n1.9.2 WAN Technologies .....................................................................37\n1.9.3 Wireless LANs ............................................................................39\n1.10 Conclusion ..............................................................................................40\n" }, { "page_number": 11, "text": "xii \nContents\nExercises ...............................................................................................................40\nAdvanced Exercises .............................................................................................. 41\nReferences ............................................................................................................. 41\n2 Understanding Computer Network Security .............................................43\n2.1 Introduction ............................................................................................43\n2.1.1 Computer Security ......................................................................44\n2.1.2 Network Security ........................................................................45\n2.1.3 Information Security ..................................................................45\n2.2 Securing the Computer Network ...........................................................45\n2.2.1 Hardware ....................................................................................46\n2.2.2 Software .....................................................................................46\n2.3 Forms of Protection................................................................................46\n2.3.1 Access Control ............................................................................46\n2.3.2 Authentication ............................................................................48\n2.3.3 Confi dentiality ............................................................................48\n2.3.4 Integrity ......................................................................................49\n2.3.5 Nonrepudiation ...........................................................................49\n2.4 Security Standards .................................................................................50\n2.4.1 Security Standards Based on Type of Service/Industry ............. 
51\n2.4.2 Security Standards Based on Size/Implementation ....................54\n2.4.3 Security Standards Based on Interests .......................................55\n2.4.4 Best Practices in Security ...........................................................56\nExercises ...............................................................................................................58\nAdvanced Exercises ..............................................................................................58\nReferences .............................................................................................................59\nPart II Security Challenges to Computer Networks\n3 Security Threats to Computer Networks ....................................................63\n3.1 Introduction ............................................................................................63\n3.2 Sources of Security Threats ...................................................................64\n3.2.1 Design Philosophy ......................................................................65\n3.2.2 Weaknesses in Network Infrastructure and Communication\nProtocols .................................................................................65\n3.2.3 Rapid Growth of Cyberspace .....................................................68\n3.2.4 The Growth of the Hacker Community ......................................69\n3.2.5 Vulnerability in Operating System Protocol ...............................78\n3.2.6 The Invisible Security Threat – The Insider Effect ....................79\n" }, { "page_number": 12, "text": "3.2.7 Social Engineering .....................................................................79\n3.2.8 Physical Theft .............................................................................80\n3.3 Security Threat Motives .........................................................................80\n3.3.1 Terrorism ....................................................................................80\n3.3.2 Military Espionage ..................................................................... 81\n3.3.3 Economic Espionage .................................................................. 
81\n3.3.4 Targeting the National Information Infrastructure .....................82\n3.3.5 Vendetta/Revenge .......................................................................82\n3.3.6 Hate (National Origin, Gender, and Race) .................................83\n3.3.7 Notoriety .....................................................................................83\n3.3.8 Greed ..........................................................................................83\n3.3.9 Ignorance ....................................................................................83\n3.4 Security Threat Management .................................................................83\n3.4.1 Risk Assessment .........................................................................84\n3.4.2 Forensic Analysis .......................................................................84\n3.5 Security Threat Correlation ....................................................................84\n3.5.1 Threat Information Quality .........................................................85\n3.6 Security Threat Awareness .....................................................................85\nExercises ...............................................................................................................86\nAdvanced Exercises ..............................................................................................87\nReferences .............................................................................................................88\n4 Computer Network Vulnerabilities ..............................................................89\n4.1 Defi nition ...............................................................................................89 \n4.2 Sources of Vulnerabilities ......................................................................89\n4.2.1 Design Flaws ..............................................................................90\n4.2.2 Poor Security Management ........................................................93\n4.2.3 Incorrect Implementation ...........................................................94\n4.2.4 Internet Technology Vulnerability ..............................................95\n4.2.5 Changing Nature of Hacker Technologies and Activities ..........99\n4.2.6 Diffi culty of Fixing Vulnerable Systems ..................................100\n4.2.7 Limits of Effectiveness of Reactive Solutions ......................... 
101\n4.2.8 Social Engineering ...................................................................102\n4.3 Vulnerability Assessment .....................................................................103\n4.3.1 Vulnerability Assessment Services ...........................................104\n4.3.2 Advantages of Vulnerability Assessment Services ...................105\nExercises .............................................................................................................105\nAdvanced Exercises ............................................................................................106\nReferences ...........................................................................................................106\nContents \nxiii\n" }, { "page_number": 13, "text": "xiv \nContents\n5 Cyber Crimes and Hackers ........................................................................107\n5.1 Introduction ..........................................................................................107\n5.2 Cyber Crimes .......................................................................................108\n5.2.1 Ways of Executing Cyber Crimes ............................................108\n5.2.2 Cyber Criminals ....................................................................... 111\n5.3 Hackers ................................................................................................112\n5.3.1 History of Hacking ...................................................................112\n5.3.2 Types of Hackers ......................................................................115\n5.3.3 Hacker Motives ........................................................................118\n5.3.4 Hacking Topologies .................................................................. 121\n5.3.5 Hackers’ Tools of System Exploitation ....................................126\n5.3.6 Types of Attacks .......................................................................128\n5.4 Dealing with the Rising Tide of Cyber Crimes ....................................129\n5.4.1 Prevention .................................................................................129\n5.4.2 Detection ..................................................................................130\n5.4.3 Recovery ...................................................................................130\n5.5 Conclusion ...........................................................................................130\nExercises ............................................................................................................. 131\nAdvanced Exercises ............................................................................................ 131\nReferences ........................................................................................................... 
131\n6 Hostile Scripts .............................................................................................133\n6.1 Introduction ..........................................................................................133\n6.2 Introduction to the Common Gateway Interface (CGI) .......................133\n6.3 CGI Scripts in a Three-Way Handshake ..............................................134\n6.4 Server–CGI Interface ...........................................................................136\n6.5 CGI Script Security Issues ...................................................................137\n6.6 Web Script Security Issues ...................................................................138\n6.7 Dealing with the Script Security Problems ..........................................139\n6.8 Scripting Languages ............................................................................139\n6.8.1 Server-Side Scripting Languages .............................................139\n6.8.2 Client-Side Scripting Languages .............................................. 141\nExercises .............................................................................................................143\nAdvanced Exercises ............................................................................................143\nReferences ...........................................................................................................143\n7 Security Assessment, Analysis, and Assurance .........................................145\n7.1 Introduction ..........................................................................................145\n7.2 System Security Policy ........................................................................147\n" }, { "page_number": 14, "text": "Contents \nxv\n7.3 Building a Security Policy ...................................................................149\n7.3.1 Security Policy Access Rights Matrix ......................................149\n7.3.2 Policy and Procedures .............................................................. 151\n7.4 Security Requirements Specifi cation ...................................................155\n7.5 Threat Identifi cation .............................................................................156\n7.5.1 Human Factors .........................................................................156\n7.5.2 Natural Disasters ......................................................................157\n7.5.3 Infrastructure Failures ..............................................................157\n7.6 Threat Analysis ....................................................................................159\n7.6.1 Approaches to Security Threat Analysis...................................160\n7.7 Vulnerability Identifi cation and Assessment ........................................ 161\n7.7.1 Hardware .................................................................................. 
161\n7.7.2 Software ....................................................................................162\n7.7.3 Humanware ..............................................................................163\n7.7.4 Policies, Procedures, and Practices ..........................................163\n7.8 Security Certifi cation ...........................................................................165\n7.8.1 Phases of a Certifi cation Process ..............................................165\n7.8.2 Benefi ts of Security Certifi cation .............................................166\n7.9 Security Monitoring and Auditing .......................................................166\n7.9.1 Monitoring Tools ......................................................................166\n7.9.2 Type of Data Gathered ..............................................................167\n7.9.3 Analyzed Information ...............................................................167\n7.9.4 Auditing ....................................................................................168\n7.10 Products and Services ..........................................................................168\nExercises .............................................................................................................168\nAdvanced Exercises ............................................................................................169\nReferences ...........................................................................................................169\nAdditional References ........................................................................................169\nPart III Dealing with Network Security Challenges\n8 Disaster Management .................................................................................173\n8.1 Introduction ..........................................................................................173\n8.1.1 Categories of Disasters .............................................................174\n8.2 Disaster Prevention ..............................................................................175\n8.3 Disaster Response ................................................................................177\n8.4 Disaster Recovery ................................................................................177\n8.4.1 Planning for a Disaster Recovery ............................................178\n8.4.2 Procedures of Recovery ...........................................................179\n8.5 Make your Business Disaster Ready ................................................... 
181\n" }, { "page_number": 15, "text": "xvi \nContents\n8.5.1 Always Be Ready for a Disaster ..............................................182\n8.5.2 Always Backup Media .............................................................182\n8.5.3 Risk Assessment ......................................................................182\n8.6 Resources for Disaster Planning and Recovery .....................................182\n8.6.1 Local Disaster Resources .........................................................183\nExercises .............................................................................................................183\nAdvanced Exercises – Case Studies ..................................................................183\nReferences ...........................................................................................................184\n9 Access Control and Authorization .............................................................185\n9.1 Defi nitions ............................................................................................185\n9.2 Access Rights .......................................................................................185\n9.2.1 Access Control Techniques and \n Technologies ..........................................................................187\n9.3 Access Control Systems .......................................................................192\n9.3.1 Physical Access Control ...........................................................192\n9.3.2 Access Cards ............................................................................192\n9.3.3 Electronic Surveillance ............................................................193\n9.3.4 Biometrics ................................................................................194\n9.3.5 Event Monitoring .....................................................................197\n9.4 Authorization .......................................................................................197\n9.4.1 Authorization Mechanisms ......................................................198\n9.5 Types of Authorization Systems ..........................................................199\n9.5.1 Centralized ...............................................................................199\n9.5.2 Decentralized ...........................................................................200\n9.5.3 Implicit .....................................................................................200\n9.5.4 Explicit ..................................................................................... 201\n9.6 Authorization Principles ...................................................................... 201\n9.6.1 Least Privileges ........................................................................ 201\n9.6.2 Separation of Duties ................................................................. 
201\n9.7 Authorization Granularity ....................................................................202\n9.7.1 Fine Grain Authorization .........................................................202\n9.7.2 Coarse Grain Authorization .....................................................202\n9.8 Web Access and Authorization .............................................................203\nExercises .............................................................................................................203\nAdvanced Exercises ............................................................................................204\nReferences ...........................................................................................................204\n" }, { "page_number": 16, "text": "Contents \nxvii\n10 Authentication ............................................................................................207\n10.1 Defi nition ............................................................................................207\n10.2 Multiple Factors and Effectiveness of Authentication .......................208\n10.3 Authentication Elements ....................................................................210\n 10.3.1 Person or Group Seeking Authentication ..............................210\n 10.3.2 Distinguishing Characteristics for Authentication ................210\n 10.3.3 The Authenticator .................................................................. 211\n 10.3.4 The Authentication Mechanism ............................................ 211\n 10.3.5 Access Control Mechanism ...................................................212\n10.4 Types of Authentication......................................................................212\n 10.4.1 Nonrepudiable Authentication ..............................................212\n 10.4.2 Repudiable Authentication ....................................................213\n10.5 Authentication Methods .....................................................................213\n 10.5.1 Password Authentication .......................................................214\n 10.5.2 Public-Key Authentication ....................................................216\n 10.5.3 Remote Authentication ..........................................................220\n 10.5.4 Anonymous Authentication ...................................................222\n 10.5.5 Digital Signature-Based Authentication ...............................222\n 10.5.6 Wireless Authentication ........................................................223\n10.6 Developing an Authentication Policy .................................................223\nExercises .............................................................................................................224\nAdvanced Exercises ............................................................................................225\nReferences ...........................................................................................................225\n11 Cryptography .............................................................................................227\n11.1 Defi nition ............................................................................................227\n 11.1.1 Block Ciphers ........................................................................229\n11.2 Symmetric Encryption ........................................................................230\n 11.2.1 Symmetric Encryption Algorithms ....................................... 
231\n 11.2.2 Problems with Symmetric Encryption ..................................233\n11.3 Public Key Encryption .......................................................................233\n 11.3.1 Public Key Encryption Algorithms .......................................236\n 11.3.2 Problems with Public Key Encryption ..................................236\n 11.3.3 Public Key Encryption Services ...........................................236\n11.4 Enhancing Security: Combining Symmetric and Public\nKey Encryptions .............................................................................237\n11.5 Key Management: Generation, Transportation, and Distribution ......237\n 11.5.1 The Key Exchange Problem ..................................................237\n 11.5.2 Key Distribution Centers (KDCs) .........................................238\n 11.5.3 Public Key Management .......................................................240\n 11.5.4 Key Escrow ...........................................................................242\n" }, { "page_number": 17, "text": "xviii \nContents\n11.6 Public Key Infrastructure (PKI) ...........................................................243\n11.6.1 Certifi cates ..............................................................................244\n11.6.2 Certifi cate Authority ...............................................................244\n11.6.3 Registration Authority (RA) ...................................................244\n11.6.4 Lightweight Directory Access Protocols (LDAP) ..................244\n11.6.5 Role of Cryptography in Communication ..............................245\n11.7 Hash Function ......................................................................................245\n11.8 Digital Signatures ................................................................................246\nExercises .............................................................................................................247\nAdvanced Exercises ............................................................................................248\nReferences ...........................................................................................................248\n12 Firewalls ......................................................................................................249\n 12.1 Defi nition ...........................................................................................249\n 12.2 Types of Firewalls .............................................................................252\n 12.2.1 Packet Inspection Firewalls .................................................253\n 12.2.2 Application Proxy Server: Filtering Based\non Known Services ..........................................................257\n 12.2.3 Virtual Private Network (VPN) Firewalls ............................ 
261\n 12.2.4 Small Offi ce or Home (SOHO) Firewalls ............................262\n 12.3 Confi guration and Implementation of a Firewall ..............................263\n 12.4 The Demilitarized Zone (DMZ) ........................................................264\n 12.4.1 Scalability and Increasing Security in a DMZ .....................266\n 12.5 Improving Security Through the Firewall .........................................267\n 12.6 Firewall Forensics .............................................................................268\n 12.7 Firewall Services and Limitations .....................................................269\n 12.7.1 Firewall Services ..................................................................269\n 12.7.2 Limitations of Firewalls .......................................................269\nExercises .............................................................................................................270\nAdvanced Exercises ............................................................................................270\nReferences ........................................................................................................... 271\n13 System Intrusion Detection and Prevention ............................................273\n 13.1 Defi nition ...........................................................................................273\n 13.2 Intrusion Detection ............................................................................273\n13.2.1 The System Intrusion Process ................................................274\n13.2.2 The Dangers of System Intrusions .........................................275\n" }, { "page_number": 18, "text": "Contents \nxix\n13.3 Intrusion Detection Systems (IDSs) ....................................................276\n13.3.1 Anomaly Detection .................................................................277\n13.3.2 Misuse Detection ....................................................................279\n13.4 Types of Intrusion Detection Systems .................................................279\n13.4.1 Network-Based Intrusion Detection Systems (NIDSs) ..........280\n13.4.2 Host-Based Intrusion Detection Systems (HIDSs) ................285\n13.4.3 The Hybrid Intrusion Detection System .................................287\n13.5 The Changing Nature of IDS Tools .....................................................287\n13.6 Other Types of Intrusion Detection Systems .......................................288\n13.6.1 System Integrity Verifi ers (SIVs) ...........................................288\n13.6.2 Log File Monitors (LFM) .......................................................288\n13.6.3 Honeypots...............................................................................288\n13.7 Response to System Intrusion ..............................................................290\n13.7.1 Incident Response Team .........................................................290\n13.7.2 IDS Logs as Evidence ............................................................ 291\n13.8 Challenges to Intrusion Detection Systems ......................................... 
291\n13.8.1 Deploying IDS in Switched Environments ............................292\n13.9 Implementing an Intrusion Detection System .....................................292\n13.10 Intrusion Prevention Systems (IPSs) ...................................................293\n13.10.1 Network-Based Intrusion Prevention Systems (NIPSs) .......293\n13.10.2 Host-Based Intrusion Prevention Systems (HIPSs) .............295\n13.11 Intrusion Detection Tools .....................................................................295\nExercises .............................................................................................................297\nAdvanced Exercises ............................................................................................297\nReferences ...........................................................................................................298\n14 Computer and Network Forensics ............................................................299\n 14.1 Defi nition ...........................................................................................299\n 14.2 Computer Forensics ...........................................................................300\n 14.2.1 History of Computer Forensics ............................................ 301\n 14.2.2 Elements of Computer Forensics .........................................302\n 14.2.3 Investigative Procedures ......................................................303\n 14.2.4 Analysis of Evidence ............................................................309\n 14.3 Network Forensics .............................................................................315\n 14.3.1 Intrusion Analysis ................................................................316\n 14.3.2 Damage Assessment ............................................................. 321\n 14.4 Forensics Tools .................................................................................. 321\n 14.4.1 Computer Forensic Tools .....................................................322\n 14.4.2 Network Forensic Tools .......................................................326\nExercises .............................................................................................................327\n" }, { "page_number": 19, "text": "xx \nContents\nAdvanced Exercises ............................................................................................328\nReferences ...........................................................................................................328\n15 Virus and Content Filtering ...................................................................... 331\n 15.1 Defi nition ........................................................................................... 331\n 15.2 Scanning, Filtering, and Blocking ..................................................... 
331\n 15.2.1 Content Scanning .................................................................332\n 15.2.2 Inclusion Filtering ................................................................332\n 15.2.3 Exclusion Filtering ...............................................................333\n 15.2.4 Other Types of Content Filtering .........................................333\n 15.2.5 Location of Content Filters ..................................................335\n 15.3 Virus Filtering ....................................................................................336\n 15.3.1 Viruses ..................................................................................336\n 15.4 Content Filtering ................................................................................344\n 15.4.1 Application Level Filtering ..................................................344\n 15.4.2 Packet-Level Filtering and Blocking ...................................346\n 15.4.3 Filtered Material ...................................................................347\n 15.5 Spam ..................................................................................................348\nExercises .............................................................................................................350\nAdvanced Exercises ............................................................................................350\nReferences ...........................................................................................................350\n16 Standardization and Security Criteria: Security Evaluation \nof Computer Products ............................................................................... 351\n 16.1 Introduction ....................................................................................... 351\n 16.2 Product Standardization ....................................................................352\n 16.2.1 Need for the Standardization of (Security) \n Products .............................................................................352\n 16.2.2 Common Computer Product Standards ...............................353\n 16.3 Security Evaluations ..........................................................................354\n 16.3.1 Purpose of Evaluation ..........................................................354\n 16.3.2 Security Evaluation Criteria .................................................354\n 16.3.3 Basic Elements of an Evaluation .........................................355\n 16.3.4 Outcomes/Benefi ts ...............................................................355\n 16.4 Major Security Evaluation Criteria ...................................................357\n 16.4.1 Common Criteria (CC) ........................................................357\n 16.4.2 FIPS......................................................................................358\n 16.4.3 The Orange Book/TCSEC ...................................................358\n" }, { "page_number": 20, "text": "Contents \nxxi\n 16.4.4 Information Technology Security Evaluation \nCriteria (ITSEC) ................................................................. 361\n 16.4.5 The Trusted Network Interpretation (TNI): \n The Red Book .................................................................. 361\n 16.5 Does Evaluation Mean Security? 
......................................................362\nExercises .............................................................................................................362\nAdvanced Exercises ............................................................................................363\nReferences ...........................................................................................................363\n17 Computer Network Security Protocols ....................................................365\n 17.1 Introduction .......................................................................................365\n 17.2 Application Level Security ................................................................366\n 17.2.1 Pretty Good Privacy (PGP) ..................................................368\n 17.2.2 Secure/Multipurpose Internet Mail Extension \n(S/MIME) .........................................................................368\n 17.2.3 Secure-HTTP (S-HTTP) ......................................................369\n 17.2.4 Hypertext Transfer Protocol over Secure Socket Layer \n(HTTPS) ...........................................................................373\n 17.2.5 Secure Electronic Transactions (SET) .................................373\n 17.2.6 Kerberos ...............................................................................375\n 17.3 Security in the Transport Layer .........................................................378\n 17.3.1 Secure Socket Layer (SSL) ..................................................378\n 17.3.2 Transport Layer Security (TLS) ...........................................382\n 17.4 Security in the Network Layer ..........................................................382\n 17.4.1 Internet Protocol Security (IPSec) .......................................382\n 17.4.2 Virtual Private Networks (VPN) ..........................................387\n 17.5 Security in the Link Layer and over LANS ...................................... 391\n 17.5.1 Point-to-Point Protocol (PPP) .............................................. 
391\n 17.5.2 Remote Authentication Dial-In User Service \n(RADIUS) ........................................................................392\n 17.5.3 Terminal Access Controller Access Control System \n(TACACS + ) ....................................................................394\nExercises .............................................................................................................394\nAdvanced Exercises ............................................................................................395\nReferences ...........................................................................................................395\n" }, { "page_number": 21, "text": "xxii \nContents\n18 Security in Wireless Networks and Devices .............................................397\n 18.1 Introduction .......................................................................................397\n 18.2 Cellular Wireless Communication Network Infrastructure ...............397\n 18.2.1 Development of Cellular Technology ..................................400\n 18.2.2 Limited and Fixed Wireless Communication \nNetworks ..........................................................................404\n 18.3 Wireless LAN (WLAN) or Wireless Fidelity (Wi-Fi) .......................406\n 18.3.1 WLAN (Wi-Fi) Technology .................................................406\n 18.3.2 Mobile IP and Wireless Application Protocol \n(WAP) ..............................................................................407\n18.4 Standards for Wireless Networks .......................................................410\n 18.4.1 The IEEE 802.11 .................................................................410\n 18.4.2 Bluetooth .............................................................................. 411\n 18.5 Security in Wireless Networks ..........................................................413\n 18.5.1 WLANs Security Concerns ..................................................413\n 18.5.2 Best Practices for Wi-Fi Security .........................................419\n 18.5.3 Hope on the Horizon for WEP .............................................420\nExercises .............................................................................................................420\nAdvanced Exercises ............................................................................................ 
421\nReferences ...........................................................................................................422\n19 Security in Sensor Networks .....................................................................423\n 19.1 Introduction .......................................................................................423\n 19.2 The Growth of Sensor Networks .......................................................424\n 19.3 Design Factors in Sensor Networks ..................................................425\n 19.3.1 Routing .................................................................................425\n 19.3.2 Power Consumption .............................................................428\n 19.3.3 Fault Tolerance .....................................................................428\n 19.3.4 Scalability ............................................................................428\n 19.3.5 Product Costs .......................................................................428\n 19.3.6 Nature of Hardware Deployed .............................................428\n 19.3.7 Topology of Sensor Networks ..............................................429\n 19.3.8 Transmission Media .............................................................429\n 19.4 Security in Sensor Networks .............................................................429\n 19.4.1 Security Challenges .............................................................429\n 19.4.2 Sensor Network Vulnerabilities and Attacks ....................... 431\n 19.4.3 Securing Sensor Networks ...................................................432\n 19.5 Security Mechanisms and Best Practices for Sensor \nNetworks .......................................................................................433\n" }, { "page_number": 22, "text": "Contents \nxxiii\n 19.6 Trends in Sensor Network Security Research ...................................434\n 19.6.1 Cryptography .......................................................................435\n 19.6.2 Key Management .................................................................435\n 19.6.3 Confi dentiality, Authentication, and Freshness ....................436\n 19.6.4 Resilience to Capture ...........................................................436\nExercises .............................................................................................................437\nAdvanced Exercises ............................................................................................437\nReferences ...........................................................................................................438\n20 Other Efforts to Secure Information and Computer Networks ............439\n 20.1 Introduction .......................................................................................439\n 20.2 Legislation .........................................................................................439\n 20.3 Regulation .........................................................................................440\n 20.4 Self-Regulation ..................................................................................440\n 20.4.1 Hardware-Based Self-Regulation ........................................ 441\n 20.4.2 Software-Based Self-Regulation .......................................... 
441\n 20.5 Education ...........................................................................................442\n 20.5.1 Focused Education ...............................................................443\n 20.5.2 Mass Education ....................................................................444\n 20.6 Reporting Centers ..............................................................................444\n 20.7 Market Forces ....................................................................................444\n 20.8 Activism .............................................................................................445\n 20.8.1 Advocacy ..............................................................................445\n 20.8.2 Hotlines ................................................................................446\nExercises .............................................................................................................446\nAdvanced Exercises ............................................................................................447\nReferences ...........................................................................................................447\n21 Security Beyond Computer Networks: Information Assurance ............449\n 21.1 Introduction .......................................................................................449\n 21.2 Collective Security Initiatives and Best Practices .............................450\n 21.2.1 The U.S. National Strategy to Secure Cyberspace...............450\n 21.2.2 Council of Europe Convention on Cyber Crime ..................452\nReferences ...........................................................................................................453\n" }, { "page_number": 23, "text": "Part IV Projects\n22 Projects ........................................................................................................457\n 22.1 Introduction .........................................................................................457\n 22.2 Part I: Weekly/Biweekly Laboratory Assignments .............................457\n 22.3 Part II: Semester Projects .................................................................... 461\n 22.3.1 Intrusion Detection Systems .................................................. 
461\n 22.3.2 Scanning Tools for System Vulnerabilities ...........................464\n 22.4 The Following Tools Are Used to Enhance Security in Web \nApplications .................................................................................466\n 22.4.1 Public Key Infrastructure ......................................................466\n 22.5 Part III: Research Projects ...................................................................467\n 22.5.1 Consensus Defense ................................................................467\n 22.5.2 Specialized Security ..............................................................467\n 22.5.3 Protecting an Extended Network ...........................................467\n 22.5.4 Automated Vulnerability Reporting ......................................467\n 22.5.5 Turn-Key Product for Network Security Testing ..................468\n 22.5.6 The Role of Local Networks in the Defense of the National \nCritical Infrastructure .......................................................468\n 22.5.7 Enterprise VPN Security .......................................................468\n 22.5.8 Perimeter Security .................................................................469\n 22.5.9 Enterprise Security ................................................................469\n 22.5.10 Password Security – Investigating the Weaknesses ..............469\nIndex .................................................................................................................... 471\nxxiv \nContents\n" }, { "page_number": 24, "text": "Part I\nUnderstanding Computer \nNetwork Security\n" }, { "page_number": 25, "text": "J.M. Kizza, A Guide to Computer Network Security, Computer Communications and \n­Networks, DOI 10.1007/978-1-84800-917-2_1, © Springer-Verlag London Limited 2009\n\b\n3\nChapter 1\nComputer Network Fundamentals\n1.1 Introduction\nThe basic ideas in all types of communication are that there must be three ingre­\ndients for the communication to be effective. First, there must be two entities, \ndubbed a sender and a receiver. These two must have something they need to share. \nSecond, there must be a medium through which the sharable item is channeled. \nThis is the transmission medium. Finally, there must be an agreed-on set of com­\nmunication rules or protocols. These three apply to every category or structure of \ncommunication.\nIn this chapter, we will focus on these three components in a computer network. \nBut what is a computer network? A computer network is a distributed system con­\nsisting of loosely coupled computers and other devices. Any two of these devices, \nwhich we will from now on refer to as network elements or transmitting elements \nwithout loss of generality, can communicate with each other through a communica­\ntion medium. In order for these connected devices to be considered a communicat­\ning network, there must be a set of communicating rules or protocols each device \nin the network must follow to communicate with another device in the network. \nThe resulting combination consisting of hardware and software is a computer com­\nmunication network or computer network in short. 
Figure 1.1 shows a computer \nnetwork.\nThe hardware component is made of network elements consisting of a collec­\ntion of nodes that include the end systems commonly called hosts and intermediate \nswitching elements that include hubs, bridges, routers, and gateways that, without \nloss of generality, we will call network elements.\nNetwork elements may own resources individually, that is locally or globally. \nNetwork software consists of all application programs and network protocols that \nare used to synchronize, coordinate, and bring about the sharing and exchange of \ndata among the network elements. Network software also makes the sharing of \nexpensive resources in the network possible. Network elements, network software, \nand users all work together so that individual users can exchange messages and \nshare resources on other systems that are not readily available locally. The network \nelements, together with their resources, may be of diverse hardware technologies \n" }, { "page_number": 26, "text": "4\b\n1  Computer Network Fundamentals\nand the software may be as different as possible, but the whole combination must \nwork together in unison.\nInternetworking technology enables multiple, diverse underlying hardware tech­\nnologies and different software regimes to interconnect heterogeneous networks \nand bring them to communicate smoothly. The smooth working of any computer \ncommunication network is achieved through the low-level mechanisms provided \nby the network elements and high-level communication facilities provided by the \nsoftware running on the communicating elements. Before we discuss the working \nof these networks, let us first look at the different types of networks.\n1.2  Computer Network Models\nThere are several configuration models that form a computer network. The most \ncommon of these are the centralized and distributed models. In a centralized \nmodel, several computers and devices are interconnected, and can talk to each \nother. However, there is only one central computer, called the master, through \nwhich all correspondence must take place. Dependent computers, called surro­\ngates, may have reduced local resources, such as memory, and sharable global \nresources are controlled by the master at the center. Unlike the centralized \nmodel, however, the distributed network consists of loosely coupled comput­\ners interconnected by a communication network consisting of connecting ele­\nments and communication channels. The computers themselves may own their \nresources locally or may request resources from a remote computer. These com­\nputers are known by a string of names, including host, client, or node. If a host \nhas resources that other hosts need, then that host is known as a server. Commu­\nnication and sharing of resources are not controlled by the central computer but \nare arranged between any two communicating elements in the network. Figures \n1.2 and 1.3 show a centralized network model and a distributed network model, \nrespectively.\nFig. 1.1  A Computer Network\nEthernet\nLaptop computer\nLaptop computer\nWorkstation\nLaser printer\nIBM compatible\n" }, { "page_number": 27, "text": "1.3  Computer Network Types\b\n5\n1.3  Computer Network Types\nComputer networks come in different sizes. Each network is a cluster of network \nelements and their resources. The size of the cluster determines the network type. 
\nThere are, in general, two main network types: the local area network (LAN) and \nwide area network (WAN).\n1.3.1  Local Area Networks (LANs)\nA computer network with two or more computers or clusters of network and their \nresources connected by a communication medium sharing communication proto­\ncols and confined in a small geographical area, such as a building floor, a building, \nFig. 1.2  A Centralized network model\nSurrogate Computer\nSurrogate Computer\nSurrogate Printer\nSurrogate Laptop\nServer/Master\nFig. 1.3  A Distributed network model\nLaptop computer\nWorkstation\nComputer\n Mac II\nLaptop computer\n" }, { "page_number": 28, "text": "6\b\n1  Computer Network Fundamentals\nor a few adjacent buildings, is called a local area network (LAN). The advantage \nof a LAN is that all network elements are close together so the communication \nlinks maintain a higher speed of data movement. Also, because of the proximity of \nthe communicating elements, high-cost and high quality communicating elements \ncan be used to deliver better service and high reliability. Figure 1.4 shows a LAN \n­network.\n1.3.2  Wide Area Networks (WANs)\nA wide area network (WAN), on the other hand, is a network made up of one or \nmore clusters of network elements and their resources but instead of being con­\nfined to a small area, the elements of the clusters or the clusters themselves are \nscattered over a wide geographical area as in a region of a country or across the \nwhole country, several countries, or the entire globe like the Internet for example. \nSome advantages of a WAN include distributing services to a wider community and \navailability of a wide array of both hardware and software resources that may not \nbe available in a LAN. However, because of the large geographical areas covered \nby WANs, communication media are slow and often unreliable. Figure 1.5 shows \na WAN network.\n1.3.3  Metropolitan Area Networks (MANs)\nBetween the LAN and WAN, there is also a middle network called the metropolitan \narea network (MAN) because it covers a slightly wider area than the LAN but not \nso wide as to be considered a WAN. Civic networks that cover a city or part of a city \nare a good example of a MAN. MANs are rarely talked about because they are quiet \noften overshadowed by cousin LAN to the left and cousin WAN to the right.\nFig. 1.4  A LAN Network\nEthernet\n IBM compatible\n Laptop computer\n Scanner\nWorkstation\nLaptop computer\nLaser printer\n" }, { "page_number": 29, "text": "1.4  Data Communication Media Technology\b\n7\n1.4  Data Communication Media Technology\nThe performance of a network type depends greatly on the transmission technology \nand media used in the network. Let us look at these two.\n1.4.1  Transmission Technology\nThe media through which information has to be transmitted determine the signal to \nbe used. Some media permit only analog signals. Some allow both analog and digi­\ntal. Therefore, depending on the media type involved and other considerations, the \ninput data can be represented as either digital or analog signal. In an analog format, \ndata is sent as continuous electromagnetic waves on an interval representing things \nsuch as voice and video and propagated over a variety of media that may include \ncopper wires, twisted coaxial pair or cable, fiber optics, or wireless. We will discuss \nthese media soon. 
In a digital format, on the other hand, data is sent as a digital \nsignal, a sequence of voltage pulses that can be represented as a stream of binary \nbits. Both analog and digital data can be propagated and many times represented as \neither analog or digital.\nTransmission itself is the propagation and processing of data signals between \nnetwork elements. The concept of representation of data for transmission, either as \nanalog or digital signal, is called an encoding scheme. Encoded data is then trans­\nmitted over a suitable transmission medium that connects all network elements. \nThere are two encoding schemes, analog and digital. Analog encoding propagates \nanalog signals representing analog data such as sound waves and voice data. Digital \nencoding, on the other hand, propagates digital signals representing either an analog \nor a digital signal representing digital data of binary streams by two voltage levels. \nFig. 1.5  A WAN Network\nServer\nComputer\nLaptop\nLaptop\nRouter\nRouter\nLaptop\nPrinter\nInternet\nHub\nServer\n" }, { "page_number": 30, "text": "8\b\n1  Computer Network Fundamentals\nSince our interest in this book is in digital networks, we will focus on the encoding \nof digital data.\n1.4.1.1  Analog Encoding of Digital Data\nRecall that digital information is in the form of 1s or 0s. To send this information \nover some analog medium such as the telephone line, for example, which has lim­\nited bandwidth, digital data needs to be encoded using modulation and demodula­\ntion to produce analog signals. The encoding uses a continuous oscillating wave, \nusually a sine wave, with a constant frequency signal called a carrier signal. The \ncarrier has three modulation characteristics: amplitude, frequency, and phase shift. \nThe scheme then uses a modem, a modulation–demodulation pair, to modulate and \ndemodulate the data signal based on any one of the three carrier characteristics or a \ncombination. The resulting wave is between a range of frequencies on both sides of \nthe carrier as shown below [1]:\nAmplitude\n• \n modulation represents each binary value by a different amplitude of \nthe carrier frequency. The absence of or low carrier frequency may represent \na 0 and any other frequency then represents a 1. But this is a rather inefficient \nmodulation technique and is therefore used only at low frequencies up to 1200 bps \nin voice grade lines.\nFrequency\n• \n modulation also represents the two binary values by two different \nfrequencies close to the frequency of the underlying carrier. Higher frequencies \nrepresent a 1 and low frequencies represent a 0. The scheme is less susceptible \nto errors.\nPhase shift\n• \n modulation changes the timing of the carrier wave, shifting the \ncarrier phase to encode the data. A 1 is encoded as a change in phase by 180 \ndegrees and a 0 may be encoded as a 0 change in phase of a carrier signal. This \nis the most efficient scheme of the three and it can reach a transmission rate of \nup to 9600 bps.\n1.4.1.2  Digital Encoding of Digital Data\nIn this encoding scheme, which offers the most common and easiest way to transmit \ndigital signals, two binary digits are used to represent two different voltages. Within \na computer, these voltages are commonly 0 volt and 5 volts. Another procedure uses \ntwo representation codes: nonreturn to zero level (NRZ-L), in which negative volt­\nage represents binary one and positive voltage represents binary zero, and nonreturn \nto zero, invert on ones (NRZ-I). See Figs. 
1.6 and 1.7 for an example of these two codes. In NRZ-I, whenever a 1 occurs, a transition from one voltage level to the other is used to signal it. One problem with NRZ signaling techniques is the requirement of perfect synchronization between the receiver and transmitter clocks. This is, however, reduced by sending a separate clock signal. There are yet other representations, such as the Manchester and differential Manchester codes, which encode clock information along with the data.

Fig. 1.6  NRZ-L (nonreturn to zero, level) representation code.
Fig. 1.7  NRZ-I (nonreturn to zero, invert on ones) representation code.

One may wonder why we go through the hassle of digital encoding and transmission. There are several advantages over its cousin, analog encoding. These include the following:
•  Plummeting costs of digital circuitry
•  More efficient integration of voice, video, text, and image
•  Reduction of noise and other signal impairment because of the use of repeaters
•  Better utilization of channel capacity with digital techniques
•  Better encryption, and hence better security, than in analog transmission

1.4.1.3  Multiplexing of Transmission Signals
Quite often during the transmission of data over a network medium, the volume of transmitted data may far exceed the capacity of the medium. Whenever this happens, it may be possible to make multiple signal carriers share a transmission medium. This is referred to as multiplexing. There are two ways in which multiplexing can be achieved: time-division multiplexing (TDM) and frequency-division multiplexing (FDM).

In FDM, all data channels are first converted to analog form. Since a number of signals can be carried on a carrier, each analog signal is then modulated by a separate and different carrier frequency, which makes it possible to recover each signal during the demultiplexing process. The frequencies are then bundled on the carrier. At the receiving end, the demultiplexer can select the desired carrier signal and use it to extract the data signal for that channel in such a way that the bandwidths do not overlap. FDM has the advantage of supporting full-duplex communication.

TDM, on the other hand, works by dividing the channel into time slots that are allocated to the data streams before they are transmitted. At both ends of the transmission, if the sender and receiver agree on the time-slot assignments, then the receiver can easily recover and reconstruct the original data streams. So multiple digital signals can be carried on one carrier by interleaving portions of each signal in time.

1.4.2  Transmission Media
As we have observed above, in any form of communication, there must be a medium through which the communication can take place. So network elements in a network need a medium in order to communicate. No network can function without a transmission medium because there would be no connection between the transmitting elements. The transmission medium plays a vital role in the performance of the network. In total, the characteristic quality, dependability, and overall performance of a network depend heavily on its transmission medium.
The transmission medium also determines a network's capacity in realizing the expected network traffic, the reliability of the network's availability, the size of the network in terms of the distance covered, and the transmission rate. Network transmission media can be either wired or wireless.

1.4.2.1  Wired Transmission Media
Wired transmission media are used in fixed networks, physically connecting every network element. There are different types of physical media, the most common of which are copper wires, twisted pair, coaxial cables, and optical fibers.

Copper wires have traditionally been used in communication because of their low resistance to electrical current, which allows signals to travel farther. But copper wires suffer interference from electromagnetic energy in the environment, and because of this, they must always be insulated.

Twisted pair is a pair of wires consisting of insulated copper wires, each wrapped around the other, forming frequent and numerous twists. Together, the twisted, insulated copper wires act as a full-duplex communication link. The twisting of the wires reduces the sensitivity of the cable to electromagnetic interference and also reduces the radiation of radio frequency noise that may interfere with nearby cables and electronic components. To increase the capacity of the transmitting medium, more than one pair of the twisted wires may be bundled together in a protective coating. Because twisted pairs were far less expensive, easy to install, and carried voice data with high quality, they were widely used in telephone networks. However, because they are poor in upward scalability in transmission rate, distance, and bandwidth in LANs, twisted pair technology has been abandoned in favor of other technologies. Figure 1.8 shows a twisted pair.

Coaxial cables are dual-conductor cables, with an inner conductor in the core of the cable protected by an insulation layer and an outer conductor surrounding the insulation. These cables are called coaxial because the two conductors share a common axis. The inner core conductor is usually made of solid copper wire, but at times it can also be made of stranded wire. The outer conductor, commonly made of braided wires but sometimes of metallic foil or both, forms a protective tube around the inner conductor. This outer conductor is itself protected by another outer coating called the sheath. Figure 1.9 shows a coaxial cable. Coaxial cables are commonly used in television transmissions. Unlike twisted pairs, coaxial cables can be used over long distances. There are two types of coaxial cables: thinnet, a light and flexible cabling medium that is inexpensive and easy to install, and thicknet, which is thicker, harder to break, and can carry more signals over a longer distance than thinnet.

Optical fiber is a small medium made up of glass and plastics that conducts an optical ray. This is the most ideal cable for data transmission because it can accommodate extremely high bandwidths and has few of the problems with electromagnetic interference that coaxial cables suffer from. It can also support cabling runs of several kilometers. The two disadvantages of fiber-optic cables, however, are cost and installation difficulty. As shown in Fig. 1.10, a simple optical fiber has a central core made up of thin fibers of glass or plastics.
The fibers are protected by a glass or plastic coating called a cladding. The cladding, though made up of the same materials as the core, has different properties that give it the capacity to reflect back the core rays that hit it tangentially. The cladding itself is encased in a plastic jacket. The jacket protects the inner fiber from external abuse such as bending and abrasion. Optical fiber cables transmit data signals by first converting them into light signals. The transmitted light is emitted at the source from either a light-emitting diode (LED) or an injection laser diode (ILD). At the receiving end, the emitted rays are received by a photodetector that converts them back to the original form.

Fig. 1.9  A coaxial cable (inner conductor, insulation, outer conductor, and outer sheath).
Fig. 1.10  A simple optical fiber (core, cladding, and jacket).

1.4.2.2  Wireless Communication
Wireless communication and wireless networks have evolved as a result of rapid developments in communication technologies, computing, and people's need for mobility. Wireless networks fall into one of the following three categories depending on distance:
•  Restricted Proximity Network: This network involves local area networks (LANs) with a mixture of fixed and wireless devices.
•  Intermediate/Extended Network: This wireless network is actually made up of two fixed LAN components joined together by a wireless component. The bridge may connect LANs in two nearby buildings or even farther apart.
•  Mobile Network: This is a fully wireless network connecting two network elements. One of these elements is usually a mobile unit that connects to the home (fixed) network using cellular or satellite technology.
These three types of wireless networks are connected using basic media such as infrared, laser beam, narrow-band and spread-spectrum radio, microwave, and satellite communication [2].

Infrared: During an infrared transmission, one network element remotely emits and transmits pulses of infrared light that carry coded instructions to the receiving network element. As long as there is no object to stop the transmitted light, the receiver gets the instruction. Infrared is used most effectively in a small confined area, within 100 feet; for example, a television remote communicating with the television set. In a confined area such as this, infrared is relatively fast and can support bandwidths of up to 10 Mbps.

High-Frequency Radio: During a radio communication, high-frequency electromagnetic radio waves, or radio frequency (RF) transmissions, are generated by the transmitter and picked up by the receiver. Because the range of the radio frequency band is greater than that of infrared, mobile computing elements can communicate over a limited area without both transmitter and receiver being placed along a direct line of sight; the signal can bounce off walls, buildings, and atmospheric objects. RF transmissions are very good for long distances when combined with satellites that relay the radio waves.

Microwave: Microwaves are a higher-frequency version of radio waves, but their transmissions, unlike those of radio, can be focused in a single direction.
\nMicrowave transmissions use a pair of parabolic antennas that produce and receive \nnarrow, but highly directional signals. To be sensitive to signals, both the transmit­\nting and receiving antennas must focus within a narrow area. Because of this, both \nthe transmitting and receiving antennas must be carefully adjusted to align the trans­\nmitted signal to the receiver. Microwave communication has two forms: terrestrial, \nwhen it is near ground, and satellite microwave. The frequencies and technologies \nemployed by these two forms are similar but with notably distinct differences.\nLaser: Laser light can be used to carry data for several thousand yards through \nair and optical fibers. But this is possible only if there are no obstacles in the line \nof sight. Lasers can be used in many of the same situations as microwaves, and like \nmicrowaves, laser beams must be refracted when used over long distances.\n1.5  Network Topology\nComputer networks, whether LANs, MANs, or WANs, are constructed based on a \ntopology. The are several topologies including the following popular ones.\n1.5.1  Mesh\nA mesh topology allows multiple access links between network elements, unlike \nother types of topologies. The multiplicity of access links between the network \nelements offers an advantage in network reliability because whenever one network \nelement fails, the network does not cease operations; it simply finds a bypass to the \nfailed element and the network continues to function. Mesh topology is most often \napplied in MAN networks. Figure 1.11 shows a mesh network.\n1.5.2  Tree\nA more common type of network topology is the tree topology. In the tree topology, \nnetwork elements are put in a hierarchical structure in which the most predomi­\nnant element is called the root of the tree and all other elements in the network \nshare a child–parent relationship. As in ordinary, though inverted trees, there are no \nclosed loops. So dealing with failures of network elements presents complications \ndepending on the position of the failed element in the structure. For example, in a \ndeeply rooted tree, if the root element fails, the network automatically ruptures and \nsplits into two parts. The two parts cannot communicate with each other. The func­\ntioning of the network as a unit is, therefore, fatally curtailed. Figure 1.12 shows a \nnetwork using a tree topology.\n" }, { "page_number": 36, "text": "14\b\n1  Computer Network Fundamentals\n1.5.3  Bus\nA more popular topology, especially for LANs, is the bus topology. Elements in a net­\nwork using a bus topology always share a bus and, therefore, have equal access to all \nLAN resources. Every network element has full-duplex connections to the transmit­\nting medium which allows every element on the bus to send and receive data. Because \neach computing element is directly attached to the transmitting medium, a transmis­\nsion from any one element propagates through the entire length of the medium in \neither direction and therefore can be received by all elements in the network. Because \nof this, precautions need to be taken to make sure that transmissions intended for one \nelement can be received by that element and no other element. The network must also \nuse a mechanism that handles disputes in case two or more elements try to transmit at \nthe same time. The mechanism deals with the likely collision of signals and brings a \nFig. 1.11  Mesh Network\nLaptop\nServer\nLaptop\nLaptop\nLaptop\nWorkstation\nLaptop\nFig. 
1.12  Tree Topology\nServer\nLaptop\nLaptop\nLaptop\nServer\nLaptop\n" }, { "page_number": 37, "text": "1.5  Network Topology\b\n15\nquick recovery from such a collision. It is also necessary to create fairness in the net­\nwork so that all other elements can transmit when they need to do so. See Fig. 1.13.\n A collision control mechanism must also improve efficiency in the network \nusing a bus topology by allowing only one element in the network to have control \nof the bus at any one time. This network element is then called the bus master and \nother elements are considered to be its slaves. This requirement prevents collision \nfrom occurring in the network as elements in the network try to seize the bus at the \nsame time. A bus topology is commonly used by LANs.\n1.5.4 Star\nAnother very popular topology, especially in LAN network technologies, is a star topol­\nogy. A star topology is characterized by a central prominent node that connects to every \nother element in the network. So, all the elements in the network are connected to a cen­\ntral element. Every network element in a star topology is connected pairwise in a point-\nto-point manner through the central element, and communication between any pair of \nelements must go through this central element. The central element or node can either \noperate in a broadcast fashion, in which case information from one element is broadcast \nto all connected elements, or transmit as a switching device in which the incoming data \nis transmitted only to one element, the nearest element enroute to the destination. The \nbiggest disadvantage to the star topology in networks is that the failure of the central \nelement results in the failure of the entire network. Figure 1.14 shows a star topology.\n1.5.5  Ring\nFinally another popular network topology is the ring topology. In this topology, \neach computing element in a network using a ring topology is directly connected to \nFig. 1.13  Bus topology\nLaptop\nLaptop\nServer\nFirewall\nWorkstation\nComputer\nLaptop\n" }, { "page_number": 38, "text": "16\b\n1  Computer Network Fundamentals\nthe transmitting medium via a unidirectional connection so that information put on \nthe transmission medium can reach all computing elements in the network through \na mechanism of taking turns in sending information around the ring. Figure 1.15 \nshows a ring topology network. The taking of turns in passing information is man­\naged through a token system. A token is a system-wide piece of information that \nguarantees the current owner to be the bus master. As long as it owns the token, no \nother network element is allowed to transmit on the bus. When an element currently \nsending information and holding the token has finished, it passes the token down­\nstream to its nearest neighbor. The token system is a good management system of \ncollision and fairness.\nThere are variants of a ring topology collectively called hub hybrids combining \neither a star with a bus or a stretched star as shown in Fig. 1.16.\nAlthough network topologies are important in LANs, the choice of a topology \ndepends on a number of other factors, including the type of transmission medium, \nreliability of the network, the size of the network, and its anticipated future growth. \nRecently the most popular LAN topologies have been the bus, star, and ring topolo­\ngies. 
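The reliability differences among these topologies are easy to see in a small experiment. The following Python sketch is purely illustrative; the node names, the sample star and mesh graphs, and the helper function are invented for this example. It models a topology as an adjacency list and counts how many elements can still reach one another after a single element fails.

from collections import deque

def reachable_after_failure(graph, failed):
    # Breadth-first search over the surviving nodes once 'failed' is removed.
    nodes = [n for n in graph if n != failed]
    if not nodes:
        return set()
    seen = {nodes[0]}
    queue = deque([nodes[0]])
    while queue:
        current = queue.popleft()
        for neighbor in graph[current]:
            if neighbor != failed and neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

# A star: every host talks only through the central element S.
star = {"S": ["A", "B", "C"], "A": ["S"], "B": ["S"], "C": ["S"]}
# A mesh: multiple access links between the elements.
mesh = {"A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B", "D"], "D": ["B", "C"]}

print(len(reachable_after_failure(star, "S")))  # 1 -- the star falls apart
print(len(reachable_after_failure(mesh, "B")))  # 3 -- the mesh routes around the failure

In the star, losing the central element isolates every other element, while the mesh simply routes around the failure, which is exactly the trade-off described above.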
The most popular bus- and star-based LAN topology is the Ethernet, and the \nmost popular ring-based LAN topology is the token ring.\n1.6  Network Connectivity and Protocols\nIn the early days of computing, computers were used as stand-alone machines, and \nall work that needed cross-computing was done manually. Files were moved on \ndisks from computer to computer. There was, therefore, a need for cross-computing \nwhere more than one computer should talk to others and vice versa.\nFig. 1.14  Star topology\nLaptop\nLaptop\nLaptop\nLaptop\nLaptop\nLaptop\nLaptop\nServer\n" }, { "page_number": 39, "text": "1.6  Network Connectivity and Protocols\b\n17\nA new movement was, therefore, born. It was called the open system movement, \nwhich called for computer hardware and software manufacturers to come up with a \nway for this to happen. But to make this possible, standardization of equipment and \nsoftware was needed. To help in this effort and streamline computer communica­\ntion, the International Standards Organization (ISO) developed the Open System \nInterconnection (OSI) model. The OSI is an open architecture model that functions \nFig. 1.15  Ring topology network\nLaptop\nLaptop\nLaptop\nLaptop\nLaptop\nLaptop\nLaptop\nServer\nFirewall\nLaptop\nFig. 1.16  Token ring hub\nToken-ring\nWorkstation\nLaptop\nServer\nFirewall\nLaptop\nLaptop\nWorkstation\nLaptop\nToken\nInternet\nLaptop\n" }, { "page_number": 40, "text": "18\b\n1  Computer Network Fundamentals\nas the network communication protocol standard, although it is not the most widely \nused one. The Transport Control Protocol/Internet Protocol (TCP/IP) model, a rival \nmodel to OSI, is the most widely used. Both OSI and TCP/IP models use two proto­\ncol stacks, one at the source element and the other at the destination element\n1.6.1  Open System Interconnection (OSI) Protocol Suite\nThe development of the OSI model was based on the secure premise that a communi­\ncation task over a network can be broken into seven layers, where each layer represents \na different portion of the task. Different layers of the protocol provide different services \nand ensure that each layer can communicate only with its own neighboring layers. That \nis, the protocols in each layer are based on the protocols of the previous layers.\nStarting from the top of the protocol stack, tasks and information move down \nfrom the top layers until they reach the bottom layer where they are sent out over \nthe network media from the source system to the destination. At the destination, the \ntask or information rises back up through the layers until it reaches the top. Each \nlayer is designed to accept work from the layer above it and to pass work down to \nthe layer below it, and vice versa. To ease interlayer communication, the interfaces \nbetween the layers are standardized. However, each layer remains independent and \ncan be designed independently and each layer’s functionality should not affect the \nfunctionalities of other layers above and below it.\nTable 1.1 shows an OSI model consisting of seven layers and the descriptions of \nthe services provided in each layer.\nIn peer-to-peer communication, the two communicating computers can initiate \nand receive tasks and data. The task and data initiated from each computer starts \nfrom the top in the application layer of the protocol stack on each computer. 
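To make this layered hand-off concrete, here is a toy Python sketch. It is not a real protocol implementation; the header strings are invented placeholders that simply mark which layer wrapped the data, mirroring the headers that Table 1.2 shows being added at the presentation through data link layers.

LAYERS = ["application", "presentation", "session", "transport", "network", "data link"]

def send_down(payload):
    # Encapsulation on the sender: each layer below the application
    # prepends its own header on the way down the stack.
    for layer in LAYERS[1:]:
        payload = "[" + layer + "-hdr]" + payload
    return payload        # handed to the physical layer as a bit stream

def receive_up(frame):
    # Decapsulation on the receiver: each layer strips only the header
    # that was meant for its peer on the remote system.
    for layer in reversed(LAYERS[1:]):
        header = "[" + layer + "-hdr]"
        assert frame.startswith(header), "header from the peer layer is missing"
        frame = frame[len(header):]
    return frame

wire = send_down("GET /index.html")
print(wire)              # [data link-hdr][network-hdr]...[presentation-hdr]GET /index.html
print(receive_up(wire))  # GET /index.html

Each layer only adds or removes its own header and never inspects the others, which is the independence property the model is designed to preserve.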
The \ntasks and data then move down from the top layers until they reach the bottom \nlayer, where they are sent out over the network media from the source system to \nthe destination. At the destination, the task and data rise back up through the layers \nuntil the top. Each layer is designed to accept work from the layer above it and pass \nwork down to the layer below it. As data passes from layer to layer of the sender \nmachine, layer headers are appended to the data, causing the datagram to grow \nlarger. Each layer header contains information for that layer’s peer on the remote \nsystem. That information may indicate how to route the packet through the network \nor what should be done to the packet as it is handed back up the layers on the recipi­\nent computer.\nFigure 1.17 shows a logical communication model between two peer com­\nputers using the ISO model. Table 1.2 shows the datagram with added header \ninformation as it moves through the layers. Although the development of the \nOSI model was intended to offer a standard for all other proprietary models, \nand it was as ­encompassing of all existing models as possible, it never really \nreplaced many of those rival models it was intended to replace. In fact it is this \n“all in one” concept that led to market failure because it became too complex. \nIts late arrival on the market also prevented its much anticipated interoperability \nacross networks.\n" }, { "page_number": 41, "text": "1.6  Network Connectivity and Protocols\b\n19\n1.6.2  Transport Control Protocol/Internet Protocol (TCP/IP) \nModel\nAmong the OSI rivals was the TCP/IP, which was far less complex and more his­\ntorically established by the time the OSI came on the market. The TCP/IP model \ndoes not exactly match the OSI model. For example, it has two to three fewer levels \nthan the seven layers of the OSI model. It was developed for the US Department of \nDefense Advanced Research Project Agency (DARPA); but over the years, it has \nseen a phenomenal growth in popularity and it is now the de facto standard for the \nInternet and many intranets. It consists of two major protocols: the transmission \ncontrol protocol (TCP) and the Internet protocol (IP), hence the TCP/IP designa­\ntion. Table 1.3 shows the layers and protocols in each layer.\nSince TCP/IP is the most widely used in most network protocol suites by the \nInternet and many intranets, let us focus on its layers here.\nTable 1.1  ISO protocol layers and corresponding services\nLayer Number\t\nProtocol\n7\t\nApplication\n6\t\nPresentation\n5\t\nSession\n4\t\nTransport\n3\t\nNetwork\n2\t\nData Link\n1\t\nPhysical\nFig. 1.17  ISO logical peer communication model\nApplication\nPresentation\nSession\nTransport\nNetwork\nDatalink\nPhysical\nPhysical\nDatalink\nNetwork\nTransport\nSession\nPresentation\nApplication\nChannel\nMachine A\nMachine B\nTable 1.2  OSI datagrams seen in each layer with header added\nNo header\t\nData\t\nApplication\nH1\t\nData\t\nPresentation\nH2\t\nData\t\nSession\nH3\t\nData\t\nTransport\nH4\t\nData\t\nNetwork\nH5\t\nData\t\nData Link\nNo header\t\nData\t\nPhysical\n" }, { "page_number": 42, "text": "20\b\n1  Computer Network Fundamentals\n1.6.2.1  Application Layer\nThis layer, very similar to the application layer in the OSI model, provides the user \ninterface with resources rich in application functions. It supports all network appli­\ncations and includes many protocols on a data structure consisting of bit streams as \nshown in Fig. 
1.18.\n1.6.2.2  Transport Layer\nThis layer, again similar to the OSI model session layer, is a slightly removed from \nthe user and is hidden from the user. Its main purpose is to transport application \nlayer messages that include application layer protocols in their headers between the \nTable 1.3  TCP/IP protocol layers\nLayer\nDelivery Unit\nProtocols\nApplication\nMessage\n– Handles all higher level protocols including File \nTransfer Protocol (FTP), Name Server Protocol (NSP), \nSimple Mail Transfer Protocol (SMTP), Simple Net­\nwork Management Protocol (SNMP), HTTP, Remote \nfile access (telnet), Remote file server (NFS), Name \nResolution (DNS), HTTP,- TFTP, SNMP, DHCP, DNS, \nBOOTP\n– Combines Application, Session and Presentation \n­Layers of the OSI model.\n– Handles all high-level protocols\nTransport\nSegment\n– Handles transport protocols including ­Transmission \nControl Protocol (TCP), User Datagram Protocol \n(UDP).\nNetwork\nDatagram\n– Contains the following protocols: Internet ­Protocol \n(IP), Internet Control Message Protocol (ICMP), \n­Internet Group Management Protocol (IGMP).\n– Supports transmitting source packets from any \nnetwork on the internetwork and makes sure they arrive \nat the destination independent of the path and networks \nthey took to reach there.\n– Best path determination and packet switching occur \nat this layer.\nData Link\nFrame\nContains protocols that require IP packet to cross \na physical link from one device to another directly \n­connected device.\n– It included the following networks:\n– WAN – Wide Area Network\n– LAN – Local Area Network\nPhysical\nBit stream\nAll network card drivers.\nFig. 1.18  Application layer data frame\nApplication header protocols\nBit stream\n" }, { "page_number": 43, "text": "1.6  Network Connectivity and Protocols\b\n21\nhost and the server. For the Internet network, the transport layer has two standard \nprotocols: transport control protocol (TCP) and user datagram protocol (UDP). \nTCP provides a connection-oriented service, and it guarantees the delivery of all \napplication layer packets to their destination. This guarantee is based on two mecha­\nnisms: congestion control that throttles the transmission rate of the source element \nwhen there is traffic congestion in the network and the flow control mechanism that \ntries to match sender and receiver speeds to synchronize the flow rate and reduce \nthe packet drop rate. While TCP offers guarantees of delivery of the application \nlayer packets, UDP, on the other hand, offers no such guarantees. It provides a no-\nfrills connectionless service with just delivery and no acknowledgements. But it is \nmuch more efficient and a protocol of choice for real-time data such as streaming \nvideo and music. Transport layer delivers transport layer packets and protocols to \nthe network layer. Figure 1.19 shows the TCP data structure, and Fig. 1.20 shows \nthe UDP data structure.\n1.6.2.3  Network Layer\nThis layer moves packets, now called datagrams, from router to router along the \npath from a source host to the destination host. It supports a number of protocols \nincluding the Internet Protocol (IP), Internet Control Message Protocol (ICMP) \nand Internet Group Management Protocol (IGMP). The IP Protocol is the most \nwidely used network layer protocol. 
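As a small, self-contained illustration of what "header information" means at this layer, the Python sketch below unpacks the 20 fixed bytes of an IPv4 header; options are ignored, and the sample packet is fabricated for the example.

import socket
import struct

def parse_ipv4_header(raw):
    # Unpack the 20 fixed bytes of an IPv4 header (options are not handled).
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "header_length": (ver_ihl & 0x0F) * 4,   # in bytes
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,                        # 6 = TCP, 17 = UDP
        "source": socket.inet_ntoa(src),
        "destination": socket.inet_ntoa(dst),
    }

# A hand-made sample header: TTL 64, protocol TCP, 10.0.0.1 -> 10.0.0.2.
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     socket.inet_aton("10.0.0.1"), socket.inet_aton("10.0.0.2"))
print(parse_ipv4_header(sample))

A router performs essentially this step: it reads the destination address and the time-to-live field and then decides where, or whether, to forward the datagram.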
IP uses header information, including the datagram's source and destination IP addresses and information from the transport layer protocols such as source and destination port numbers, to move datagrams from router to router through the network. The best routes are found in the network by using routing algorithms. Figure 1.21 shows the IP datagram structure.

The standard IP address has been the so-called IPv4, a 32-bit addressing scheme. But with the rapid growth of the Internet, there was fear of running out of addresses, so IPv6, a new 128-bit addressing scheme, was created. The network layer conveys the network layer protocols to the data link layer.

Fig. 1.19  A TCP data structure.
Fig. 1.20  A UDP data structure.

1.6.2.4  Data Link Layer
This layer provides the network with services that move packets from one packet switch, such as a router, to the next over connecting links. This layer also offers reliable delivery of network layer packets over links. It is at the lowest level of communication, and it includes the network interface card (NIC) and operating system (OS) protocols. The protocols in this layer include Ethernet, asynchronous transfer mode (ATM), and others such as frame relay. The data link layer protocol unit, the frame, may be moved over links from source to destination by different link layer protocols at different links along the way.

1.6.2.5  Physical Layer
This layer is responsible for literally moving data link frames bit by bit over the links and between the network elements. The protocols here depend on and use the characteristics of the link medium and the signals on the medium.

1.7  Network Services
For a communication network to work effectively, data in the network must be able to move from one network element to another. This can only happen if the network services that move such data work. For data networks, these services fall into two categories:
•  Connection services, to facilitate the exchange of data between the two communicating network end-systems with as little data loss and in as little time as possible
•  Switching services, to facilitate the movement of data from host to host across the length and width of the network mesh of hosts, hubs, bridges, routers, and gateways

Fig. 1.21  An IP datagram structure.

1.7.1  Connection Services
How do we get the network transmitting elements to exchange data over the network? Two types of connection services are used: connection-oriented and connectionless services.

1.7.1.1  Connection-Oriented Services
With a connection-oriented service, before a client can send packets with real data to the server, there must be a three-way handshake. We will define this three-way handshake in later chapters. But the purpose of a three-way handshake is to establish a session before the actual communication can begin.
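From an application's point of view, the handshake is hidden inside the connection call. The short Python sketch below is illustrative only; the server address and port are placeholders, not a real service.

import socket

SERVER = ("198.51.100.10", 7)   # placeholder address and port, not a real service

with socket.create_connection(SERVER, timeout=5) as conn:
    # create_connection() returns only after the TCP three-way handshake
    # (SYN, SYN-ACK, ACK) has established the session with the server.
    conn.sendall(b"hello, network")   # delivery is acknowledged and kept in order
    reply = conn.recv(1024)           # flow and congestion control apply throughout
print(reply)

Once the connection call returns, the session exists, and the guarantees discussed next, acknowledgment, flow control, and congestion control, apply to everything sent over it.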
Establishing a session \nbefore data is moved creates a path of virtual links between the end systems through \na network and therefore, guarantees the reservation and establishment of fixed com­\nmunication channels and other resources needed for the exchange of data before any \ndata is exchanged and as long as the channels are needed. For example, this happens \nwhenever we place telephone calls; before we exchange words, the channels are \nreserved and established for the duration. Because this technique guarantees that \ndata will arrive in the same order it was sent in, it is considered to be reliable. In \nshort the service offers the following:\nAcknowledgments of all data exchanges between the end-systems,\n• \nFlow control in the network during the exchange, and\n• \nCongestion control in the network during the exchange.\n• \nDepending on the type of physical connections in place and the services \nrequired by the systems that are communicating, connection-oriented methods may \nbe implemented in the data link layers or in the transport layers of the protocol \nstack, although the trend now is to implement it more at the transport layer. For \nexample, TCP is a connection-oriented transport protocol in the transport layer. \nOther network technologies that are connection-oriented include the frame relay \nand ATMs.\n1.7.1.2  Connectionless Service\nIn a connectionless service, there is no handshaking to establish a session between \nthe communicating end-systems, no flow control, and no congestion control in the \nnetwork. This means that a client can start communicating with a server without \nwarning or inquiry for readiness; it simply sends streams of packets, called data­\ngrams, from its sending port to the server’s connection port in single point-to-point \ntransmissions with no relationship established between the packets and between the \nend-systems. There are advantages and of course disadvantages to this type of con­\nnection service. In brief, the connection is faster because there is no handshaking \nwhich can sometimes be time consuming, and it offers periodic burst transfers with \nlarge quantities of data and, in addition, it has simple protocol. However, this service \noffers minimum services, no safeguards and guarantees to the sender since there is \nno prior control information and no acknowledgment. In addition, the service does \nnot have the reliability of the connection-oriented method, and offers no error han­\ndling and no packets ordering; in addition, each packet self-identifies that leads to \nlong headers, and finally, there is no predefined order in the arrival of packets.\n" }, { "page_number": 46, "text": "24\b\n1  Computer Network Fundamentals\nLike the connection-oriented method, this service can operate both at the data \nlink and transport layers. For example, UDP, a connectionless service, operates at \nthe transport layer.\n1.7.2  Network Switching Services\nBefore we discuss communication protocols, let us take a detour and briefly discuss \ndata transfer by a switching element. This is a technique by which data is moved \nfrom host to host across the length and width of the network mesh of hosts, hubs, \nbridges, routers, and gateways. This technique is referred to as data switching. The \ntype of data switching technique used by a network determines how messages are \ntransmitted between the two communicating elements and across that network. 
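Before turning to switching, it is worth contrasting the connection-oriented sketch above with its connectionless counterpart. The Python fragment below sends UDP datagrams without any session setup; the receiver address is again an invented placeholder.

import socket

RECEIVER = ("198.51.100.20", 9999)   # placeholder address; no session is set up

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # SOCK_DGRAM = UDP
for i in range(3):
    # Each datagram is self-contained; it may arrive out of order, or not at all.
    sock.sendto("reading {}".format(i).encode(), RECEIVER)
sock.close()

Nothing guarantees that the three datagrams arrive, arrive once, or arrive in order, which is precisely the trade-off described for connectionless services.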
\nThere are two types of data switching techniques: circuit switching and packet \nswitching.\n1.7.2.1  Circuit Switching\nIn circuit switching networks, one must reserve all the resources before setting up \na physical communication channel needed for communication. The physical con­\nnection, once established, is then used exclusively by the two end-systems, usually \nsubscribers, for the duration of the communication. The main feature of such a con­\nnection is that it provides a fixed data rate channel, and both subscribers must oper­\nate at this rate. For example, in a telephone communication network, a connected \nline is reserved between the two points before the users can start using the service. \nOne issue of debate on circuit switching is the perceived waste of resources during \nthe so-called silent periods when the connection is fully in force but not being used \nby the parties. This situation occurs when, for example, during a telephone network \nsession, a telephone receiver is not hung up after use, leaving the connection still \nestablished. During this period, while no one is utilizing the session, the session line \nis still open.\n1.7.2.2  Packet Switching\nPacket switching networks, on the other hand, do not require any resources to be \nreserved before a communication session begins. These networks, however, require \nthe sending host to assemble all data streams to be transmitted into packets. If a \n­message is large, it is broken into several packets. Packet headers contain the source \nand the destination network addresses of the two communicating end-systems. \nThen, each of the packets is sent on the communication links and across packet \nswitches (routers). On receipt of each packet, the router inspects the destination \naddress contained in the packet. Using its own routing table, each router then for­\nwards the packet on the appropriate link at the maximum available bit rate. As each \n" }, { "page_number": 47, "text": "1.7  Network Services\b\n25\npacket is received at each intermediate router, it is forwarded on the appropriate link \ninterspersed with other packets being forwarded on that link. Each router checks the \ndestination address, if it is the owner of the packet; it then reassembles the packets \ninto the final message. Figure 1.22 shows the role of routers in packet switching \nnetworks.\nPacket switches are considered to be store-and-forward transmitters, mean­\ning that they must receive the entire packet before the packet is retransmitted or \nswitched on to the next switch.\nBecause there is no predefined route for these packets, there can be unpredictably \nlong delays before the full message can be re-assembled. In addition, the network \nmay not dependably deliver all the packets to the intended destination. To ensure \nthat the network has a reliably fast transit time, a fixed maximum length of time is \nallowed for each packet. Packet switching networks suffer from a few problems, \nincluding the following:\nThe rate of transmission of a packet between two switching elements depends on \n• \nthe maximum rate of transmission of the link joining them and on the switches \nthemselves.\nMomentary delays are always introduced whenever the switch is waiting for a \n• \nfull packet. The longer the packet, the longer the delay.\nEach switching element has a finite buffer for the packets. It is thus possible for \n• \na packet to arrive only to find the buffer full with other packets. 
Whenever this \nhappens, the newly arrived packet is not stored but gets lost, a process called \npacket dropping. In peak times, servers may drop a large number of packets. \nCongestion control techniques use the rate of packet drop as one measure of \ntraffic congestion in a network.\nPacket switching networks are commonly referred to as packet networks for \nobvious reasons. They are also called asynchronous networks and in such networks, \npackets are ideal because there is a sharing of the bandwidth, and of course, this \nFig. 1.22  Packet switching networks\nLaptop\nLaptop\nLaptop\nLaptop\nPacket Switch\nPacket Switch\nInternet\nLaptop\n" }, { "page_number": 48, "text": "26\b\n1  Computer Network Fundamentals\navoids the hassle of making reservations for any anticipated transmission. There are \ntwo types of packet switching networks:\nvirtual circuit network\n• \n in which a packet route is planned, and it becomes a \nlogical connection before a packet is released and\ndatagram network\n• \n, which is the focus of this book.\n1.8  Network Connecting Devices\nBefore we discuss network connecting devices, let us revisit the network infra­\nstructure. We have defined a network as a mesh of network elements, commonly \nreferred to as network nodes, connected together by conducting media. These \nnetwork nodes can be either at the ends of the mesh, in which case they are com­\nmonly known as clients or in the middle of the network as transmitting elements. \nIn a small network such as a LAN, the nodes are connected together via special \nconnecting and conducting devices that take network traffic from one node and \npass it on to the next node. If the network is big Internetwork (large networks of \nnetworks like WANs and LANs), these networks are connected to other special \nintermediate networking devices so that the Internet functions as a single large \nnetwork.\nNow let us look at network connecting devices and focus on two types of devices: \nthose used in networks (small networks such as LANs) and those used in internet­\nworks.\n1.8.1  LAN Connecting Devices\nBecause LANs are small networks, connecting devices in LANs are less powerful \nwith limited capabilities. There are hubs, repeaters, bridges, and switches.\n1.8.1.1  A Hub\nThis is the simplest in the family of network connecting devices since it connects \nthe LAN components with identical protocols. It takes in imports and re-transmits \nthem verbatim. It can be used to switch both digital and analog data. In each node, \npre-setting must be done to prepare for the formatting of the incoming data. For \nexample, if the incoming data is in digital format, the hub must pass it on as pack­\nets; however, if the incoming data is analog, then the hub passes as a signal. There \nare two types of hubs: simple and multiple port hubs, as shown in Figs. 1.23 and \n1.24. Multiple ports hubs may support more than one computer up to its number \nof ports and may be used to plan for the network expansion as more computers are \nadded at a later time.\n" }, { "page_number": 49, "text": "1.8  Network Connecting Devices\b\n27\nNetwork hubs are designed to work with network adapters and cables and can \ntypically run at either 10 Mbps or 100 Mbps; some hubs can run at both speeds. 
To \nconnect computers with differing speeds, it is better to use hubs that run at both \nspeeds 10/100 Mbps.\n1.8.1.2  A Repeater\nA network repeater is a low-level local communication device at the physical \nlayer of the network that receives network signals, amplifies them to restore \nthem to full strength, and then re-transmits them to another node in the network. \nRepeaters are used in a network for several purposes including countering the \nattenuation that occurs when signals travel long distances, and extending the \nFig. 1.23  A simple hub\nHub\nFirewall\nLaptop\nLaptop\nInternet\nLaptop\nFig. 1.24  Multi-ported hubs\nFirewall\nLaptop\nLaptop\nLaptop\nInternet\nLaptop\nLaptop\nHub\n" }, { "page_number": 50, "text": "28\b\n1  Computer Network Fundamentals\nlength of the LAN above the specified maximum. Since they work at the low­\nest network stack layer, they are less intelligent than their counterparts such \nas bridges, switches, routers, and gateways in the upper layers of the network \nstack. See Fig. 1.25.\n1.8.1.3  A Bridge\nA bridge is like a repeater but differs in that a repeater amplifies electrical signals \nbecause it is deployed at the physical layer; a bridge is deployed at the datalink and \ntherefore amplifies digital signals. It digitally copies frames. It permits frames from \none part of a LAN or a different LAN with different technology to move to another \npart or another LAN. However, in filtering and isolating a frame from one network \nto another or another part of the same network, the bridge will not move a dam­\naged frame from one end of the network to the other. As it filters the data packets, \nthe bridge makes no modifications to the format and content of the incoming data. \nA bridge filters the frames to determine whether a frame should be forwarded or \ndropped. All “noise” (collisions, faulty wiring, power surges, etc.) packets are not \ntransmitted.\nThe bridge filters and forwards frames on the network using a dynamic \nbridge table. The bridge table, which is initially empty, maintains the LAN \naddresses for each computer in the LAN and the addresses of each bridge inter­\nface that connects the LAN to other LANs. Bridges, like hubs, can be either \nsimple or multi-ported. Figure 1.26 shows a simple bridge, Fig. 1.27 shows a \nmulti-ported bridge, and Fig. 1.28 shows the position of the bridge in an OSI \nprotocol stack.\nFig. 1.25  A repeater in an OSI model\nApplication\nPresentation\nPresentation\nApplication\nSession\nNetwork\nTransport\nPhysical\nData Link\nPhysical\nData Link\nNetwork\nTransport\nSession\nPhysical\nRepeater\n" }, { "page_number": 51, "text": "1.8  Network Connecting Devices\b\n29\n1.8.1.4  A Switch\nA switch is a network device that connects segments of a network or two small \nnetworks such as Ethernet or token ring LANs. Like the bridge, it also filters and \nforwards frames on the network with the help of a dynamic table. This point-to-\nFig. 1.26  Simple bridge\nBridge\nLaptop\nLaptop\nLaptop\nHub\nServer\nFirewall\nInternet\nFig. 
1.27  Multi-ported bridge\nBridge\nLaptop\nLaptop\nLaptop\nHub\nServer\nFirewall\nLaptop\ncomputer\nWorkstation\nLaptop\nInternet\n" }, { "page_number": 52, "text": "30\b\n1  Computer Network Fundamentals\npoint approach allows the switch to connect multiple pairs of segments at a time, \nallowing more than one computer to transmit data at a time, thus giving them a high \nperformance over their cousins, the bridges.\n1.8.2 Internetworking Devices\nInternetworking devices connect together smaller networks, like several LANs \ncreating much larger networks such as the Internet. Let us look at two of these con­\nnectors: the router and the gateway.\n1.8.2.1 Routers\nRouters are general purpose devices that interconnect two or more heterogeneous \nnetworks represented by IP subnets or unnumbered point to point lines. They are \nusually dedicated special-purpose computers with separate input and output inter­\nfaces for each connected network. They are implemented at the network layer in \nthe protocol stack. Figure 1.29 shows the position of the router in the OSI protocol \nstack.\nAccording to RFC 1812, a router performs the following functions [3]:\nConforms to specific Internet protocols specified in the 1812 document, including \n• \nthe Internet protocol (IP), Internet control message protocol (ICMP), and others \nas necessary.\nConnects to two or more packet networks. For each connected network, the \n• \nrouter must implement the functions required by that network because it is a \nmember of that network. These functions typically include the following:\nEncapsulating and decapsulating the IP datagrams with the connected \n• \nnetwork framing. For example, if the connected network is an Ethernet LAN, \nan Ethernet header and checksum must be attached.\nFig. 1.28  Position of a bridge in an OSI protocol stack\nApplication\nPresentation\nSession\nTransport\nNetwork\nDatalink\nPhysical\nPhysical\nPhysical\nDatalink\nDatalink\nNetwork\nTransport\nSession\nPresentation\nApplication\nBridge\n" }, { "page_number": 53, "text": "1.8  Network Connecting Devices\b\n31\nSending and receiving IP datagrams up to the maximum size supported by \n• \nthat network; this size is the network’s maximum transmission unit or MTU.\nTranslating the IP destination address into an appropriate network-level \n• \naddress for the connected network. These are the Ethernet hardware address \non the NIC, for Ethernet cards, if needed. Each network addresses the router \nas a member computer of its own network. This means that each router is a \nmember of each network it connects to. It, therefore, has a network host address \nfor that network and an interface address for each network it is connected to. \nBecause of this rather strange characteristic, each router interface has its own \naddress resolution protocol (ARP) module, its LAN address (network card \naddress), and its own Internet protocol (IP) address.\nResponding to network flow control and error indications, if any.\n• \nReceives and forwards Internet datagrams. Important issues in this process are \n• \nbuffer management, congestion control, and fairness. 
To do this the router must\nRecognize error conditions and generate ICMP error and information \n• \nmessages as required.\nDrop datagrams whose time-to-live fields have reached zero.\n• \nFragment datagrams when necessary to fit into the maximum transmission \n• \nunit (MTU) of the next network.\nChooses a next-hop destination for each IP datagram based on the information \n• \nin its routing database.\nUsually supports an interior gateway protocol (IGP) to carry out distributed \n• \nrouting and reachability algorithms with the other routers in the same autonomous \nsystem. In addition, some routers will need to support an exterior gateway \nprotocol (EGP) to exchange topological information with other autonomous \nsystems.\nProvides network management and system support facilities, including loading, \n• \ndebugging, status reporting, exception reporting, and control.\nForwarding an IP datagram from one network across a router requires the router \nto choose the address and relevant interface of the next-hop router or for the final \nhop if it is the destination host. The next-hop router is always in the next network \nFig. 1.29  Router in the OSI protocol stack\nApplication\nPresentation\nSession\nTransport\nNetwork\nDatalink\nPhysical\nPhysical\nPhysical\nDatalink\nDatalink\nNetwork\nTransport\nSession\nPresentation\nApplication\nNetwork\nRouter\n" }, { "page_number": 54, "text": "32\b\n1  Computer Network Fundamentals\nof which the router is also a member. The choice of the next-hop router, called for­\nwarding, depends on the entries in the routing table within the router.\nRouters are smarter than bridges in that the router with the use of a router table \nhas some knowledge of possible routes a packet could take from its source to its \ndestination. Once it finds the destination, it determines the best, fastest, and most \nefficient way of routing the package. The routing table, like in the bridge and switch, \ngrows dynamically as activities in the network develop. On receipt of a packet, the \nrouter removes the packet headers and trailers and analyzes the IP header by deter­\nmining the source and destination addresses and data type and noting the arrival \ntime. It also updates the router table with new addresses if not already in the table. \nThe IP header and arrival time information is entered in the routing table. If a router \nencounters an address it cannot understand, it drops the package. Let us explain the \nworking of a router by an example using Fig. 1.30.\nIn Fig. 1.30, suppose host A in LAN1 tries to send a packet to host B in LAN2. \nBoth host A and host B have two addresses: the LAN (host) address and the IP \naddress. The translation between host LAN addresses and IP addresses is done by \nthe ARP, and data is retrieved or built into the ARP table, similar to Table 1.4. Notice \nalso that the router has two network interfaces: interface 1 for LAN1 and interface \n2 for LAN2 for the connection to a larger network such as the Internet. Each inter­\nface has a LAN (host) address for the network the interface connects on and a cor­\nresponding IP address. As we will see later in the chapter, host A sends a packet to \nrouter 1 at time 10:01 that includes, among other things, both its addresses, message \ntype, and destination IP address of host B. 
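Before following the packet into the router, the address translation step just mentioned can be pictured as a simple table lookup. The short Python sketch below is only an illustration of the idea, not an implementation of ARP itself; the entries are the illustrative values of Table 1.4.

arp_table = {
    # IP address        (LAN/hardware address, time of entry), modeled on Table 1.4
    "127.0.0.5":   ("16-73-AX-E4-01", "10:00"),
    "127.76.1.12": ("07-1A-EB-17-F6", "10:03"),
}

def resolve(ip_address):
    """Return the LAN address recorded for ip_address, or None if ARP must first ask for it."""
    entry = arp_table.get(ip_address)
    return entry[0] if entry else None

print(resolve("127.0.0.5"))     # 16-73-AX-E4-01
print(resolve("193.55.1.6"))    # None, so an ARP request would be broadcast for this host

A real ARP module also ages entries out and broadcasts a request when a lookup fails, but the mapping itself is exactly this kind of table.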
The packet is received at interface 1 of \nthe router; the router reads the packet and builds row 1 of the routing table as shown \nin Table 1.5.\nThe router notices that the packet has to go to network 193.55.1.***, where *** \nare digits 0–9, and it has knowledge that this network is connected on interface \nFig. 1.30  Working of a router\n127.0.0.5\nRouter 1\nRouter 2\n16-73-AX-E4-01\n192.76.10.12\n.181.1\nAX-74-31-14-00, \n193.55.1.6\n193.55.1.6)\nInterface 1\nInterface 2\nB\nA\nHost A (07-1A-E6-17-F0, 127.0.0.1)\nHost B (LX-27-A6-33-74-14, 193.15.1.1)\nLAN1\nLAN2\nInternet\n224.37.181.1\n" }, { "page_number": 55, "text": "1.8  Network Connecting Devices\b\n33\n2. It forwards the packet to interface 2. Now, interface 2 with its own ARP may \nknow host B. If it does, then it forwards the packet and updates the routing table \nwith the inclusion of row 2. What happens when the ARP at the router interface \n1 cannot determine the next network? That is, if it has no knowledge of the pres­\nence of network 193.55.1.***, it will then ask for help from a gateway. Let us \nnow discuss how IP chooses a gateway to use when delivering a datagram to a \nremote network.\n1.8.2.2 Gateways\nGateways are more versatile devices than routers. They perform protocol conver­\nsion between different types of networks, architectures, or applications and serve \nas translators and interpreters for network computers that communicate in differ­\nent protocols and operate in dissimilar networks, for example, OSI and TCP/IP. \nBecause the networks are different with different technologies, each network has its \nown routing algorithms, protocols, domain names servers, and network administra­\ntion procedures and policies. Gateways perform all of the functions of a router and \nmore. The gateway functionality that does the translation between different network \ntechnologies and algorithms is called a protocol converter. Figure 1.31 shows the \nposition of a gateway in a network.\nGateways services include packet format and/or size conversion, protocol con­\nversion, data translation, terminal emulation, and multiplexing. Since gateways per­\nform a more complicated task of protocol conversion, they operate more slowly and \nhandle fewer devices.\nLet us now see how a packet can be routed through a gateway or several gate­\nways before it reaches its destination. We have seen that if a router gets a datagram, \nit checks the destination address and finds that it is not on the local network. It, \ntherefore, sends it to the default gateway. The default gateway now searches its \ntable for the destination address. In case the default gateway recognizes that the \ndestination address is not on any of the networks it is connected to directly, it has to \nfind yet another gateway to forward it through.\nTable 1.4  ARP table for LAN1\nIP-Address\t\nLAN Address\t\nTime\n127.0.0.5\t\n16–73-AX-E4–01\t\n10:00\n127.76.1.12\t\n07–1A-EB-17-F6\t\n10:03\nTable 1.5 Routing table for interface1\nAddress\t\nInterface\t\nTime\n127.0.0.1\t\n1\t\n10:01\n192.76.1.12\t\n2\t\n10:03\n" }, { "page_number": 56, "text": "34\b\n1  Computer Network Fundamentals\nThe routing information the server uses for this is in a gateway routing table link­\ning networks to gateways that reach them. The table starts with the network entry \n0.0.0.0, a catch-all entry, for default routes. All packets to an unknown network are \nsent through the default route. 
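The fall through to the default route can be sketched in a few lines of Python. This is a deliberately simplified model of the selection step, not how a production router is written, and the prefixes mirror the illustrative entries of Table 1.6.

routing_table = {
    "127.123.": ("198.24.0.1", 2),    # a known remote network reachable through interface 2
    "0.0.0.0":  ("192.133.1.1", 1),   # catch-all default route for every unknown network
}

def choose_gateway(destination_ip):
    """Return (gateway, interface) for a destination, using the default route as a last resort."""
    for prefix, route in routing_table.items():
        if prefix != "0.0.0.0" and destination_ip.startswith(prefix):
            return route
    return routing_table["0.0.0.0"]

print(choose_gateway("127.123.0.77"))   # ('198.24.0.1', 2)
print(choose_gateway("203.0.113.9"))    # ('192.133.1.1', 1): an unknown network goes to the default gateway

A real router matches on binary network masks and prefers the longest matching prefix, but the final fall back to the 0.0.0.0 entry works in the same way.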
Table 1.6 shows the gateway routing table.\nThe choice between a router, a bridge, and a gateway is a balance between func­\ntionality and speed. Gateways, as we have indicated, perform a variety of functions; \nhowever, because of this variety of functions, gateways may become bottlenecks \nwithin a network because they are slow.\nRouting tables may be built either manually for small LANs or by using software \ncalled routing daemons for larger networks.\n1.9  Network Technologies\nEarlier in this chapter, we indicated that computer networks are basically classi­\nfied according to their sizes with the local area networks (LANs) covering smaller \nareas, and the bigger ones covering wider areas (WANs). In this last section of the \nchapter, let us look at a few network technologies in each one of these categories.\nFig. 1.31  Position of a gateway\nLaptop\nLaptop\nLaptop\nLaptop\nLaptop\nLaptop\nFirewall\nRouter minicomputer\nFirewall\nServer\nServer\nGateway - Protocol Converter\nFirewall\nServer\nTable 1.6  A gateway routing table\nNetwork\t\nGateway\t\nInterface\n0.0.0.0\t\n192.133.1.1\t\n1\n127.123.0.1\t\n198.24.0.1\t\n2\n" }, { "page_number": 57, "text": "1.9  Network Technologies\b\n35\n1.9.1  LAN Technologies\nRecall our definition of a LAN at the beginning of this chapter. We defined a LAN \nto be a small data communication network that consists of a variety of machines that \nare all part of the network and cover a geographically small area such as one build­\ning or one floor. Also, a LAN is usually owned by an individual or a single entity \nsuch as an organization. According to IEEE 802.3 Committee on LAN Standard­\nization, a LAN must be a moderately sized and geographically shared peer-to-peer \ncommunication network broadcasting information for all on the network to hear via \na common physical medium on a point-to-point basis with no intermediate switch­\ning element required. Many common network technologies today fall into this cat­\negory including the popular Ethernet, the widely used token ring/IEEE 805.2, and \nthe fiber distributed data interface (FDDI).\n1.9.1.1  Star-Based Ethernet (IEEE 802.3) LAN\nEthernet technology is the most widely used of all LAN technologies and it has been \nstandardized by the IEEE 802.3 Committee on Standards. The IEEE 802.3 standards \ndefine the medium access control (MAC) layer and the physical layer. The Ethernet \nMAC is a carrier sense multiple access with collision detection (CSMA/CD) sys­\ntem. With CSMA, any network node that wants to transmit must listen first to the \nmedium to make sure that there is no other node already transmitting. This is called \nthe carrier sensing of the medium. If there is already a node using the medium, then \nthe element that was intending to transmit waits; otherwise it transmits. In case, \ntwo or more elements are trying to transmit at the same time, a collision will occur \nand the integrity of the data for all is compromised. However, the element may \nnot know this. So it waits for an acknowledgment from the receiving node. The \nwaiting period varies, taking into account maximum round-trip propagation delay \nand other unexpected delays. If no acknowledgment is received during that time, \nthe element then assumes that a collision has occurred and the transmission was \nunsuccessful and therefore it must retransmit. If more collisions were to happen, \nthen the element must now double the delay time and so on. 
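The doubling of the delay after each successive collision is known as binary exponential backoff, and the waiting rule can be sketched as follows. The sketch assumes the classic 10 Mbps Ethernet slot time of 51.2 microseconds and the usual cap after ten collisions; it is an illustration of the rule, not a reproduction of the standard's text.

import random

SLOT_TIME_US = 51.2    # slot time in microseconds for classic 10 Mbps Ethernet
BACKOFF_LIMIT = 10     # the doubling is capped after ten collisions (truncated backoff)

def backoff_delay(collision_count):
    """After the n-th collision, wait a random number of slot times
    drawn from 0 .. 2**min(n, BACKOFF_LIMIT) - 1 (binary exponential backoff)."""
    k = min(collision_count, BACKOFF_LIMIT)
    return random.randint(0, 2 ** k - 1) * SLOT_TIME_US

for n in range(1, 5):
    print(f"collision {n}: waiting {backoff_delay(n):6.1f} us "
          f"(maximum {(2 ** min(n, BACKOFF_LIMIT) - 1) * SLOT_TIME_US:.1f} us)")

Because each station draws its delay at random from an ever larger range, two colliding stations become increasingly unlikely to retransmit at the same moment.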
After a collision, when \nthe two elements are in delay period, the medium may be idle and this may lead to \ninefficiency. To correct this situation, the elements, instead of just going into the \ndelay mode, must continue to listen onto the medium as they transmit. In this case, \nthey will not only be doing carrier sensing but also detecting a collision that leads \nto CSMA/CD. According to Stallings, the CSMA/CD scheme follows the following \nalgorithm [1]:\nIf the medium is idle, transmit.\n• \nIf the medium busy, continue to listen until idle, then transmit immediately.\n• \nIf collision is detected, transmit jamming signal for “collision warning” to all \n• \nother network elements.\nAfter jamming the signal, wait random time units and attempt to transmit.\n• \n" }, { "page_number": 58, "text": "36\b\n1  Computer Network Fundamentals\nA number of Ethernet LANs are based on the IEEE 802.3 standards, ­including\n10 BASE-X (where X = 2, 5, T and F; T = twisted pair and F = fiber optics)\n• \n100 BASE-T (where the T options include T4, TX, and FX)\n• \n1000 BASE-T (where T options include LX, SX, T, and CX)\n• \nThe basic Ethernet transmission structure is a frame and it is shown in \nFig. 1.32.\nThe source and destination fields contain 6-byte LAN addresses of the form xx-xx-\nxx-xx-xx-xx, where x is a hexadecimal integer. The error detection field is 4 bytes of \nbits used for error detection, usually using the cyclic redundancy check (CRC) algo­\nrithm, in which the source and destination elements synchronize the values of these \nbits.\n1.9.1.2  Token Ring/IEEE 805.2\nToken ring LANs based on IEEE 805.2 are also used widely in commercial and \nsmall industrial networks, although not as popular as Ethernet. The standard uses \na frame called a token that circulates around the network so that all network nodes \nhave equal access to it. As we have seen previously, token ring technology employs \na mechanism that involves passing the token around the network so that all network \nelements have equal access to it.\nWhenever a network element wants to transmit, it waits for the token on the \nring to make its way to the element’s connection point on the ring. When the token \narrives at this point, the element grabs it and changes one bit of the token that \nbecomes the start bit in the data frame the element will be transmitting. The ele­\nment then inserts data, addressing information and other fields and then releases \nthe payload onto the ring. It then waits for the token to make a round and come \nback. The receiving host must recognize the destination MAC address within the \nframe as its own. Upon receipt, the host identifies the last field indicating the \nrecognition of the MAC address as its own. The frame contents are then copied \nby the host, and the frame is put back in circulation. On reaching the network ele­\nment that still owns the token, the element withdraws the token and a new token \nis put on the ring for another network element that may need to transmit.\nBecause of its round-robin nature, the token ring technique gives each network \nelement a fair chance of transmitting if it wants to. However, if the token ever gets \nlost, the network business is halted. Figure 1.33 shows the structure of a token data \nframe, and Fig. 1.16 shows the token ring structure.\nLike Ethernet, the token ring has a variety of technologies based on the \ntransmission rates.\nFig. 
1.32  An ethernet frame \nstructure\nOther \ncontrol \nheaders \nDestination \naddress \nSource \nError\nData\nType\naddress \ndetection \n(CRC) \n" }, { "page_number": 59, "text": "1.9  Network Technologies\b\n37\n1.9.1.3  Other LAN Technologies\nIn addition to those we have discussed earlier, several other LAN technologies are \nin use, including the following:\nAsynchronous transfer mode (ATM) with the goal of transporting real-time \n• \nvoice, video, text, e-mail, and graphic data. ATM offers a full array of network \nservices that make it a rival of the Internet network.\nFiber distributed data interface (FDDI) is a dual-ring network that uses a token \n• \nring scheme with many similarities to the original token ring technology.\nAppleTalk, the popular Mac users’ LAN.\n• \n1.9.2  WAN Technologies\nAs we defined it earlier, WANs are data networks like LANs but they cover a wider \ngeographical area. Because of their sizes, WANs traditionally provide fewer services \nto customers than LANs. Several networks fall into this category, including the inte­\ngrated services digital network (ISDN), X.25, frame relay, and the popular Internet.\n1.9.2.1  Integrated Services Digital Network (ISDN)\nISDN is a system of digital phone connections that allows data to be transmitted \nsimultaneously across the world using end-to-end digital connectivity. It is a net­\nwork that supports the transmission of video, voice, and data. Because the trans­\nmission of these varieties of data, including graphics, usually puts widely differing \ndemands on the communication network, service integration for these networks is \nan important advantage to make them more appealing. The ISDN standards specify \nthat subscribers must be provided with\nBasic rate interface (BRI)\n• \n services of two full-duplex 64-kbps B channels – the \nbearer channels, and one full-duplex 16-kbps D channel – the data channel. One \nB channel is used for digital voice and the other for applications such as data \ntransmission. The D channel is used for telemetry and for exchanging network \ncontrol information. This rate is for individual users.\nPrimary rate interface (PRI)\n• \n services consisting of 23 64-kbps B channels and \none 64-kbps D channel. This rate is for all large users.\nFig. 1.33  A token data frame\nStart\nfield \nAccess\ncontrol \nSource\naddress\nData \nEnding\nfield \n \nDestination\naddress \n" }, { "page_number": 60, "text": "38\b\n1  Computer Network Fundamentals\nBRI can be accessed only if the customer subscribes to an ISDN phone line and \nis within 18,000 feet (about 3.4 miles or 5.5 km) of the telephone company central \noffice. Otherwise, expensive repeater devices are required that may include ISDN \nterminal adapters and ISDN routers.\n1.9.2.2 X.25\nX.25 is the International Telecommunication Union (ITU) protocol developed in \n1993 to bring interoperability to a variety of many data communication wide area \nnetworks (WANs), known as public networks, owned by private companies, orga­\nnizations, and governments agencies. 
By doing so, X.25 describes how data passes \ninto and out of public data communications networks.\nX.25 is a connection-oriented and packet-switched data network protocol \nwith three levels corresponding to the bottom three layers of the OSI model as \nfollows: the physical level corresponds to the OSI physical layer; the link level \ncorresponds to OSI data link layer; and the packet level corresponds to the OSI \nnetwork layer.\nIn full operation, the X.25 networks allow remote devices known as data ter­\nminal equipment (DTE) to communicate with each other across high-speed digital \nlinks, known as data circuit-terminating equipment (DCE), without the expense of \nindividual leased lines. The communication is initiated by the user at a DTE setting \nup calls using standardized addresses. The calls are established over virtual circuits, \nwhich are logical connections between the originating and destination addresses.\nOn receipt, the called users can accept, clear, or redirect the call to a third party. \nThe virtual connections we mentioned above are of the following two types [4]:\nSwitched virtual circuits (SVCs)\n• \n – SVCs are very much like telephone calls; a \nconnection is established, data is transferred, and then the connection is released. \nEach DTE on the network is given a unique DTE address that can be used much \nlike a telephone number.\nPermanent virtual circuits (PVCs)\n• \n – a PVC is similar to a leased line in that the \nconnection is always present. The logical connection is established permanently \nby the packet-switched network administration. Therefore, data may always be \nsent without any call setup.\nBoth of these circuits are used extensively, but since user equipment and network \nsystems supported both X.25 PVCs and X.25 SVCs, most users prefer the SVCs \nsince they enable the user devices to set up and tear down connections as required.\nBecause X.25 is a reliable data communications with a capability over a wide \nrange of quality of transmission facilities, it provides advantages over other WAN \ntechnologies, for example,\nUnlike frame relay and ATM technologies that depend on the use of high-quality \n• \ndigital transmission facilities, X.25 can operate over either analog or digital \nfacilities.\n" }, { "page_number": 61, "text": "1.9  Network Technologies\b\n39\nIn comparison with TCP/IP, one finds that TCP/IP has only end-to end error \n• \nchecking and flow control, while X.25 is error checked from network element to \nnetwork element.\nX.25 networks are in use throughout the world by large organizations with widely \ndispersed and communication-intensive operations in sectors such as finance, insur­\nance, transportation, utilities, and retail.\n1.9.2.3 Other WAN Technologies\nThe following are other WAN technologies that we would like to discuss but cannot \ninclude because of space limitations:\nFrame relay is a packet-switched network with the ability to multiplex many \n• \nlogical data conversions over a single connection. It provides flexible efficient \nchannel bandwidth using digital and fiber-optic transmission. It has many similar \ncharacteristics to X.25 network except in format and functionality.\nPoint-to-point Protocol\n• \n (PPP) is the Internet standard for transmission of IP \npackets over serial lines. The point-to-point link provides a single, pre-established \ncommunications path from the ending element through a carrier network, such \nas a telephone company, to a remote network. 
These links can carry datagram or \ndata-stream transmissions.\nxDirect service line\n• \n (xDSL) is a technology that provides an inexpensive, yet \nvery fast connection to the Internet.\nSwitched multi-megabit data service\n• \n (SMDS) is a connectionless service \noperating in the range of 1.5–100 Mbps; any SMDS station can send a frame to \nany other station on the same network.\nAsynchronous transfer mode (ATM) is already discussed as a LAN technology.\n• \n1.9.3 Wireless LANs\nThe rapid advances, miniaturization, and the popularity of wireless technology have \nopened a new component of LAN technology. The mobility and relocation of work­\ners has forced companies to move into new wireless technologies with emphasis on \nwireless networks extending the local LAN into a wireless LAN. There are basi­\ncally four types of wireless LANs [1]:\nLAN extension is a quick wireless extension to an existing LAN to accommodate \n• \nnew changes in space and mobile units.\nCross-building interconnection establishes links across buildings between both \n• \nwireless and wired LANs.\nNomadic access establishes a link between a LAN and a mobile wireless \n• \ncommunication device such as a laptop computer.\nAd hoc Networking is a peer-to-peer network temporarily set up to meet \n• \n" }, { "page_number": 62, "text": "40\b\n1  Computer Network Fundamentals\nsome immediate need. It usually consists of laptops, handheld, PCs, and other \ncommunication devices.\nPersonal area networks (PANs) that include the popular BlueTooth networks.\n• \nThere are several wireless IEEE 802.11-based LAN types, including\nInfrared\n• \nSpread Spectrum\n• \nNarrowband Microwave\n• \nWireless technology is discussed in further detail in Chapter 17.\n1.10 Conclusion\nWe have developed the theory of computer networks and discussed the topolo­\ngies, standards, and technologies of these networks. Because we were limited by \nspace, we could not discuss a number of interesting and widely used technologies \nboth in LAN and WAN areas. However, our limited discussion of these technolo­\ngies should give the reader an understanding and scope of the changes that are \ntalking place in network technologies. We hope that the trend will keep the con­\nvergence of the LAN, WAN, and wireless technologies on track so that the alarm­\ning number of different technologies is reduced and basic international standards \nare established.\nExercises\n\t 1.\t What is a communication protocol?\n\t 2.\t Why do we need communication protocols?\n\t 3.\t List the major protocols discussed in this chapter.\n\t 4.\t In addition to ISO and TCP/IP, what are the other models?\n\t 5.\t Discuss two LAN technologies that are NOT Ethernet or token ring.\n\t 6.\t Why is Ethernet technology more appealing to users than the rest of the LAN \ntechnologies?\n\t 7.\t What do you think are the weak points of TCP/IP?\n\t 8.\t Discuss the pros and cons of four LAN technologies.\n\t 9.\t List four WAN technologies.\n10.\t What technologies are found in MANs? Which of the technologies listed in \n8 and 9 can be used in MANs?\n" }, { "page_number": 63, "text": "References\b\n41\nAdvanced Exercises\n\t 1.\t X.25 and TCP/IP are very similar but there are differences. Discuss these \n­differences.\n\t 2.\t Discuss the reasons why ISDN failed to catch on as WAN technology.\n\t 3.\t Why is it difficult to establish permanent standards for a technology like WAN \nor LAN?\n\t 4.\t Many people see BlueTooth as a personal wireless network (PAN). Why is this \nso? 
What standard does BlueTooth use?\n\t 5.\t Some people think that BlueTooth is a magic technology that is going to change \nthe world. Read about BlueTooth and discuss this assertion.\n\t 6.\t Discuss the future of wireless LANs.\n\t 7.\t What is a wireless WAN? What kind of technology can be used in it? Is this the \nwave of the future?\n\t 8.\t With the future in mind, compare and contrast ATMs and ISDN technologies.\n\t 9.\t Do you foresee a fusion between LAN, MAN, and WAN technologies in the \nfuture? Support your response.\n10.\t Network technology is in transition. Discuss the direction of network technology.\nReferences\n1.\t William Stallings. Local and Metropolitan Area Network, Sixth Edition. Prentice Hall, 2000.\n2.\t Douglas E. Comar. Internetworking with TCP/IP: Principles, Protocols, and Architecture, \nFourth Edition. Prentice-Hall, 2000.\n3.\t RFC 1812. Requirements for IP Version 4 Routers http://www.cis.ohio-state.edu/cgi-bin/rfc/\nrfc1812.html#sec-2.2.3.\n4.\t Sangoma Technologies http://www.sangoma.com/x25.htm.\n" }, { "page_number": 64, "text": "J.M. Kizza, A Guide to Computer Network Security, Computer Communications and \n­Networks, DOI 10.1007/978-1-84800-917-2_2, © Springer-Verlag London Limited 2009\n\b\n43\nChapter 2\nUnderstanding Computer Network Security\n2.1  Introduction\nBefore we talk about network security, we need to understand in general terms what \nsecurity is. Security is a continuous process of protecting an object from unauthor­\nized access. It is as state of being or feeling protected from harm. That object in \nthat state may be a person, an organization such as a business, or property such as a \ncomputer system or a file. Security comes from secure which means, according to \nWebster Dictionary, a state of being free from care, anxiety, or fear [1].\nAn object can be in a physical state of security or a theoretical state of security. \nIn a physical state, a facility is secure if it is protected by a barrier like a fence, has \nsecure areas both inside and outside, and can resist penetration by intruders. This \nstate of security can be guaranteed if the following four protection mechanisms are \nin place: deterrence, prevention, detection, and response [1, 2].\nDeterrence\n• \n is usually the first line of defense against intruders who may try to gain \naccess. It works by creating an atmosphere intended to frighten intruders. Sometimes \nthis may involve warnings of severe consequences if security is breached.\nPrevention\n• \n is the process of trying to stop intruders from gaining access to the \nresources of the system. Barriers include firewalls, demilitalized zones (DMZs), \nand use of access items like keys, access cards, biometrics, and others to allow \nonly authorized users to use and access a facility.\nDetection\n• \n occurs when the intruder has succeeded or is in the process of gaining \naccess to the system. Signals from the detection process include alerts to the \nexistence of an intruder. Sometimes these alerts can be real time or stored for \nfurther analysis by the security personnel.\nResponse\n• \n is an aftereffect mechanism that tries to respond to the failure of the \nfirst three mechanisms. 
It works by trying to stop and/or prevent future damage \nor access to a facility.\nThe areas outside the protected system can be secured by wire and wall fencing, \nmounted noise or vibration sensors, security lighting, closed circuit television \n(CCTV), buried seismic sensors, or different photoelectric and microwave systems \n[1]. Inside the system, security can be enhanced by using electronic barriers such as \nfirewalls and passwords.\n" }, { "page_number": 65, "text": "44\b\n2  Understanding Computer Network Security\nDigital barriers – commonly known as firewalls, discussed in detail in Chapter 12, \ncan be used. Firewalls are hardware or software tools used to isolate the sensitive por­\ntions of an information system facility from the outside world and limit the potential \ndamage by a malicious intruder.\nA theoretical state of security, commonly known as pseudosecurity or security \nthrough obsecurity (STO) is a false hope of security. Many believe that an object \ncan be secure as long as nobody outside the core implementation group has knowl­\nedge about its existence. This security is often referred to as “bunk mentality” secu­\nrity. This is virtual security in the sense that it is not physically implemented like \nbuilding walls, issuing passwords, or putting up a firewall, but it is effectively based \nsolely on a philosophy. The philosophy itself relies on a need to know basis, imply­\ning that a person is not dangerous as long as that person doesn’t have knowledge \nthat could affect the security of the system like a network, for example. In real sys­\ntems where this security philosophy is used, security is assured through a presump­\ntion that only those with responsibility and who are trustworthy can use the system \nand nobody else needs to know. So, in effect, the philosophy is based on the trust of \nthose involved assuming that they will never leave. If they do, then that means the \nend of security for that system.\nThere are several examples where STO has been successfully used. These \ninclude Coca-Cola, KFC, and other companies that have, for generations, kept their \nsecret recipes secure based on a few trusted employees. But the overall STO is a \nfallacy that has been used by many software producers when they hide their codes. \nMany times, STO hides system vulnerabilities and weaknesses. This was demon­\nstrated vividly in Matt Blaze’s 1994 discovery of a flaw in the Escrowed Encryption \nStandard (Clipper) that could be used to circumvent law-enforcement monitoring. \nBlaze’s discovery allowed easier access to secure communication through the Clip­\nper technology than was previously possible, without access to keys [3]. The belief \nthat secrecy can make the system more secure is just that, a belief – a myth in fact. \nUnfortunately, the software industry still believes this myth.\nAlthough its usefulness has declined as the computing environment has changed \nto large open systems, new networking programming and network protocols, and as \nthe computing power available to the average person has increased, the philosophy \nis in fact still favored by many agencies, including the military, many government \nagencies, and private businesses.\nIn either security state, many objects can be thought of as being secure if such a \nstate, a condition, or a process is afforded to them. Because there are many of these \nobjects, we are going to focus on the security of a few of these object models. 
These \nwill be a computer, a computer network, and information.\n2.1.1  Computer Security\nThis is a study, which is a branch of Computer Science, focusing on creating a secure \nenvironment for the use of computers. It is a focus on the “behavior of users,” if you \nwill, required and the protocols in order to create a secure environment for anyone \n" }, { "page_number": 66, "text": "2.2  Securing the Computer Network\b\n45\nusing computers. This field, therefore, involves four areas of interest: the study of \ncomputer ethics, the development of both software and hardware protocols, and the \ndevelopment of best practices. It is a complex field of study involving detailed mathe­\nmatical designs of cryptographic protocols. We are not focusing on this in this book.\n2.1.2  Network Security\nAs we saw in chapter 1, computer networks are distributed networks of comput­\ners that are either strongly connected meaning that they share a lot of resources \nfrom one central computer or loosely connected, meaning that they share only those \nresources that can make the network work. When we talk about computer network \nsecurity, our focus object model has now changed. It is no longer one computer but \na network. So computer network security is a broader study of computer security. It \nis still a branch of computer science, but a lot broader than that of computer security. \nIt involves creating an environment in which a computer network, including all its \nresources, which are many; all the data in it both in storage and in transit; and all its \nusers are secure. Because it is wider than computer security, this is a more complex \nfield of study than computer security involving more detailed mathematical designs \nof cryptographic, communication, transport, and exchange protocols and best prac­\ntices. This book focuses on this field of study.\n2.1.3  Information Security\nInformation security is even a bigger field of study including computer and com­\nputer network security. This study is found in a variety of disciplines, including \ncomputer science, business management, information studies, and engineering. It \ninvolves the creation of a state in which information and data are secure. In this \nmodel, information or data is either in motion through the communication channels \nor in storage in databases on server. This, therefore, involves the study of not only \nmore detailed mathematical designs of cryptographic, communication, transport, \nand exchange protocols and best practices, but also the state of both data and infor­\nmation in motion. We are not discussing these in this book.\n2.2  Securing the Computer Network\nCreating security in the computer network model we are embarking on in this book \nmeans creating secure environments for a variety of resources. In this model, a \nresource is secure, based on the above definition, if that resource is protected from \nboth internal and external unauthorized access. These resources, physical or not, \nare objects. Ensuring the security of an object means protecting the object from \n" }, { "page_number": 67, "text": "46\b\n2  Understanding Computer Network Security\nunauthorized access both from within the object and externally. In short, we protect \nobjects. System objects are either tangible or nontangible. 
In a computer network \nmodel, the tangible objects are the hardware resources in the system, and the intan­\ngible object is the information and data in the system, both in transition and static \nin storage.\n2.2.1  Hardware\nProtecting hardware resources include protecting\nEnd user objects that include the user interface hardware components such as \n• \nall client system input components, including a keyboard, mouse, touch screen, \nlight pens, and others.\nNetwork objects like firewalls, hubs, switches, routers and gateways which are \n• \nvulnerable to hackers.\nNetwork communication channels to prevent eavesdroppers from intercepting \n• \nnetwork communications.\n2.2.2  Software\nProtecting software resources includes protecting hardware-based software, oper­\nating systems, server protocols, browsers, application software, and intellectual \nproperty stored on network storage disks and databases. It also involves protecting \nclient software such as investment portfolios, financial data, real estate records, \nimages or pictures, and other personal files commonly stored on home and business \ncomputers.\n2.3  Forms of Protection\nNow, we know what model objects are or need to be protected. Let us briefly, keep \ndetails for later, survey ways and forms of protecting these objects. Prevention of \nunauthorized access to system resources is achieved through a number of services \nthat include access control, authentication, confidentiality, integrity, and nonrepu­\ndiation.\n2.3.1  Access Control\nThis is a service the system uses, together with a user pre-provided identification \ninformation such as a password, to determine who uses what of its services. Let us \nlook at some forms of access control based on hardware and software.\n" }, { "page_number": 68, "text": "2.3  Forms of Protection\b\n47\n2.3.1.1  Hardware Access Control Systems\nRapid advances in technology have resulted in efficient access control tools \nthat are open and flexible, while at the same time ensuring reasonable precau­\ntions against risks. Access control tools falling in this category include the \nfollowing:\nAccess terminal. Terminal access points have become very sophisticated, and \n• \nnow they not only carry out user identification but also verify access rights, \ncontrol access points, and communicate with host computers. These activities \ncan be done in a variety of ways including fingerprint verification and real-time \nanti-break-in sensors. Network technology has made it possible for these units \nto be connected to a monitoring network or remain in a stand-alone off-line \nmode.\nVisual event monitoring. This is a combination of many technologies into one \n• \nvery useful and rapidly growing form of access control using a variety of real-\ntime technologies including video and audio signals, aerial photographs, and \nglobal positioning system (GPS) technology to identify locations.\nIdentification cards. Sometimes called proximity cards, these cards have \n• \nbecome very common these days as a means of access control in buildings, \nfinancial institutions, and other restricted areas. The cards come in a variety \nof forms, including magnetic, bar coded, contact chip, and a combination of \nthese.\nBiometric identification. This is perhaps the fastest growing form of control \n• \naccess tool today. Some of the most popular forms include fingerprint, iris, and \nvoice recognition. However, fingerprint recognition offers a higher level of \nsecurity.\nVideo surveillance. 
This is a replacement of CCTV of yester year, and it is \n• \ngaining popularity as an access control tool. With fast networking technologies \nand digital cameras, images can now be taken and analyzed very quickly, and \naction taken in minutes.\n2.3.1.2  Software Access Control Systems\nSoftware access control falls into two types: point of access monitoring and remote \nmonitoring. In point of access (POA), personal activities can be monitored by a \nPC-based application. The application can even be connected to a network or to a \ndesignated machine or machines. The application collects and stores access events \nand other events connected to the system operation and download access rights to \naccess terminals.\nIn remote mode, the terminals can be linked in a variety of ways, including the \nuse of modems, telephone lines, and all forms of wireless connections. Such termi­\nnals may, sometimes if needed, have an automatic calling at pre-set times if desired \nor have an attendant to report regularly.\n" }, { "page_number": 69, "text": "48\b\n2  Understanding Computer Network Security\n2.3.2  Authentication\nAuthentication is a service used to identify a user. User identity, especially of remote \nusers, is difficult because many users, especially those intending to cause harm, \nmay masquerade as the legitimate users when they actually are not. This service \nprovides a system with the capability to verify that a user is the very one he or she \nclaims to be based on what the user is, knows, and has.\nPhysically, we can authenticate users or user surrogates based on checking one \nor more of the following user items [2]:\nUser name (sometimes screen name)\n• \nPassword\n• \nRetinal images\n• \n: The user looks into an electronic device that maps his or her eye \nretina image; the system then compares this map with a similar map stored on \nthe system.\nFingerprints\n• \n: The user presses on or sometimes inserts a particular finger into \na device that makes a copy of the user fingerprint and then compares it with a \nsimilar image on the system user file.\nPhysical location\n• \n: The physical location of the system initiating an entry request \nis checked to ensure that a request is actually originating from a known and \nauthorized location. In networks, to check the authenticity of a client’s location a \nnetwork or Internet protocol (IP) address of the client machine is compared with \nthe one on the system user file. This method is used mostly in addition to other \nsecurity measures because it alone cannot guarantee security. If used alone, it \nprovides access to the requested system to anybody who has access to the client \nmachine.\nIdentity cards\n• \n: Increasingly, cards are being used as authenticating documents. \nWhoever is the carrier of the card gains access to the requested system. As is the \ncase with physical location authentication, card authentication is usually used \nas a second-level authentication tool because whoever has access to the card \nautomatically can gain access to the requested system.\n2.3.3  Confidentiality\nThe confidentiality service protects system data and information from unauthorized \ndisclosure. When data leave one extreme of a system such as a client’s computer \nin a network, it ventures out into a nontrusting environment. So, the recipient of \nthat data may not fully trust that no third party like a cryptanalysis or a man-in-the \nmiddle has eavesdropped on the data. 
This service uses encryption algorithms to \nensure that nothing of the sort happened while the data was in the wild.\nEncryption protects the communications channel from sniffers. Sniffers are pro­\ngrams written for and installed on the communication channels to eavesdrop on net­\nwork traffic, examining all traffic on selected network segments. Sniffers are easy to \n" }, { "page_number": 70, "text": "2.3  Forms of Protection\b\n49\nwrite and install and difficult to detect. The encryption process uses an encryption \nalgorithm and key to transform data at the source, called plaintext; turn it into an \nencrypted form called ciphertext, usually unintelligible form; and finally recover it at \nthe sink. The encryption algorithm can either be symmetric or asymmetric. Symmetric \nencryption or secret key encryption, as it is usually called, uses a common key and the \nsame cryptographic algorithm to scramble and unscramble the message. Asymmetric \nencryption commonly known as public key encryption uses two different keys: a pub­\nlic key known by all and a private key known by only the sender and the receiver. Both \nthe sender and the receiver each has a pair of these keys, one public and one private. To \nencrypt a message, a sender uses the receiver’s public key which was published. Upon \nreceipt, the recipient of the message decrypts it with his or her private key.\n2.3.4  Integrity\nThe integrity service protects data against active threats such as those that may \nalter it. Just like data confidentiality, data in transition between the sending and \nreceiving parties is susceptible to many threats from hackers, eavesdroppers, and \ncryptanalysts whose goal is to intercept the data and alter it based on their motives. \nThis service, through encryption and hashing algorithms, ensures that the integrity \nof the transient data is intact. A hash function takes an input message M and creates \na code from it. The code is commonly referred to as a hash or a message digest. \nA one-way hash function is used to create a signature of the message – just like a \nhuman fingerprint. The hash function is, therefore, used to provide the message’s \nintegrity and authenticity. The signature is then attached to the message before it is \nsent by the sender to the recipient.\n2.3.5  Nonrepudiation\nThis is a security service that provides proof of origin and delivery of service and/\nor information. In real life, it is possible that the sender may deny the ownership \nof the exchanged digital data that originated from him or her. This service, through \ndigital signature and encryption algorithms, ensures that digital data may not be \nrepudiated by providing proof of origin that is difficult to deny. A digital signature \nis a cryptographic mechanism that is the electronic equivalent of a written signature \nto authenticate a piece of data as to the identity of the sender.\nWe have to be careful here because the term “nonrepudiation” has two meanings, \none in the legal world and the other in the cryptotechnical world. 
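Before looking at those two meanings, the hashing idea behind the integrity service can be made concrete with a few lines of Python. The sketch uses the standard hashlib module, and the message text is invented purely for illustration.

import hashlib

message = b"Transfer 100 dollars to account 12345"        # an invented example message
digest = hashlib.sha256(message).hexdigest()               # the message digest, its "fingerprint"
print(digest)

# Altering even one character of the message yields a completely different digest,
# which is how the recipient detects that the data was changed in transit.
tampered = b"Transfer 900 dollars to account 12345"
print(hashlib.sha256(tampered).hexdigest() == digest)      # False

On its own a digest only reveals that something changed; for integrity and nonrepudiation the sender signs the digest, as described above, so that an attacker cannot simply recompute it after altering the message.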
Adrian McCullagh \nand Willian Caelli define “nonrepudiation” in a cryptotechnical way as follows [4]:\nIn authentication, a service that provides proof of the integrity and origin of data, \n• \nboth in a forgery-proof relationship, which can be verified by any third party at \nany time; or\n" }, { "page_number": 71, "text": "50\b\n2  Understanding Computer Network Security\nIn authentication, an authentication that with high assurance can be asserted to \n• \nbe genuine, and that cannot subsequently be refuted.\nHowever, in the legal world, there is always a basis for repudiation This basis, \nagain according to Adrian McCullagh, can be as follows:\nThe signature is a forgery.\n• \nThe signature is not a forgery, but was obtained via\n• \nUnconscionable conduct by a party to a transaction;\n• \nFraud instigated by a third party;\n• \nUndue influence exerted by a third party.\n• \nWe will use the cryptotechnical definition throughout the book. To achieve non­\nrepudiation, users and application environments require a nonrepudiation service \nto collect, maintain, and make available the irrefutable evidence. The best services \nfor nonrepudiation are digital signatures and encryption. These services offer trust \nby generating unforgettable evidence of transactions that can be used for dispute \nresolution after the fact.\n2.4  Security Standards\nThe computer network model also suffers from the standardization problem. Secu­\nrity protocols, solutions, and best practices that can secure the computer network \nmodel come in many different types and use different technologies resulting in \nincompatibility of interfaces (more in Chapter 16), less interoperability, and unifor­\nmity among the many system resources with differing technologies within the sys­\ntem and between systems. System managers, security chiefs, and experts, therefore, \nchoose or prefer standards, if no de facto standard exists, that are based on service, \nindustry, size, or mission. The type of service offered by an organization determines \nthe types of security standards used. Like service, the nature of the industry an \norganization is in also determines the types of services offered by the system, which \nin turn determines the type of standards to adopt. The size of an organization also \ndetermines what type of standards to adopt. In relatively small establishments, the \nease of implementation and running of the system influence the standards to be \nadopted. Finally, the mission of the establishment also determines the types of stan­\ndards used. For example, government agencies have a mission that differs from that \nof a university. These two organizations, therefore, may choose different standards. \nWe are, therefore, going to discuss security standards along these divisions. Before \nwe do that, however, let us look at the bodies and organizations behind the formula­\ntion, development, and maintenance of these standards. 
These bodies fall into the \nfollowing categories:\nInternational organizations such as the Internet Engineering Task Force (IETF), \n• \nthe Institute of Electronic and Electric Engineers (IEEE), the International \n" }, { "page_number": 72, "text": "2.4  Security Standards\b\n51\nStandards Organization (ISO), and the International Telecommunications \nUnion (ITU).\nMultinational organizations like the European Committee for Standardization \n• \n(CEN), Commission of European Union (CEU), and European Telecommunications \nStandards Institute (ETSI).\nNational governmental organizations like the National Institute of Standards \n• \nand Technology (NIST), American National Standards Institute (ANSI), and \nCanadian Standards Council (CSC).\nSector specific organizations such as the European Committee for Banking \n• \nStandards (ECBS), European Computer Manufacturers Association (ECMA), \nand Institute of Electronic and Electric Engineers (IEEE).\nIndustry standards such as RSA, the Open Group (OSF + X/Open), Object \n• \nManagement Group (OMG), World Wide Web Consortium (W3C)), and \nthe Organization for the Advancement of Structured Information Standards \n(OASIS).\nOther sources of standards in security and cryptography.\n• \nEach one of these organizations has a set of standards. Table 2.1 shows some of \nthese standards. In the table, x is any digit between 0 and 9.\n2.4.1  Security Standards Based on Type of Service/Industry\nSystem and security managers and users may choose a security standard to use \nbased on the type of industry they are in and what type of services that industry \nprovides. Table 2.2 shows some of these services and the corresponding security \nstandards that can be used for these services.\nLet us now give some details of some of these standards.\nTable 2.1  Organizations and their standards\nOrganization\t\nStandards\nIETF\t\nIPSec, XML-Signature XPath Filter2, X.509, Kerberos, S/MIME,\nISO\t\nISO 7498–2:1989 Information processing systems – Open Systems \nInterconnection, ISO/IEC 979x, ISO/IEC 997, ISO/IEC 1011x, ISO/IEC \n11xx, ISO/IEC DTR 13xxx, ISO/IEC DTR 14xxx\nITU\t\nX.2xx, X.5xx, X.7xx, X.80x,\nECBS\t\nTR-40x\nECMA\t\nECMA-13x, ECMA-20x\nNIST\t\nX3 Information Processing, X9.xx Financial, X12.xx Electronic Data \nExchange\nIEEE\t\nP1363 Standard Specifications, For Public-Key Cryptography, IEEE 802.xx, \nIEEE P802.11g, Wireless LAN Medium Access Control (MAC) and \nPhysical Layer (PHY) Specifications\nRSA\t\nPKCS #x – Public Key Cryptographic Standard\nW3C\t\nXML Encryption, XML Signature, exXensible Key Management Specification \n(XKMS)\n" }, { "page_number": 73, "text": "52\b\n2  Understanding Computer Network Security\n2.4.1.1  Public-Key Cryptography Standards (PKCS)\nIn order to provide a basis and a catalyst for interoperable security based on public-key \ncryptographic techniques, the Public-Key Cryptography Standards (PKCS) were estab­\nlished. These are recent security standards, first published in 1991 following discussions \nof a small group of early adopters of public-key technology. Since their establishment, \nthey have become the basis for many formal standards and are implemented widely.\nIn general, PKCS are security specifications produced by RSA Laboratories in \ncooperation with secure systems developers worldwide for the purpose of acceler­\nating the deployment of public-key cryptography. 
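To make the public key idea itself concrete, the following deliberately tiny, textbook style RSA sketch in Python shows one key pair encrypting and decrypting a small number. The primes are far too small for any real use and the values are invented; real systems rely on the PKCS specifications and vetted cryptographic libraries.

# Toy RSA with tiny primes, for illustration only.
p, q = 61, 53
n = p * q                          # public modulus
phi = (p - 1) * (q - 1)
e = 17                             # public exponent
d = pow(e, -1, phi)                # private exponent, the modular inverse (Python 3.8+)

message = 65                       # a small integer standing in for an encoded message
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # only the private key holder (d, n) can decrypt
print(ciphertext, recovered)       # 2790 65

Standards such as PKCS #1 specify, among other things, how such keys and the padding around the message are encoded so that independently written products can interoperate.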
In fact, worldwide contributions \nfrom the PKCS series have become part of many formal and de facto standards, \nincluding ANSI X9 documents, PKIX, SET, S/MIME, and SSL.\n2.4.1.2  The Standards For Interoperable Secure MIME (S/MIME)\n S/MIME (Secure Multipurpose Internet Mail Extensions) is a specification for \nsecure electronic messaging. It came to address a growing problem of e-mail inter­\nception and forgery at the time of increasing digital communication. So, in 1995, \nTable 2.2  Security standards based on services\nArea of Application\nService\nSecurity Standard\nInternet security\nNetwork authentication\nKerberos\nSecure TCP/IP communications\nover the Internet\nIPSec\nPrivacy-enhanced electronic mail\nS/MIME, PGP\nPublic-key cryptography standards\n3-DES, DSA, RSA, MD-5,\n  SHA-1, PKCS\nSecure hypertext transfer protocol\nS-HTTP\nAuthentication of directory users\nX.509/ISO/IEC\n  9594–8:2000:\nSecurity protocol for privacy\non Internet/transport security\nSSL, TLS, SET\n\u0007Digital signature and \nencryption\n\u0007Advanced encryption standard/PKI/\ndigital certificates, XML digital\nsignatures\nX509, RSA BSAFE \nSecurXML-C, DES, \nAES, DSS/DSA, EESSI, \nISO 9xxx, ISO, SHA/\nSHS, XML Digital \nSignatures (XMLD­\nSIG), XML Encryption \n(XMLENC), XML Key \nManagement Specifica­\ntion (XKMS)\nLogin and authentication\nAuthentication of user’s right to use\nsystem or network resources.\nSAML, Liberty Alliance,\nFIPS 112\nFirewall and system\nsecurity\nSecurity of local, wide, and\nmetropolitan area networks\nSecure Data Exchange \n(SDE) protocol for IEEE\n802, ISO/IEC 10164\n" }, { "page_number": 74, "text": "2.4  Security Standards\b\n53\nseveral software vendors got together and created the S/MIME specification with \nthe goal of making it easy to secure messages from prying eyes.\nIt works by building a security layer on top of the industry standard MIME pro­\ntocol based on PKCS. The use of PKCS avails the user of S/MIME with immediate \nprivacy, data integrity, and authentication of an e-mail package. This has given the \nstandard a wide appeal, leading to S/MIME moving beyond just e-mail. Already \nvendor software warehouses, including Microsoft, Lotus, Banyan, and other on-line \nelectronic commerce services are using S/MIME.\n2.4.1.3  Federal Information Processing Standards (FIPS)\nFederal Information Processing Standards (FIPS) are National Institute of Stan­\ndards and Technology (NIST)-approved standards for advanced encryption. These \nare U.S. federal government standards and guidelines in a variety of areas in data \nprocessing. They are recommended by NIST to be used by U.S. government organi­\nzations and others in the private sector to protect sensitive information. They range \nfrom FIPS 31 issued in 1974 to current FIPS 198.\n2.4.1.4  Secure Sockets Layer (SSL)\nSSL is an encryption standard for most Web transactions. In fact, it is becoming \nthe most popular type of e-commerce encryption. Most conventional intranet and \nextranet applications would typically require a combination of security mechanisms \nthat include\nEncryption\n• \nAuthentication\n• \nAccess control\n• \nSSL provides the encryption component implemented within the TCP/IP proto­\ncol. 
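From a programmer's point of view, wrapping an ordinary TCP connection in SSL, or its successor TLS, is a small step. The sketch below uses Python's standard ssl module; example.com is only a placeholder host, and the snippet needs network access to run.

import socket
import ssl

context = ssl.create_default_context()    # loads trusted CA certificates and turns on verification

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        # At this point the handshake is done: the server has been authenticated
        # and everything sent over tls_sock is encrypted and integrity protected.
        print(tls_sock.version())                     # for example 'TLSv1.3'
        print(tls_sock.getpeercert()["subject"])      # the server certificate's subject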
Developed by Netscape Communications, SSL provides secure web client and \nserver communications, including encryption, authentication, and integrity check­\ning for a TCP/IP connection.\n2.4.1.5  Web Services Security Standards\nIn order for Web transactions such as e-commerce to really take off, customers \nwill need to see an open architectural model backed up by a standards-based secu­\nrity framework. Security players, including standards organizations, must provide \nthat open model and a framework that is interoperable, that is, as vendor-neutral \nas possible, and able to resolve critical, often sensitive, issues related to security. \nThe security framework must also include Web interoperability standards for access \ncontrol, provisioning, biometrics, and digital rights.\n" }, { "page_number": 75, "text": "54\b\n2  Understanding Computer Network Security\nTo meet the challenges of Web security, two industry rival standards companies are \ndeveloping new standards for XML digital signatures that include XML Encryption, \nXML Signature, and exXensible Key Management Specification (XKMS) by the World \nWide Web Consortium (W3C), and BSAFE SecurXML-C software development kit \n(SDK) for implementing XML digital signatures by rival RSA Security. In addition, \nRSA also offers a SAML Specification (Security Assertion Markup Language), an \nXML framework for exchanging authentication, and authorization information. It is \ndesigned to enable secure single sign-on across portals within and across organizations\n2.4.2  Security Standards Based on Size/Implementation\nIf the network is small or it is a small organization such as a university, for example, \nsecurity standards can be spelled out as best practices on the security of the sys­\ntem, including the physical security of equipment, system software, and application \nsoftware.\nPhysical security – this emphasizes the need for security of computers running \n• \nthe Web servers and how these machines should be kept physically secured in a \nlocked area. Standards are also needed for backup storage media like tapes and \nremovable disks.\nOperating systems. The emphasis here is on privileges and number of accounts, \n• \nand security standards are set based on these. For example, the number of users \nwith most privileged access like root in UNIX or Administrator in NT should be \nkept to a minimum. Set standards for privileged users. Keep to a minimum the \nnumber of user accounts on the system. State the number of services offered to \nclients computers by the server, keeping them to a minimum. Set a standard for \nauthentication such as user passwords and for applying security patches.\nSystem logs. Logs always contain sensitive information such as dates and times \n• \nof user access. Logs containing sensitive information should be accessible only \nto authorized staff and should not be publicly accessible. Set a standard on who \nand when logs should be viewed and analyzed.\nData security. Set a standard for dealing with files that contain sensitive data. 
As an example, Table 2.3 shows how such standards may be set.
Table 2.3  Best security practices for a small organization (application area: security standard)
  Operating systems: Unix, Linux, Windows, etc.
  Virus protection: Norton
  Email: PGP, S/MIME
  Firewalls:
  Telnet and FTP terminal applications: SSH (secure shell)
2.4.3  Security Standards Based on Interests
In many cases, institutions and government agencies choose to pick a security standard based solely on the interest of the institution or the country. Table 2.4 below shows some security standards based on interest, and the subsections following the table also show security best practices and security standards based more on national interests.
2.4.3.1  British Standard 799 (BS 7799)
The BS 7799 standard outlines a code of practice for information security management that further helps to determine how to secure network systems. It puts forward a common framework that enables companies to develop, implement, and measure effective security management practice and provides confidence in inter-company trading. BS 7799 was first written in 1993, but it was not officially published until 1995, and it was published as the international standard BS ISO/IEC 17799:2000 in December 2000.
2.4.3.2  Orange Book
This is the U.S. Department of Defense Trusted Computer System Evaluation Criteria (DOD-5200.28-STD) standard, known as the Orange Book. For a long time, it has been the de facto standard for computer security used in government and industry, but as we will see in Chapter 15, other standards have now been developed to either supplement it or replace it. First published in 1983, it is part of the collection of security evaluation documents referred to as the "Rainbow Series."
2.4.3.3  Homeland National Security Awareness
After the September 11, 2001, attack on the United States, the government created a new cabinet department of Homeland Security to be in charge of all national security issues. The Homeland Security department created a security advisory system made up of five levels ranging from green (for low security) to red (severe) for heightened security. Figure 2.1 shows these levels.
Fig. 2.1  Department of Homeland Security Awareness Levels [7]
Table 2.4  Interest-based security standards (area of application; service; security standard)
Banking:
  Security within banking IT systems: ISO 8730, ISO 8732, ISO/TR 17944
Financial:
  Security of financial services: ANSI X9.x, ANSI X9.xx
2.4.4  Best Practices in Security
As you have noticed from our discussion, there is a rich repertoire of standards and best practices across the system and infosecurity landscape, because as technology evolves, the security situation becomes more complex and grows more so every day. With these changes, however, some truths and approaches to security remain the same. One of these constants is having a sound strategy for dealing with the changing security landscape. Developing such a security strategy involves keeping an eye on the reality of the changing technology scene and rapidly increasing security threats.
To keep abreast of all these changes, security experts and security managers must know how and what to protect and what controls to put in place and at what time. It takes security management, planning, policy development, and the design of procedures. Here are some examples of best practices.
Commonly Accepted Security Practices and Regulations (CASPR): Developed by the CASPR Project, this effort aims to provide a set of best practices that can be universally applied to any organization regardless of industry, size, or mission. Such best practices would, for example, come from the world's experts in information security. CASPR distills the knowledge into a series of papers and publishes them so they are freely available on the Internet to everyone. The project covers a wide area, including operating system and system security, network and telecommunication security, access control and authentication, infosecurity management, infosecurity auditing and assessment, infosecurity logging and monitoring, application security, application and system development, and investigations and forensics. In order to distribute their papers freely, the founders of CASPR use the open source movement as a guide, and they release the papers under the GNU Free Documentation License to make sure they and any derivatives remain freely available.
Control Objectives for Information and (Related) Technology (COBIT): Developed by IT auditors and made available through the Information Systems Audit and Control Association, COBIT provides a framework for assessing a security program. COBIT is an open standard for control of information technology. The IT Governance Institute, together with worldwide industry experts, analysts, and academics, has developed new definitions for COBIT that consist of Maturity Models, Critical Success Factors (CSFs), Key Goal Indicators (KGIs), and Key Performance Indicators (KPIs). COBIT was designed to help three distinct audiences [6]:
• Management, who need to balance risk and control investment in an often unpredictable IT environment
• Users, who need to obtain assurance on the security and controls of the IT services upon which they depend to deliver their products and services to internal and external customers
• Auditors, who can use it to substantiate their opinions and/or provide advice to management on internal controls.
Operationally Critical Threat, Asset and Vulnerability Evaluation (OCTAVE) by Carnegie Mellon's CERT Coordination Center: OCTAVE is an approach for self-directed information security risk evaluations that [7]
• Puts organizations in charge
• Balances critical information assets, business needs, threats, and vulnerabilities
• Measures the organization against known or accepted good security practices
• Establishes an organization-wide protection strategy and information security risk mitigation plans
In short, it provides measures based on accepted best practices for evaluating security programs.
It does this in three phases:\nFirst, it determines information assets that must be protected.\n• \nEvaluates the technology infrastructure to determine if it can protect those assets \n• \nand how vulnerable it is and defines the risks to critical assets\nUses good security practices, establishes an organization-wide protection strategy \n• \nand mitigation plans for specific risks to critical assets.\n" }, { "page_number": 79, "text": "58\b\n2  Understanding Computer Network Security\nExercises\n\t 1.\t What is security and Information security? What is the difference?\n\t 2.\t It has been stated that security is a continuous process; what are the states in this \nprocess?\n\t 3.\t What are the differences between symmetric and asymmetric key systems?\n\t 4.\t What is PKI? Why is it so important in information security?\n\t 5.\t What is the difference between authentication and nonrepudiation?\n\t 6.\t Why is there a dispute between digital nonrepudiation and legal nonrepudia­\ntion?\n\t 7.\t Virtual security seems to work in some systems. Why is this so? Can you apply \nit in a network environment? Support your response.\n\t 8.\t Security best practices are security guidelines and policies aimed at enhancing sys­\ntem security. Can they work without known and proven security mechanisms?\n\t 9.\t Does information confidentiality infer information integrity? Explain your \nresponse.\n10.\t What are the best security mechanisms to ensure information confidentiality?\nAdvanced Exercises\n\t 1.\t In the chapter, we have classified security standards based on industry, size, and \nmission. What other classifications can you make and why?\n\t 2.\t Most of the encryption standards that are being used such as RSA and DES \nhave not been formally proven to be safe. Why then do we take them to be \nsecure – what evidence do we have?\n\t 3.\t IPSec provides security at the network layer. What other security mechanism is \napplicable at the network layer? Do network layer security solutions offer better \nsecurity?\n\t 5.\t Discuss two security mechanisms applied at the application layer. Are they \nsafer than those applied at the lower network layer? Support your response.\n\t 6.\t Are there security mechanisms applicable at transport layer? Is it safer?\n\t 7.\t Discuss the difficulties encountered in enforcing security best practices.\n\t 8.\t Some security experts do not believe in security policies. Do you? Why or why \nnot?\n\t 9.\t Security standards are changing daily. Is it wise to pick a security standard \nthen? Why or why not?\n10.\t If you are an enterprise security chief, how would you go about choosing a \nsecurity best practice? Is it good security policy to always use a best security \npractice? What are the benefits of using a best practice?\n11.\t Why it is important to have a security plan despite the various views of security \nexperts concerning its importance?\n" }, { "page_number": 80, "text": "References\b\n59\nReferences\n\t 1.\t Kizza, Joseph M. Social and Ethical Issues in the Information Age. 2nd edition, New York: \nSpringer, 2003.\n\t 2.\t Scherphier, A. CS596 Client-Server Programming Security. http://www.sdsu.edu/∼cs596/\nsecurity.html.\n\t 3.\t Mercuri, Rebecca and Peter Neumann. Security by Obsecurity. Communication of the ACM. \nVol.46, No.11. Page 160.\n\t 4.\t McCullagh, Adrian and Willian Caelli Non-repudiation in the Digital Environment. http://\nwww.firstmonday.dk/issues/issue5_8/mccullagh/index.html#author.\n\t 5.\t Department of Homeland Security. 
http://www.dohs.gov/.\n\t 6.\t CobiT a Practical Toolkit for IT Governance. http://www.ncc.co.uk/ncc/myitadviser/archive/\nissue8/business_processes.cfm.\n\t 7.\t OCTAVE: Information Security Risk Evaluation. http://www.cert.org/octave/.\n" }, { "page_number": 81, "text": "Part II\nSecurity Challenges \nto Computer Networks\n" }, { "page_number": 82, "text": "J.M. Kizza, A Guide to Computer Network Security, Computer Communications and \n­Networks, DOI 10.1007/978-1-84800-917-2_3, © Springer-Verlag London Limited 2009\n\b\n63\nChapter 3\nSecurity Threats to Computer Networks\nCreators of computer viruses are winning the battle with law \nenforcers and getting away with crimes that cost the global \neconomy some $13 billion a year. – Microsoft Official, Reuters \nNews Wednesday, December 3, 2003\n3.1  Introduction\nIn February, 2002, the Internet security watch group CERT Coordination Center \ndisclosed that global networks, including the Internet, phone systems, and the elec­\ntrical power grid, are vulnerable to attack because of weakness in programming in \na small but key network component. The component, an Abstract Syntax Notation \nOne, or ASN.1, is a communication protocol used widely in the Simple Network \nManagement Protocol (SNMP).\nThere was widespread fear among government, networking manufacturers, \nsecurity researchers, and IT executives because the component is vital in many \ncommunication grids, including national critical infrastructures such as parts of the \nInternet, phone systems, and the electrical power grid. These networks were vulner­\nable to disruptive buffer overflow and malformed packet attacks.\nThis example illustrates but one of many potential incidents that can cause \nwidespread fear and panic among government, networking manufacturers, security \nresearchers, and IT executives when they think of the consequences of what might \nhappen to the global networks.\nThe number of threats is rising daily, yet the time window to deal with them \nis rapidly shrinking. Hacker tools are becoming more sophisticated and powerful. \nCurrently, the average time between the point at which a vulnerability is announced \nand when it is actually deployed in the wild is getting shorter and shorter.\nTraditionally, security has been defined as a process to prevent unauthorized \naccess, use, alteration, theft, or physical damage to an object through maintaining \nhigh confidentiality and integrity of information about the object and making infor­\nmation about the object available whenever needed. However, there is a common \nfallacy, taken for granted by many, that a perfect state of security can be achieved; \nthey are wrong. There is nothing like a secure state of any object, tangible or not, \n" }, { "page_number": 83, "text": "64\b\n3  Security Threats to Computer Networks\nbecause no such object can ever be in a perfectly secure state and still be useful. \nAn object is secure if the process can maintain its highest intrinsic value. Since \nthe intrinsic value of an object depends on a number of factors, both internal and \nexternal to the object during a given time frame, an object is secure if the object \nassumes its maximum intrinsic value under all possible conditions. The process of \nsecurity, therefore, strives to maintain the maximum intrinsic value of the object at \nall times.\nInformation is an object. Although it is an intangible object, its intrinsic value \ncan be maintained in a high state, thus ensuring that it is secure. 
Since our focus in this book is on global computer network security, we will view the security of this global network as composed of two types of objects: the tangible objects such as the servers, clients, and communication channels, and the intangible objects such as the information that is stored on servers and clients and that moves through the communication channels.
Ensuring the security of the global computer networks requires maintaining the highest intrinsic value of both the tangible objects and information, the intangible one. Because of both internal and external forces, it is not easy to maintain the highest level of the intrinsic value of an object. These forces constitute a security threat to the object. For the global computer network, the security threat is directed at the tangible and the intangible objects that make up the global infrastructure, such as servers, clients, communication channels, files, and information.
The threat itself comes in many forms, including viruses, worms, distributed denial of service, and electronic bombs, and it derives from many motives, including revenge, personal gain, hate, and joy rides, to name but a few.
3.2  Sources of Security Threats
The security threat to computer systems springs from a number of factors:
• Weaknesses in the network infrastructure and communication protocols, which create an appetite and a challenge to the hacker mind
• The rapid growth of cyberspace into a vital global communication and business network on which international commerce and business transactions are increasingly being performed and to which many national critical infrastructures are being connected
• The growth of the hacker community, whose members are usually experts at gaining unauthorized access into systems that run not only companies and governments but also critical national infrastructures
• Vulnerabilities in operating system protocols whose services run the computers that run the communication network
• The insider effect, resulting from workers who steal and sell company databases and mailing lists or even confidential business documents
• Social engineering
• Physical theft from within organizations of items such as laptop and hand-held computers with powerful communication technology and potentially sensitive information
• Security as a moving target
3.2.1  Design Philosophy
Although the design philosophy on which both the computer network infrastructure and communication protocols were built has tremendously boosted cyberspace development, the same design philosophy has been a constant source of the many ills plaguing cyberspace. The growth of the Internet and cyberspace in general was based on an open architecture, work-in-progress philosophy. This philosophy attracted the brightest minds to get their hands dirty and contribute to the infrastructure and protocols. With many contributing their best ideas for free, the Internet grew in leaps and bounds. This philosophy also helped the spirit of individualism and adventurism, both of which have driven the growth of the computer industry and underscored the rapid and sometimes motivated growth of cyberspace.
Because the philosophy was not based on clear blueprints, new developments and additions came about as reactions to the shortfalls and changing needs of a developing infrastructure.
The lack of a comprehensive blueprint and the demand-driven \ndesign and development of protocols are causing the ever present weak points and \nloopholes in the underlying computer network infrastructure and protocols.\nIn addition to the philosophy, the developers of the network infrastructure and pro­\ntocols also followed a policy to create an interface that is as user-friendly, efficient, \nand transparent as possible so that all users of all education levels can use it unaware \nof the working of the networks and therefore are not concerned with the details.\nThe designers of the communication network infrastructure thought it was better \nthis way if the system is to serve as many people as possible. Making the interface \nthis easy and far removed from the details, though, has its own downside in that the \nuser never cares about and pays very little attention to the security of the system.\nLike a magnet, the policy has attracted all sorts of people who exploits the net­\nwork’s vulnerable and weak points in search of a challenge, adventurism, fun, and \nall forms of personal gratification.\n3.2.2  \u0007Weaknesses in Network Infrastructure and Communication \nProtocols\nCompounding the problems created by the design philosophy and policy are the \nweaknesses in the communication protocols. The Internet is a packet network that \nworks by breaking the data to be transmitted into small individually addressed \npackets that are downloaded on the network’s mesh of switching elements. Each \nindividual packet finds its way through the network with no predetermined route \nand the packets are reassembled to form the original message by the receiving ele­\nment. To work successfully, packet networks need a strong trust relationship that \nmust exist among the transmitting elements.\nAs packets are di-assembled, transmitted, and re-assembled, the security of each \nindividual packet and the intermediary transmitting elements must be guaranteed. \nThis is not always the case in the current protocols of cyberspace. There are areas \n" }, { "page_number": 85, "text": "66\b\n3  Security Threats to Computer Networks\nwhere, through port scans, determined users have managed to intrude, penetrate, \nfool, and intercept the packets.\nThe two main communication protocols on each server in the network, UDP and \nTCP, use port numbers to identify higher layer services. Each higher layer service \non a client uses a unique port number to request a service from the server and each \nserver uses a port number to identify the service needed by a client. The cardinal \nrule of a secure communication protocol in a server is never to leave any port open \nin the absence of a useful service. If no such service is offered, its port should never \nbe open. Even if the service is offered by the server, its port should never be left \nopen unless it is legitimately in use.\nIn the initial communication between a client and a server, the client addresses \nthe server via a port number in a process called a three-way handshake. The three-\nway handshake, when successful, establishes a TCP virtual connection between the \nserver and the client. This virtual connection is required before any communication \nbetween the two can begin. The process begins by a client/host sending a TCP seg­\nment with the synchronize (SYN) flag set; the server/host responds with a segment \nthat has the acknowledge valid (ACK) and SYN flags set, and the first host responds \nwith a segment that has only the ACK flag set. 
This exchange is shown in Fig. 3.1. The three-way handshake suffers from a half-open socket problem when the server trusts the client that originated the handshake and leaves its port door open for further communication from the client.
As long as the half-open port remains open, an intruder can enter the system because, while one port remains open, the server can still entertain other three-way handshakes from other clients that want to communicate with it. Several half-open ports can lead to network security exploits involving both the TCP/IP and UDP protocols: Internet Protocol spoofing (IP spoofing), in which the IP addresses of the source element in the data packets are altered and replaced with bogus addresses, and SYN flooding, where the server is overwhelmed by spoofed packets sent to it.
Fig. 3.1  A three-way handshake (the client sends a SYN segment to the server's welcome port; the server replies with SYN-ACK and creates a communication port; the client's ACK establishes the connection)
In addition to the three-way handshake, ports are used widely in network communication. There are well-known ports used by processes that offer services. For example, ports 0 through 1023 are used widely by system processes and other highly privileged programs. This means that if access to these ports is compromised, the intruder can get access to the whole system. Intruders find open ports via port scans. The two examples below from G-Lock Software illustrate how a port scan can be made [1].
• TCP connect( ) scanning is the most basic form of TCP scanning. An attacker's host is directed to issue a connect( ) system call to a list of selected ports on the target machine. If any of these ports is listening, the connect( ) system call will succeed; otherwise, the port is unreachable and the service is unavailable. (A minimal sketch of such a scan appears below.)
• UDP Internet Control Message Protocol (ICMP) port unreachable scanning is one of the few UDP scans. Recall from Chapter 1 that UDP is a connectionless protocol; so, it is harder to scan than TCP because UDP ports are not required to respond to probes. Most implementations generate an ICMP port_unreachable error when an intruder sends a packet to a closed UDP port. When this response does not come, the intruder has found an active port.
In addition to port number weaknesses usually identifiable via port scans, both TCP and UDP protocols suffer from other weaknesses.
Packet transmissions between network elements can be intercepted and their contents altered, as in the initial sequence number attack. Sequence numbers are integer numbers assigned to each transmitted packet, indicating their order of arrival at the receiving element. Upon receipt of the packets, the receiving element acknowledges them in a two-way communication session during which both transmitting elements talk to each other simultaneously in full duplex.
In the initial sequence number attack, the attacker intercepts the communication session between two or more communicating elements and then guesses the next sequence number in the communication session. The intruder then slips the spoofed IP addresses into the packets transmitted to the server. The server sends an acknowledgment to the spoofed clients. Infrastructure vulnerability attacks also include session attacks, packet sniffing, buffer overflow, and session hijacking.
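As a minimal, hypothetical illustration of the TCP connect( ) scanning technique described above, the following Python sketch reports a port as open only if the full three-way handshake completes. The target address is a placeholder, and probing of this kind should only be directed at hosts one is authorized to test.

import socket

def connect_scan(host, ports, timeout=0.5):
    # Basic TCP connect( ) scan: connect_ex() returns 0 only if the
    # three-way handshake completes, i.e., the port is listening.
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Placeholder target: probe the well-known ports on the local machine.
print(connect_scan("127.0.0.1", range(1, 1024)))

A UDP scan of the kind described above cannot rely on a handshake; it instead has to watch for ICMP port-unreachable replies, which generally requires raw sockets and elevated privileges, so it is not shown here.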
These attacks \nare discussed in later chapters.\nThe infrastructure attacks we have discussed so far are of the penetration type \nwhere the intruder physically enters the system infrastructure, either at the transmit­\nting element or in the transmitting channel levels, and alters the content of packets. \nIn the next set of infrastructure attacks, a different approach of vulnerability exploi­\ntation is used. This is the distributed denial of services (DDoS).\nThe DDoS attacks are attacks that are generally classified as nuisance attacks in \nthe sense that they simply interrupt the services of the system. System interruption \ncan be as serious as destroying a computer’s hard disk or as simple as using up all the \navailable memory of the system. DDoS attacks come in many forms, but the most \ncommon are the following: smurfing, ICMP protocol, and ping of death attacks.\nThe “smurf” attack utilizes the broken down trust relationship created by IP \nspoofing. An offending element sends a large amount of spoofed ping packets \n" }, { "page_number": 87, "text": "68\b\n3  Security Threats to Computer Networks\ncontaining the victim’s IP address as the source address. Ping traffic, also called \nProtocol Overview Internet Control Message Protocol (ICMP) in the Internet \ncommunity, is used to report out-of-band messages related to network operation \nor mis-operation such as a host or entire portion of the network being unreach­\nable, owing to some type of failure. The pings are then directed to a large number \nof network subnets, a subnet being a small independent network such as a LAN. If \nall the subnets reply to the victim address, the victim element receives a high rate \nof requests from the spoofed addresses as a result and the element begins buffer­\ning these packets. When the requests come at a rate exceeding the capacity of \nthe queue, the element generates ICMP Source Quench messages meant to slow \ndown the sending rate. These messages are then sent, supposedly, to the legiti­\nmate sender of the requests. If the sender is legitimate, it will heed the requests \nand slow down the rate of packet transmission. However, in cases of spoofed \naddresses, no action is taken because all sender addresses are bogus. The situation \nin the network can easily deteriorate further if each routing device itself takes part \nin smurfing.\nWe have outlined a small part of a list of several hundred types of known infra­\nstructure vulnerabilities that are often used by hackers to either penetrate systems \nand destroy, alter, or introduce foreign data into the system or disable the system \nthrough port scanning and DDoS. Although for these known vulnerabilities, equip­\nment manufacturers and software producers have done a considerable job of issu­\ning patches as soon as a loophole or a vulnerability is known, quite often, as was \ndemonstrated in the Code Red fiasco, not all network administrators adhere to the \nadvisories issued to them.\nFurthermore, new vulnerabilities are being discovered almost everyday either \nby hackers in an attempt to show their skills by exposing these vulnerabilities or \nby users of new hardware or software such as what happened with the Microsoft \nWindows IIS in the case of the Code Red worm. Also, the fact that most of these \nexploits use known vulnerabilities is indicative of our abilities in patching known \nvulnerabilities even if the solutions are provided.\n3.2.3  Rapid Growth of Cyberspace\nThere is always a security problem in numbers. 
Since its beginning as ARPANET in \nthe early 1960s, the Internet has experienced phenomenal growth, especially in the \nlast 10 years. There was an explosion in the numbers of users, which in turn ignited \nan explosion in the number of connected computers.\nJust less than 20 years ago in 1985, the Internet had fewer than 2000 computers \nconnected and the corresponding number of users was in the mere tens of thousands. \nHowever, by 2001, the figure has jumped to about 109 million hosts, according to \nTony Rutkowski at the Center for Next Generation Internet, an Internet Software \nConsortium. This number represents a significant new benchmark for the number of \nInternet hosts. At a reported current annual growth rate of 51% over the past 2 years, \n" }, { "page_number": 88, "text": "3.2  Sources of Security Threats\b\n69\nthis shows continued strong exponential growth, with an estimated growth of up to \n1 billion hosts if the same growth rate is sustained [2].\nThis is a tremendous growth by all accounts. As it grew, it brought in more and \nmore users with varying ethical standards, added more services, and created more \nresponsibilities. By the turn of the century, many countries found their national criti­\ncal infrastructures firmly intertwined in the global network. An interdependence \nbetween humans and computers and between nations on the global network has \nbeen created that has led to a critical need to protect the massive amount of infor­\nmation stored on these network computers. The ease of use of and access to the \nInternet, and large quantities of personal, business, and military data stored on the \nInternet was slowly turning into a massive security threat not only to individuals \nand business interests but also to national defenses.\n As more and more people enjoyed the potential of the Internet, more and more \npeople with dubious motives were also drawn to the Internet because of its enor­\nmous wealth of everything they were looking for. Such individuals have posed a \npotential risk to the information content of the Internet, and such a security threat \nhas to be dealt with.\nStatistics from the security company Symantec show that Internet attack activity \nis currently growing by about 64% per year. The same statistics show that during \nthe first 6 months of 2002, companies connected to the Internet were attacked, on \naverage, 32 times per week compared to only 25 times per week in the last 6 months \nof 2001. Symantec reports between 400 and 500 new viruses every month and about \n250 vulnerabilities in computer programs [3].\nIn fact, the rate at which the Internet is growing is becoming the greatest security \nthreat ever. Security experts are locked in a deadly race with these malicious hack­\ners that at the moment looks like a losing battle with the security community.\n3.2.4  The Growth of the Hacker Community\nAlthough other factors contributed significantly to the security threat, in the gen­\neral public view, the number one contributor to the security threat of computer \nand telecommunication networks more than anything else is the growth of the \nhacker community. 
Hackers have managed to bring this threat into news headlines \nand people’s living rooms through the ever increasing and sometimes devastating \nattacks on computer and telecommunication systems using viruses, worms, and \nDDoS.\nThe general public, computer users, policy makers, parents, and law makers \nhave watched in bewilderment and awe as the threat to their individual and national \nsecurity has grown to alarming levels as the size of the global networks have grown \nand national critical infrastructures have become more and more integrated into \nthis global network. In some cases, the fear from these attacks reached hysterical \nproportions, as demonstrated in the following major attacks between 1986 and 2003 \nthat we have rightly called the big “bungs.”\n" }, { "page_number": 89, "text": "70\b\n3  Security Threats to Computer Networks\n3.2.4.1  The Big “Bungs” (1988 through 2003)\nThe Internet Worm\nOn November 2, 1988 Robert T. Morris, Jr., a Computer Science graduate student at \nCornell University, using a computer at MIT, released what he thought was a benign \nexperimental, self-replicating, and self-propagating program on the MIT computer \nnetwork. Unfortunately, he did not debug the program well before running it. He \nsoon realized his mistake when the program he thought was benign went out of \ncontrol. The program started replicating itself and at the same time infecting more \ncomputers on the network at a faster rate than he had anticipated. There was a bug \nin his program. The program attacked many machines at MIT and very quickly went \nbeyond the campus to infect other computers around the country. Unable to stop \nhis own program from spreading, he sought a friend’s help. He and his friend tried \nunsuccessfully to send an anonymous message from Harvard over the network, \ninstructing programmers how to kill the program – now a worm and prevent its re-\ninfection of other computers. The worm spread like wildfire to infect some 6,000 \nnetworked computers, a whopping number in proportion to the 1988 size of the \nInternet, clogging government and university systems. In about 12 hours, program­\nmers in affected locations around the country succeeded in stopping the worm from \nspreading further. It was reported that Morris took advantage of a hole in the debug \nmode of the Unix sendmail program. Unix then was a popular operating system that \nwas running thousands of computers on university campuses around the country. \nSendmail runs on Unix to handle e-mail delivery.\nMorris was apprehended a few days later, taken to court, sentenced to 3 years, \nprobation, a $10,000 fine, 400 hours of community service, and dismissed from \nCornell. Morris’s worm came to be known as the Internet worm. The estimated cost \nof the Internet worm varies from $53,000 to as high as $96 million, although the \nexact figure will never be known [4].\nMichelangelo Virus\nThe world first heard of the Michelangelo virus in 1991. The virus affected only PCs \nrunning MS-DOS 2.xx and higher versions. Although it overwhelmingly affected \nPCs running DOS operating systems, it also affected PCs running other operating \nsystems such as UNIX, OS/2, and Novell. It affected computers by infecting floppy \ndisk boot sectors and hard disk master boot records. 
Once in the boot sectors of the \nbootable disk, the virus then installed itself in memory from where it would infect \nthe partition table of any other disk on the computer, whether a floppy or a hard \ndisk.\nFor several years, a rumor was rife, more so many believe, as a scare tactic by \nantivirus software manufactures that the virus is to be triggered on March 6 of every \nyear to commemorate the birth date of the famous Italian painter. But in real terms, \nthe actual impact of the virus was rare. However, because of the widespread publicity \n" }, { "page_number": 90, "text": "3.2  Sources of Security Threats\b\n71\nit received, the Michelangelo virus became one of the most disastrous viruses ever, \nwith damages into millions of dollars.\nPathogen, Queeg, and Smeg Viruses\nBetween 1993 and April 1994, Christopher Pile, a 26-year-old resident of Devon \nin Britain, commonly known as the “Black Baron” in the hacker community, wrote \nthree computer viruses: Pathogen, Queeg, and Smeg all named after expressions \nused in the British Sci-Fi comedy “Red Dwarf.” He used Smeg to camouflage both \nPathogen and Queeg. The camouflage of the two programs prevented most known \nantivirus software from detecting the viruses. Pile wrote the Smeg in such a way \nthat others could also write their own viruses and use Smeg to camouflage them. \nThis meant that the Smeg could be used as a locomotive engine to spread all sorts of \nviruses. Because of this, Pile’s viruses were extremely deadly at that time. Pile used \na variety of ways to distribute his deadly software, usually through bulletin boards \nand freely downloadable Internet software used by thousands in cyberspace.\nPile was arrested on May 26, 1995. He was charged with 11 counts that included \nthe creation and release of these viruses that caused modification and destruction of \ncomputer data and inciting others to create computer viruses. He pleaded guilty to \n10 of the 11 counts and was sentenced to 18 months in prison.\nPile’s case was in fact not the first one as far as creating and distributing com­\nputer viruses was concerned. In October 1992, three Cornell University students \nwere each sentenced to several hundred hours of community service for creating \nand disseminating a computer virus. However, Pile’s case was significant in that it \nwas the first widely covered and published computer crime case that ended in a jail \nsentence [5].\nMelissa Virus\nOn March 26, 1999, the global network of computers was greeted with a new virus \nnamed Melissa. Melissa was created by David Smith, a 29-year-old New Jersey \ncomputer programmer. It was later learned that he named the virus after a Florida \nstripper.\nThe Melissa virus was released from an “alt.sex” newsgroup using the America \nOnLine (AOL) account of Scott Steinmetz, whose username was “skyroket.” How­\never, Steinmetz, the owner of the AOL account who lived in the western U.S. state \nof Washington, denied any knowledge of the virus, let alone knowing anybody else \nusing his account. It looked like Smith hacked his account to disguise his tracks.\nThe virus, which spreads via a combination of Microsoft’s Outlook and Word \nprograms, takes advantage of Word documents to act as surrogates and the users’ \ne-mail address book entries to propagate it. The virus then mailed itself to each \nentry in the address book in either the original Word document named “list.doc” \nor in a future Word document carrying it after the infection. 
It was estimated that \n" }, { "page_number": 91, "text": "72\b\n3  Security Threats to Computer Networks\nMelissa affected more than 100,000 e-mail users and caused $80 million in dam­\nages during its rampage.\nThe Y2K Bug\nFrom 1997 to December 31, 1999, the world was gripped by apprehension over \none of the greatest myths and misnomers the history. This was never a bug; a soft­\nware bug as we know it, but a myth shrouded in the following story. Decades ago, \nbecause of memory storage restrictions and expanse of time, computer designers \nand programmers together made a business decision. They decided to represent the \ndate field by two digits such as “89” and “93” instead of the usual four digits such \nas “1956.” The purpose was noble, but the price was humongous.\nThe bug, therefore is: On New Year’s Eve of 1999, when world clocks were sup­\nposed to change over from 31/12/99 to 01/01/00 at 12:00 midnight, many comput­\ners, especially the older ones, were supposed not to know which year it was since it \nwould be represented by “00.” Many, of course, believed that computers would then \nassume anything from year “0000” to “1900,” and this would be catastrophic.\nBecause the people who knew much were unconvinced about the bug, it was known \nby numerous names to suit the believer. Among the names were: millennium bug, \nY2K computer bug, Y2K, Y2K problem, Y2K crisis, Y2K bug, and many others.\nThe good news is that the year 2000 came and went with very few incidents of \none of the most feared computer bug of our time.\nThe Goodtimes E-mail Virus\nYet another virus hoax, the Goodtimes virus, was humorous but it ended up being a \nchain e-mail annoying every one in its path because of the huge amount of “email \nvirus alerts” it generated. Its humor is embedded in the following prose: Goodtimes \nwill re-write your hard drive. Not only that, but it will also scramble any disks that \nare even close to your computer. It will recalibrate your refrigerator’s coolness set­\nting so all your ice cream melts. It will demagnetize the strips on all your credit \ncards, make a mess of the tracking on your television, and use subspace field har­\nmonics to scratch any CD you try to play.\nIt will give your ex-girlfriend your new phone number. It will mix Kool-aid into \nyour fishtank. It will drink all your beer and leave its socks out on the coffee table \nwhen company is coming over. It will put a dead kitten in the back pocket of your \ngood suit pants and hide your car keys when you are running late for work.\nGoodtimes will make you fall in love with a penguin. It will give you nightmares \nabout circus midgets. It will pour sugar in your gas tank and shave off both your \neyebrows while dating your current girlfriend behind your back and billing the din­\nner and hotel room to your Visa card.\nIt will seduce your grandmother. It does not matter if she is dead. Such is the power \nof Goodtimes; it reaches out beyond the grave to sully those things we hold most dear.\n" }, { "page_number": 92, "text": "3.2  Sources of Security Threats\b\n73\nIt moves your car randomly around parking lots so you can’t find it. It will kick \nyour dog. It will leave libidinous messages on your boss’s voice mail in your voice! \nIt is insidious and subtle. It is dangerous and terrifying to behold. It is also a rather \ninteresting shade of mauve.\nGoodtimes will give you Dutch Elm disease. It will leave the toilet seat up. 
It will \nmake a batch of methamphetamine in your bathtub and then leave bacon cooking on \nthe stove while it goes out to chase gradeschoolers with your new snowblower.\nDistributed Denial-of-Service (DDoS)\nFebruary 7, 2000, a month after the Y2K bug scare and Goodtimes hoax, the world \nwoke up to the real thing. This was not a hoax or a myth. On this day, a 16-year-old \nCanadian hacker nicknamed “Mafiaboy” launched his distributed denial-of-service \n(DDoS) attack. Using the Internet’s infrastructure weaknesses and tools, he unleashed \na barrage of remotely coordinated blitz of GB/s IP packet requests from selected, \nsometimes unsuspecting victim servers which, in a coordinated fashion, bombarded \nand flooded and eventually overcame and knocked out Yahoo servers for a period \nof about 3 hours. Within 2 days, while technicians at Yahoo and law enforcement \nagencies were struggling to identify the source of the attacker, on February 9, 2000, \nMafiaboy struck again, this time bombarding servers at eBay, Amazon, Buy.com, \nZDNet, CNN, E*Trade, and MSN.\nThe DDoS attack employs a network consisting of a master computer respon­\nsible for directing the attacks, the “innocent” computers commonly known as “dae­\nmons” used by the master as intermediaries in the attack, and the victim computer \n– a selected computer to be attacked. Figure 3.2 shows how this works.\nAfter the network has been selected, the hacker instructs the master node to fur­\nther instruct each daemon in its network to send several authentication requests to \nthe selected network nodes, filling up their request buffers. All requests have false \nreturn addresses; so, the victim nodes can’t find the user when they try to send back \nthe authentication approval. As the nodes wait for acknowledgments, sometimes \neven before they close the connections, they are again and again bombarded with \nmore requests. When the rate of requests exceeds the speed at which the victim node \ncan take requests, the nodes are overwhelmed and brought down.\nThe primary objective of a DDoS attack are multifaceted, including flooding a \nnetwork to prevent legitimate network traffic from going through the network, dis­\nrupting network connections to prevent access to services between network nodes, \npreventing a particular individual network node from accessing either all network \nservices or specified network services, and disrupting network services to either a \nspecific part of the network or selected victim machines on the network.\nThe Canadian judge stated that although the act was done by an adolescent, the \nmotivation of the attack was undeniable and had a criminal intent. He, therefore, \nsentenced the Mafiaboy, whose real name was withheld because he was under age, \nto serve 8 months in a youth detention center and 1 year of probation after his \nrelease from the detention center. He was also ordered to donate $250 to charity.\n" }, { "page_number": 93, "text": "74\b\n3  Security Threats to Computer Networks\nLove Bug Virus\nOn April 28, 2000, Onel de Guzman, a dropout from AMA computer college in \nManila, The Philippines, released a computer virus onto the global computer net­\nwork. The virus was first uploaded to the global networks via a popular Internet \nRelay Chat program using Impact, an Internet ISP. It was then uploaded to Sky \nInternet’s servers, another ISP in Manila, and it quickly spread to global networks, \nfirst in Asia and then Europe. 
In Asia, it hit a number of companies hard, including the Dow Jones Newswire and the Asian Wall Street Journal. In Europe, it left thousands of victims that included big companies and parliaments. In Denmark, it hit the TV2 channel and the Danish parliament, and in Britain, the House of Commons fell victim too. Within 12 hours of release, it was on the North American continent, where the U.S. Senate computer system was among the victims [6].
Fig. 3.2  The working of a DDoS attack (the attacker directs a master server, which instructs the daemon computers to flood the victim computer)
It spread via Microsoft Outlook e-mail systems as surrogates. It used a rather sinister approach by tricking the user into opening an e-mail presumably from someone the user knew (because the e-mail usually came from the address book of someone the user knew). The e-mail, as seen in Fig. 3.3, requests the user to check the attached "Love Letter." The attachment file was in fact a Visual Basic script, which contained the virus payload. The virus became harmful when the user opened the attachment. Once the file was opened, the virus copied itself to two critical system directories and then added triggers to the Windows registry to ensure that it ran every time the computer was rebooted. The virus then replicated itself, destroying system files, including Web development files such as ".js" and ".css" and multimedia files such as JPEG and MP3, searched for login names and passwords in the user's address book, and then mailed itself again [6].
Fig. 3.3  The love bug monitor display
de Guzman was tracked down within hours of the release of the virus. Security officials, using Caller ID of the phone number and the ISP used by de Guzman, were led to an apartment in the poor part of Manila where de Guzman lived.
The virus devastated global computer networks, and it was estimated that it caused losses ranging between $7 billion and $20 billion [7].
Palm Virus
In August 2000, the first actual Palm virus was released under the name of the Liberty Trojan horse, the first known malicious program targeting the Palm OS. The Liberty Trojan horse duped some people into downloading a program that erased data.
Another Palm virus shortly followed Palm Liberty. On September 21, 2000, McAfee.com and F-Secure, two of the big antivirus companies, first discovered a really destructive Palm virus they called PalmOS/Phage. When PalmOS/Phage is executed, the screen is filled with a dark gray box, and the application is terminated. The virus then replicates itself to other Palm OS applications.
Wireless device viruses have not been widespread, thanks to the fact that the majority of Palm OS users do not download programs directly from the Web but via their desktop and then sync to their palm. Because of this, they have virus protection available to them at either their ISP's Internet gateway, at the desktop, or at their corporation.
The appearance of a Palm virus in cyberspace raises many concerns about the security of cyberspace because PDAs are difficult to check for viruses as they are not hooked up to a main corporate network.
PDAs are moving as users move, mak­\ning virus tracking and scanning difficult.\nAnna Kournikova virus\nOn February 12, 2001, global computer networks were hit again by a new virus, \nAnna Kournikova, named after the Russian tennis star. The virus was released by \n20-year-old Dutchman Jan de Wit, commonly known in the hacker underworld \ncommunity as “OnTheFly.” The virus, like the I LOVE YOU virus before it, was a \nmass-mailing type. Written in Visual Basic scripting language, the virus spreads by \nmailing itself, disguised as a JPEG file named Anna Kournikov, through Microsoft \nWindows, Outlook, and other e-mail programs on the Internet.\nThe subject line of mail containing the virus bears the following: “Here ya \nhave,;0)”, “Here you are ;-),” or “here you go ;-).” Once opened, Visual Basic script, \ncopies itself to a Windows directory as “AnnaKournikova.jpg.vbs.” It then mails \nitself to all entries in the user’s Microsoft Outlook e-mail address book. Figure 3.4 \nshows the Anna Kournikov monitor screen display.\nSpreading at twice the speed of the notorious “I LOVE YOU” bug, Anna quickly \ncircumvented the globe.\nFig. 3.4  Anna Kournikov \nmonitor display\n" }, { "page_number": 96, "text": "3.2  Sources of Security Threats\b\n77\nSecurity experts believe Anna was of the type commonly referred to as a “virus \ncreation kit,” “a do-it-yourself program kit” that potentially makes everyone able to \ncreate a malicious code.\nCode Red: “For one moment last week, the Internet stood still.”1\nThe code Red worm was first released on July 12, 2001 from Foshan University \nin China and it was detected the next day July 13 by senior security engineer Ken \nEichman. However, when detected, it was not taken seriously until 4 days later \nwhen engineers at eEye Digital cracked the worm code and named it “Code Red” \nafter staying awake with “Code Red”-labeled Mountain Dew [8]. By this time, the \nworm had started to spread, though slowly. Then on July 19, according to Rob \nLemos, it is believed that someone modified the worm, fixing a problem with its \nrandom-number generator. The new worm started to spread like wildfire spreading, \nleaping from 15,000 infections that morning to almost 350,000 infections by 5 p.m. \nPDT [8].\nThe worm was able to infect computers because it used a security hole, discov­\nered the month before, in computers using Microsoft’s Internet Information Server \n(IIS) in the Windows NT4 and Windows 2000 Index Services. The hole, known as \nthe Index Server ISAPI vulnerability, allowed the intruder to take control of a secu­\nrity vulnerability in these systems, resulting in one of several outcomes, including \nWeb site defacement and installation of denial of service tools. The following web \ndefacement: “HELLO! Welcome to http://www.worm.com! Hacked By Chinese!” \nusually resulted. The Web defacement was done by the worm connecting to TCP \nport 80 on a randomly chosen host. If the connection was successful, the attacking \nhost sent a crafted HTTP GET request to the victim, attempting to exploit a buffer \noverflow in the Indexing Service [9].\nBecause Code Red was self-propagating, the victim computer would then send \nthe same exploit (HTTP GET request) to another set of randomly chosen hosts\nAlthough Microsoft issued a patch when the security hole was discovered, not \nmany servers were patched before Code Red hit. Because of the large number of IIS \nserves on the Internet, Code Red found the going easy and at its peak, it hit up to \n300,000 servers. 
But Code Red did not do as much damage as feared; because of its \nown design flaw, the worm was quickly brought under control.\nSQL Worm\nOn Saturday, January 25, 2003, the global communication network was hit by the \nSQL Worm. The worm, which some refer to as the “SQL Slammer,” spreads to \n1 Lemos, Rob. “Code Red: Virulent worm calls into doubt our ability to protect the Net,” CNET \nNews.com, July 27, 2001.\n" }, { "page_number": 97, "text": "78\b\n3  Security Threats to Computer Networks\ncomputers that are running Microsoft SQL Server with a blank SQL administrator \npassword. Once in the system, it copies files to the infected computer and changes \nthe SQL administrator password to a string of four random characters.\nThe vulnerability exploited by the slammer warm pre-existed in the Microsoft \nSQL Server 2000 and in fact was discovered 6 months prior to the attack. When \nthe vulnerability was discovered, Microsoft offered a free patch to fix the problem; \nhowever, the word never got around to all users of the server software.\nThe worm spread rapidly in networks across Asia, Europe, and the United States \nand Canada, shutting down businesses and government systems. However, its \neffects were not very serious because of its own weaknesses that included its inabil­\nity to affect secure servers and its ease of detection.\nHackers View 8 Million Visa/MasterCard, Discover, and American Express \nAccounts\nOn Monday, February 17, 2003, the two major credit card companies Visa and \nMasterCard reported a major infiltration into a third-party payment card proces­\nsor by a hacker who gained access to more than 5 million Visa and MasterCard \naccounts throughout the United States. Card information exposed included card \nnumbers and personal information that included social security numbers, and \ncredit limits.\nThe flood of the hacker victims increased by two on Tuesday, February 18, 2003, \nwhen both Discover Financial Services and American Express reported that they \nwere also victims of the same hacker who breached the security system of a com­\npany that processes transactions on behalf of merchants.\nWhile MasterCard and Visa had earlier reported that around 2.2 million and \n3.4 million of their own cards were respectively affected, Discover and American \nExpress would not disclose how many accounts were involved. It is estimated, how­\never, that the number of affected accounts in the security breach was as high as 8 \nmillion.\n3.2.5  Vulnerability in Operating System Protocol\nOne area that offers the greatest security threat to global computer systems is the \narea of software errors, especially network operating systems errors. An operating \nsystem plays a vital role not only in the smooth running of the computer system in \ncontrolling and providing vital services, but by playing a crucial role in the security \nof the system in providing access to vital system resources. A vulnerable operating \nsystem can allow an attacker to take over a computer system and do anything that \nany authorized super user can do, such as changing files, installing and running \nsoftware, or reformatting the hard drive.\n" }, { "page_number": 98, "text": "3.2  Sources of Security Threats\b\n79\nEvery OS comes with some security vulnerabilities. In fact many security vul­\nnerabilities are OS specific. 
Hacker look for OS-identifying information like file \nextensions for exploits.\n3.2.6  The Invisible Security Threat – The Insider Effect\nQuite often, news media reports show that in cases of violent crimes such as murder, \none is more likely to be attacked by someone one does not know. However, real \nofficial police and court records show otherwise. This is also the case in network \nsecurity. Research data from many reputable agencies consistently show that the \ngreatest threat to security in any enterprise is the guy down the hall.\nIn 1997, the accounting firm Ernst & Young interviewed 4,226 IT managers and \nprofessionals from around the world about the security of their networks. From the \nresponses, 75 percent of the managers indicated that they believed authorized users and \nemployees represent a threat to the security of their systems. Forty-two percent of the \nErnst and Young respondents reported they had experienced external malicious attacks \nin the past year, while 43 percent reported malicious acts from employees [10].\nThe Information Security Breaches Survey 2002, a U.K. government’s Depart­\nment of Trade and Industry sponsored survey conducted by the consultancy firm \nPricewaterhouseCoopers, found that in small companies, 32 percent of the worst \nincidents were caused by insiders, and this number jumps to 48 percent in large \ncompanies [11].\nAlthough slightly smaller, similar numbers were found in the CBI Cybercrime \nSurvey 2001. In that survey, 25 percent of organizations identified employees or \nformer employees as the main cybercrime perpetrators, compared to 75 percent who \ncited hackers, organized crime, and other outsiders.\nOther studies have shown slightly varying percentages of insiders doing the \ndamage to corporate security. As the data indicates, many company executives and \nsecurity managers had for a long time neglected to deal with the guys down the hall \nselling corporate secrets to competitors.\nAccording to Jack Strauss, president and CEO of SafeCorp, a professional infor­\nmation security consultancy in Dayton, Ohio, company insiders intentionally or acci­\ndentally misusing information pose the greatest information security threat to today’s \nInternet-centric businesses. Strauss believes that it is a mistake for company security \nchiefs to neglect to lock the back door to the building, encrypt sensitive data on their \nlaptops, or not to revoke access privileges when employees leave the company [11].\n3.2.7  Social Engineering\nBeside the security threat from the insiders themselves who knowingly and willingly \nare part of the security threat, the insider effect can also involve insiders unknowingly \n" }, { "page_number": 99, "text": "80\b\n3  Security Threats to Computer Networks\nbeing part of the security threat through the power of social engineering. Social engi­\nneering consists of an array of methods an intruder such as a hacker, both from within \nor outside the organization, can use to gain system authorization through masquerad­\ning as an authorized user of the network. Social engineering can be carried out using a \nvariety of methods, including physically impersonating an individual known to have \naccess to the system, online, telephone, and even by writing. 
The infamous hacker \nKevin Mitnick used social engineering extensively to break into some of the nation’s \nmost secure networks with a combination of his incredible solid computer hacking \nand social engineering skills to coax information, such as passwords, out of people.\n3.2.8  Physical Theft\nAs the demand for information by businesses to stay competitive and nations to \nremain strong heats up, laptop computer and PDA theft is on the rise. There is a \nwhole list of incidents involving laptop computer theft such as the reported disap­\npearance of a laptop used to log incidents of covert nuclear proliferation from a \nsixth-floor room in the headquarters of the U.S. State Department in January, 2000. \nIn March of the same year, a British accountant working for the MI5, a British \nnational spy agency, had his laptop computer snatched from between his legs while \nwaiting for a train at London’s Paddington Station. In December 1999, someone \nstole a laptop from the car of Bono, lead singer for the megaband U2; it contained \nmonths of crucial work on song lyrics. And according to the computer-insurance \nfirm Safeware, some 319,000 laptops were stolen in 1999, at a total cost of more \nthan $800 million for the hardware alone [12]. Thousands of company executive \nlaptops and PDA disappear every year with years of company secrets.\n3.3  Security Threat Motives\nAlthough we have seen that security threats can originate from natural disasters \nand unintentional human activities, the bulk of cyberspace threats and then attacks \noriginate from humans caused by illegal or criminal acts from either insiders or out­\nsiders, recreational hackers, and criminals. The FBI’s foreign counterintelligence \nmission has broadly categorized security threats based on terrorism, military espio­\nnage, economic espionage, that targeting the National Information Infrastructure, \nvendetta and revenge, and hate [13].\n3.3.1  Terrorism\nOur increasing dependence on computers and computer communication has opened \nup the can of worms, we now know as electronic terrorism. Electronic terrorism \n" }, { "page_number": 100, "text": "3.3  Security Threat Motives\b\n81\nis used to attack military installations, banking, and many other targets of interest \nbased on politics, religion, and probably hate. Those who are using this new brand \nof terrorism are a new breed of hackers, who no longer hold the view of cracking \nsystems as an intellectual exercise but as a way of gaining from the action. The \n“new” hacker is a cracker who knows and is aware of the value of information that \nhe/she is trying to obtain or compromise. But cyber-terrorism is not only about \nobtaining information; it is also about instilling fear and doubt and compromising \nthe integrity of the data.\nSome of these hackers have a mission, usually foreign power-sponsored or for­\neign power-coordinated that, according to the FBI, may result in violent acts, dan­\ngerous to human life, that are a violation of the criminal laws of the targeted nation \nor organization and are intended to intimidate or coerce people so as to influence \nthe policy.\n3.3.2  Military Espionage\nFor generations, countries have been competing for supremacy of one form or \nanother. During the Cold War, countries competed for military spheres. 
After it \nended, the espionage turf changed from military aim to gaining access to highly \nclassified commercial information that would not only let them know what other \ncountries are doing but also might give them either a military or commercial advan­\ntage without their spending a great deal of money on the effort. It is not surprising, \ntherefore, that the spread of the Internet has given a boost and a new lease on life \nto a dying Cold War profession. Our high dependency on computers in the national \nmilitary and commercial establishments has given espionage a new fertile ground. \nElectronic espionage has many advantages over its old-fashion, trench-coated, sun-\nglassed, and gloved Hitchcock-style cousin. For example, it is less expensive to \nimplement; it can gain access into places that would be inaccessible to human spies, \nit saves embarrassment in case of failed or botched attempts, and it can be carried \nout at a place and time of choice.\n3.3.3  Economic Espionage\nThe end of the Cold War was supposed to bring to an end spirited and inten­\nsive military espionage. However, in the wake of the end of the Cold War, the \nUnited States, as a leading military, economic, and information superpower, found \nitself a constant target of another kind of espionage, economic espionage. In its \npure form, economic espionage targets economic trade secrets which, according \nto the 1996 U.S. Economic Espionage Act, are defined as all forms and types \nof financial, business, scientific, technical, economic, or engineering informa­\ntion and all types of intellectual property including patterns, plans, compilations, \nprogram devices, formulas, designs, prototypes, methods, techniques, processes, \n" }, { "page_number": 101, "text": "82\b\n3  Security Threats to Computer Networks\nprocedures, programs, and/or codes, whether tangible or not, stored or not, and \ncompiled or not [14]. To enforce this Act and prevent computer attacks targeting \nAmerican commercial interests, U.S. Federal Law authorizes law enforcement \nagencies to use wiretaps and other surveillance means to curb computer-supported \ninformation espionage.\n3.3.4  Targeting the National Information Infrastructure\nThe threat may be foreign power-sponsored or foreign power-coordinated, directed \nat a target country, corporation, establishments, or persons. It may target specific \nfacilities, personnel, information, or computer, cable, satellite, or telecommuni­\ncations systems that are associated with the National Information Infrastructure. \nActivities may include the following [15]:\nDenial or disruption of computer, cable, satellite, or telecommunications \n• \nservices;\nUnauthorized monitoring of computer, cable, satellite, or telecommunications \n• \nsystems;\nUnauthorized disclosure of proprietary or classified information stored within \n• \nor communicated through computer, cable, satellite, or telecommunications \nsystems;\nUnauthorized modification or destruction of computer programming codes, \n• \ncomputer network databases, stored information or computer capabilities; or\nManipulation of computer, cable, satellite, or telecommunications services \n• \nresulting in fraud, financial loss, or other federal criminal violations.\n3.3.5  Vendetta/Revenge\nThere are many causes that lead to vendettas. The demonstrations at the last \nWorld Trade Organization (WTO) in Seattle, Washington and subsequent dem­\nonstrations at the meetings in Washington, D.C. 
of both the World Bank and \nthe International Monetary Fund are indicative of the growing discontent of the \nmasses who are unhappy with big business, multi-nationals, big governments, \nand a million others. This discontent is driving a new breed of wild, rebellious, \nyoung people to hit back at systems that they see as not solving world prob­\nlems and benefiting all of mankind. These mass computer attacks are increas­\ningly being used as paybacks for what the attacker or attackers consider to be \ninjustices done that need to be avenged. However, most vendetta attacks are for \nmundane reasons such as a promotion denied, a boyfriend or girlfriend taken, an \nex-spouse given child custody, and other situations that may involve family and \nintimacy issues.\n" }, { "page_number": 102, "text": "3.4  Security Threat Management\b\n83\n3.3.6  Hate (National Origin, Gender, and Race)\nHate as a motive of security threat originates from and is always based on an indi­\nvidual or individuals with a serious dislike of another person or group of persons \nbased on a string of human attributes that may include national origin, gender, race, \nor mundane ones such as the manner of speech one uses. Then incensed, by one or \nall of these attributes, the attackers contemplate and threaten and sometimes carry \nout attacks of vengeance often rooted in ignorance.\n3.3.7  Notoriety\nMany, especially young, hackers try to break into a system to prove their compe­\ntence and sometimes to show off to their friends that they are intelligent or superhu­\nman in order to gain respect among their peers.\n3.3.8  Greed\nMany intruders into company systems do so to gain financially from their acts.\n3.3.9  Ignorance\nThis takes many forms but quite often it happens when a novice in computer secu­\nrity stumbles on an exploit or vulnerability and without knowing or understanding \nit uses it to attack other systems.\n3.4  Security Threat Management\nSecurity threat management is a technique used to monitor an organization’s critical \nsecurity systems in real-time to review reports from the monitoring sensors such as the \nintrusion detection systems, firewall, and other scanning sensors. These reviews help \nto reduce false positives from the sensors, develop quick response techniques for threat \ncontainment and assessment, correlate and escalate false positives across multiple sen­\nsors or platforms, and develop intuitive analytical, forensic, and management reports\nAs the workplace gets more electronic and critical company information finds its \nway out of the manila envelopes and brown folders into online electronic databases, \nsecurity management has become a full-time job for system administrators. While \nthe number of dubious users is on the rise, the number of reported criminal incidents \nis skyrocketing, and the reported response time between a threat and a real attack is \n" }, { "page_number": 103, "text": "84\b\n3  Security Threats to Computer Networks\ndown to 20 minutes or less [15]. To secure company resources, security managers \nhave to do real-time management. Real-time management requires access to real-\ntime data from all network sensors.\nAmong the techniques used for security threat management are risk assessment \nand forensic analysis.\n3.4.1  Risk Assessment\nEven if there are several security threats all targeting the same resource, each threat \nwill cause a different risk and each will need a different risk assessment. 
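A simple way to make such an assessment concrete is to rate each threat's likelihood and impact and rank the products of the two. The sketch below is only an illustration, written in Python with invented threat names and invented ratings; a real assessment would draw both numbers from the organization's own sensor data and asset inventory.

# Illustrative only: the threats and scores are invented, not taken from any real assessment.
threats = {
    "blank SQL administrator password": {"likelihood": 4, "impact": 5},
    "unpatched web server":             {"likelihood": 3, "impact": 4},
    "stolen laptop":                    {"likelihood": 2, "impact": 3},
}

def risk_score(t):
    # A common convention: risk = likelihood x impact, each rated 1 (low) to 5 (high).
    return t["likelihood"] * t["impact"]

# Rank the threats so the response team knows which one to deal with first.
for name, t in sorted(threats.items(), key=lambda kv: risk_score(kv[1]), reverse=True):
    print(f"{risk_score(t):2d}  {name}")

The ranking, rather than the absolute numbers, is what guides the response team.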
Some will \nhave low risk while others will have the opposite. It is important for the response team \nto study the risks as sensor data come in and decide which threat to deal with first.\n3.4.2  Forensic Analysis\nForensic analysis is done after a threat has been identified and contained. After con­\ntainment, the response team can launch the forensic analysis tools to interact with \nthe dynamic report displays that have come from the sensors during the duration \nof the threat or attack if the threat results in an attack. The data on which forensic \nanalysis should be performed must be kept in a secure state to preserve the evi­\ndence. It must be stored and transferred, if this is needed, with the greatest care, and \nthe analysis must be done with the utmost professionalism possible if the results of \nthe forensic analysis are to stand in court.\n3.5  Security Threat Correlation\nAs we have noted in the previous section, the interval time between the first occur­\nrence of the threat and the start of the real attack has now been reduced about 20 \nminutes. This is putting enormous pressure on organizations’ security teams to cor­\nrespondingly reduce the turnaround time, the time between the start of an incident \nand the receipt of the first reports of the incident from the sensors. The shorter the \nturnaround time, the quicker the response to an incident in progress. In fact, if the \nincident is caught at an early start, an organization can be saved a great deal of \ndamage.\nThreat correlation, therefore, is the technique designed to reduce the turnaround time \nby monitoring all network sensor data and then use that data to quickly analyze and dis­\ncriminate between real threats and false positives. In fact, threat correlation helps in\nReducing false positives because if we get the sensor data early enough, analyze \n• \nit, and detect false positives, we can quickly re-tune the sensors so that future \nfalse positives are reduced.\n" }, { "page_number": 104, "text": "3.6  Security Threat Awareness\b\n85\nReducing false negatives; similarly by getting early sensor reports, we can \n• \nanalyze it, study where false negatives are coming from, and re-tune the sensors \nto reveal more details.\nVerifying sensor performance and availability; by getting early reports we can \n• \nquickly check on all sensors to make sure that they are performing as needed.\n3.5.1  Threat Information Quality\nThe quality of data coming from the sensor logs depends on several factors including\nCollection – When data is collected, it must be analyzed. The collection \n• \ntechniques specify where the data is to be analyzed. To reduce on bandwidth and \ndata compression problems, before data is transported to a central location for \nanalysis, some analysis is usually done at the sensor and then reports are brought \nto the central location. 
But this kind of distributed computation may not work \nwell in all cases.\nConsolidation – Given that the goal of correlation is to pull data out of the \n• \nsensors, analyze it, correlate it, and deliver timely and accurate reports to the \nresponse teams, and also given the amount of data generated by the sensors, and \nfurther the limitation to bandwidth, it is important to find good techniques to \nfilter out relevant data and consolidate sensor data either through compression or \naggregation so that analysis is done on only real and active threats.\nCorrelation – Again given the goals of correlation, if the chosen technique of \n• \ndata collection is to use a central database, then a good data mining scheme \nmust be used for appropriate queries on the database that will result in outputs \nthat will realize the goals of correlation. However, many data mining techniques \nhave problems.\n3.6  Security Threat Awareness\nSecurity threat awareness is meant to bring widespread and massive attention of \nthe population to the security threat. Once people come to know of the threat, \nit is hoped that they will become more careful, more alert, and more responsi­\nble in what they do. They will also be more likely to follow security guidelines. \nA good example of how massive awareness can be planned and brought about is \nthe efforts of the new U.S. Department of Homeland Security. The department was \nformed after the September 11, 2001 attack on the United States to bring maximum \nnational awareness to the security problems facing not only the country but also \nevery individual. The idea is to make everyone proactive to security. Figure 3.5 \nshows some of the efforts of the Department of Homeland Security for massive \nsecurity awareness.\n" }, { "page_number": 105, "text": "86\b\n3  Security Threats to Computer Networks\nExercises\n\t 1.\t Although we discussed several sources of security threats, we did not exhaust \nall. There are many such sources. Name and discuss five.\n\t 2.\t We pointed out that the design philosophy of the Internet infrastructure was \npartly to blame for the weaknesses and hence a source of security threats. Do you \nthink a different philosophy would have been better? Comment on your answer.\n\t 3.\t Give a detailed account of why the three-way handshake is a security threat.\nDuring a national emergency, one of the most valuable assets an individual and organiza-\ntion can posses is timely, accurate and useful information. Increase your level of situational \nawareness with these free power resources.\nHomeland Security Weekly electronic magazine (free) is a \npowerful fusion of important news items, terror alert details, \nlegislative policy and hard hitting, useful resources. Unique in all \nrespects\nThe Terror Alert Mailing List (free) assists in creating an in-\ncreased level of \"situational awareness\" through immediate, \nconcise email notification of terror threats against the United \nStates and it's citizens abroad, as well as changes in overall \nthreat posture as defined by the federal Homeland Security Ad-\nvisory System.\nThreat Condition Banners (free) are auto - updating banners \nand buttons which can be added to an internet website. If there \nis a change in the overall national threat posture as defined by \nthe federal Homeland Security Advisory System, these banners \nwill automatically update to reflect the new condition.\nFig. 
3.5  Department of homeland security efforts for massive security awareness [16]\n" }, { "page_number": 106, "text": "Advanced Exercises\b\n87\n\t 4.\t In the chapter, we gave two examples of how a port scan can be a threat to secu­\nrity. Give three more examples of port scans that can lead to system security \ncompromise.\n\t 5.\t Comment on the rapid growth of the Internet as a contributing factor to the \nsecurity threat of cyberspace. What is the responsible factor in this growth? Is \nit people or the number of computers?\n\t 6.\t There seems to have been an increase in the number of reported virus and worm \nattacks on computer networks. Is this really a sign of an increase, more report­\ning, or more security awareness on the part of the individual? Comment on each \nof these factors.\n\t 7.\t Social engineering has been frequently cited as a source of network security \nthreat. Discuss the different elements within social engineering that contribute \nto this assertion.\n\t 8.\t In the chapter, we gave just a few of the many motives for security threat. Dis­\ncuss five more, giving details of why there are motives.\n\t 9.\t Outline and discuss the factors that influence threat information quality.\n10.\t Discuss the role of data mining techniques in the quality of threat information.\nAdvanced Exercises\n\t 1.\t Research the effects of industrial espionage and write a detailed account of a \nprofile of a person who sells and buys industrial secrets. What type of industrial \nsecrets is likely to be traded?\n\t 2.\t The main reasons behind the development of the National Strategy to Secure \nCyberspace were the realization that we are increasingly dependent on the com­\nputer networks, the major components of the national critical infrastructure are \ndependent on computer networks, and our enemies have the capabilities to dis­\nrupt and affect any of the infrastructure components at will. Study the National \nInformation Infrastructure, the weaknesses inherent in the system, and suggest \nways to harden it.\n\t 3.\t Study and suggest the best ways to defend the national critical infrastructure \nfrom potential attackers.\n\t 4.\t We indicated in the text that the best ways to manage security threats is to \ndo an extensive risk assessment and more forensic analysis. Discuss how \nreducing the turnaround time can assist you in both risk assessment and \nforensic analysis. What are the inputs into the forensic analysis model? \nWhat forensic tools are you likely to use? How do you suggest to deal with \nthe evidence?\n\t 5.\t Do research on intrusion detection and firewall sensor false positives and false \nnegatives. Write an executive report on the best ways to deal with both of these \nunwanted reports.\n" }, { "page_number": 107, "text": "88\b\n3  Security Threats to Computer Networks\nReferences\n\t 1.\t G-Lock Software. “TCP and UDP port scanning examples.” http://www.glocksoft.com/\ntcpudpscan.htm\n\t 2.\t Rutkowski, Tony. “Internet Survey reaches 109 million host level”. Center for Next Genera­\ntion Internet. http://www.ngi.org/trends/TrendsPR0102.txt\n\t 3.\t “Battling the Net Security Threat,” Saturday, 9 November, 2002, 08:15 GMT, http://news.bbc.\nco.uk/2/hi/technology/2386113.stm.\n\t 4.\t “Derived in part from a letter by Severo M. Ornstein”, in the Communications of the ACM, \nJune 1989, 32(6).\n\t 5.\t “Virus Writer Christopher Pile (Black Barron) Sent to Jail for 18 Months Wednesday 15 \nNovember 1995”. 
http://www.gps.jussieu.fr/comp/VirusWriter.html\n\t 6.\t Hopper, Ian. “Destructive ‘I LOVE YOU’ Computer virus strikes worldwide”. CNN Interac­\ntive Technology. http://www.cnn.com/2000/TECH/computing/05/04/iloveyou/.\n\t 7.\t “Former student: Bug may have been spread accidentally”. CNN Interactive. http:/www.cnn.\ncom/2000/ASIANOWsoutheast/05/11/iloveyou.02/\n\t 8.\t “National Security Threat List”. http://rf-web.tamu.edu/security/SECGUIDE/T1threat/Nstl.\nhtm\n\t 9.\t “CERT® Advisory CA-2001–19 ‘Code Red’ Worm Exploiting Buffer Overflow In IIS Indexing \nService DLL.” http://www.cert.org/advisories/CA-2001–19.html\n\t10.\t “Is IT Safe?” InfoTrac. Tennessee Electronic Library. HP Professional, December 1997, \n1(12), 14–20.\n\t11.\t “Insider Abuse of Information is Biggest Security Threat, SafeCop Says”. InfoTrac. Tennessee \nElectronic Library. Business Wire. November 10, 2000, p. 1.\n\t12.\t Hollows, Phil. “Security Threat Correlation: The Next Battlefield”. eSecurityPlanetcom. \nhttp://www.esecurityplanet.com/views/article.php/10752_1501001.\n\t13.\t “Awareness of National Security Issues and Response [ANSIR]”. FBI’s Intelligence Resource \nProgram. http://www.fas.org/irp/ops/ci/ansir.htm\n\t14.\t Andrew Grosso. “The Economic Espionage ACT: Touring the Minefields”. Communications \nof the ACM, August 2000, 43(8), 15–18.\n\t15.\t “ThreatManager ™ – The Real-Time Security Threat Management Suite”. http://www.open.\ncom/responsenetworks/products/threatmanager/threatmanager.htm?ISR1\n\t16.\t “Department of Homeland Security”. http://www.dohs.gov/\n" }, { "page_number": 108, "text": "J.M. Kizza, A Guide to Computer Network Security, Computer Communications and \n­Networks, DOI 10.1007/978-1-84800-917-2_4, © Springer-Verlag London Limited 2009\n\b\n89\nChapter 4\nComputer Network Vulnerabilities\n4.1  Definition\nSystem vulnerabilities are weaknesses in the software or hardware on a server or a \nclient that can be exploited by a determined intruder to gain access to or shut down \na network. Donald Pipkin defines system vulnerability as a condition, a weakness \nof or an absence of security procedure, or technical, physical, or other controls that \ncould be exploited by a threat [1].\nVulnerabilities exist do not only in hardware and software that constitute a com­\nputer system but also in policies and procedures, especially security policies and \nprocedures, that are used in a computer network system and in users and employees \nof the computer network systems. Since vulnerabilities can be found in so many \nareas in a network system, one can say that a security vulnerability is indeed any­\nthing in a computer network that has the potential to cause or be exploited for an \nadvantage. Now that we know what vulnerabilities are, let us look at their possible \nsources.\n4.2  Sources of Vulnerabilities\nThe frequency of attacks in the last several years, and the speed and spread \nof these attacks, indicate serious security vulnerability problems in our network \nsystems. There is no definitive list of all possible sources of these system vulner­\nabilities. Many scholars and indeed many security incident reporting agencies such \nas Bugtraq: the mailing list for vulnerabilities, CERT/CC: the U.S.A. Computer \nEmergency Response Team, NTBugtraq: the mailing list for Windows security, \nRUS-CERT: the Germany Computer Emergency Response Team, and U.S.DOE-\nCIAC: the U.S. 
Department of Energy Computer Incident Adversary Capability, \nhave called attention to not only one but multiple factors that contribute to these \nsecurity problems and pose obstacles to the security solutions. Among the most \nfrequently mentioned sources of security vulnerability problems in computer net­\nworks are design flaws, poor security management, incorrect implementation, \nInternet technology vulnerability, the nature of intruder activity, the difficulty of \n" }, { "page_number": 109, "text": "90\b\n4  Computer Network Vulnerabilities\nfixing vulnerable systems, the limits of effectiveness of reactive solutions, and \nsocial engineering [2].\n4.2.1  Design Flaws\nThe two major components of a computer system, hardware and software, quite \noften have design flaws. Hardware systems are less susceptible to design flaws than \ntheir software counterparts owing to less complexity, which makes them easier to \ntest; limited number of possible inputs and expected outcomes, again making it easy \nto test and verify; and the long history of hardware engineering. But even with all \nthese factors backing up hardware engineering, because of complexity in the new \ncomputer systems, design flaws are still common.\nBut the biggest problems in system security vulnerability are due to software \ndesign flaws. A number of factors cause software design flaws, including over­\nlooking security issues all together. However, three major factors contribute a great \ndeal to software design flaws: human factors, software complexity, and trustworthy \nsoftware sources [3].\n4.2.1.1  Human Factors\nIn the human factor category, poor software performance can be a result of the fol­\nlowing:\n1.\t Memory lapses and attentional failures: For example, someone was supposed to \nhave removed or added a line of code, tested, or verified, but did not because of \nsimple forgetfulness.\n2.\t Rush to finish: The result of pressure, most often from management, to get the \nproduct on the market either to cut development costs or to meet a client deadline \ncan cause problems.\n3.\t Overconfidence and use of nonstandard or untested algorithms: Before algo­\nrithms are fully tested by peers, they are put into the product line because they \nseem to have worked on a few test runs.\n4.\t Malice: Software developers, like any other professionals, have malicious people \nin their ranks. Bugs, viruses, and worms have been known to be embedded and \ndownloaded in software, as is the case with Trojan horse software, which boots \nitself at a timed location. As we will see in Section 8.4, malice has traditionally \nbeen used for vendetta, personal gain (especially monetary), and just irresponsi­\nble amusement. Although it is possible to safeguard against other types of human \nerrors, it is very difficult to prevent malice.\n5.\t Complacency: When either an individual or a software producer has significant \nexperience in software development, it is easy to overlook certain testing and \nother error control measures in those parts of software that were tested previ­\nously in a similar or related product, forgetting that no one software product can \nconform to all requirements in all environments.\n" }, { "page_number": 110, "text": "4.2.1.2  Software Complexity\nBoth software professionals and nonprofessionals who use software know the dif­\nferences between software programming and hardware engineering. 
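One of those differences is the sheer size of a program's input space, which a back-of-the-envelope calculation makes plain; the numbers below are simple arithmetic, not measurements of any particular system.

# How long would it take to test every input of a function taking two 32-bit integers?
inputs = 2 ** 64                      # every possible pair of 32-bit values
tests_per_second = 10 ** 9            # an optimistic one billion tests per second
seconds_per_year = 60 * 60 * 24 * 365

years = inputs / (tests_per_second * seconds_per_year)
print(f"{inputs:.3e} cases, about {years:,.0f} years at one billion tests per second")

Exhaustive testing is hopeless even for this tiny interface, which is part of why the complexity and testing problems listed below are inherent to software.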
In these dif­\nferences underlie many of the causes of software failure and poor performance.\nConsider the following:\n1.\t Complexity: Unlike hardwired programming in which it is easy to exhaust the \npossible outcomes on a given set of input sequences, in software programming \na similar program may present billions of possible outcomes on the same input \nsequence. Therefore, in software programming, one can never be sure of all the \npossibilities on any given input sequence.\n2.\t Difficult testing: There will never be a complete set of test programs to check \nsoftware exhaustively for all bugs for a given input sequence.\n3.\t Ease of programming: The fact that software programming is easy to learn \nencourages many people with little formal training and education in the field \nto start developing programs, but many are not knowledgeable about good pro­\ngramming practices or able to check for errors.\n4.\t Misunderstanding of basic design specifications: This affects the subsequent \ndesign phases including coding, documenting, and testing. It also results in \nimproper and ambiguous specifications of major components of the software \nand in ill-chosen and poorly defined internal program structures.\n4.2.1.3  Trustworthy Software Sources\nThere are thousands of software sources for the millions of software products on \nthe market today. However, if we were required to name well known software pro­\nducers, very few of us would succeed in naming more than a handful. Yet we buy \nsoftware products every day without even ever minding their sources. Most impor­\ntantly, we do not care about the quality of that software, the honesty of the anony­\nmous programmer, and of course its reliability as long as it does what we want it \nto do.\nEven if we want to trace the authorship of the software product, it is impossible \nbecause software companies are closed within months of their opening. Chances are \nwhen a software product is 2 years old, its producer is likely to be out of business. \nIn addition to the difficulties in tracing the producers of software who go out of \nbusiness as fast as they come in, there is also fear that such software may not even \nhave been tested at all.\nThe growth of the Internet and the escalating costs of software production have \nled many small in-house software developers to use the marketplace as a giant test­\ning laboratory through the use of beta testing, shareware, and freeware. Shareware \nand freeware have a high potential of bringing hostile code into trusted systems.\nFor some strange reason, the more popular the software product gets, the less it \nis tested. As software products make market inroads, their producers start thinking \nof producing new versions and releases with little to no testing of current versions. \n4.2  Sources of Vulnerabilities\b\n91\n" }, { "page_number": 111, "text": "92\b\n4  Computer Network Vulnerabilities\nThis leads to the growth of what is called a common genesis software product, \nwhere all its versions and releases are based on a common code. If such a code has \nnot been fully tested, which is normally the case, then errors are carried through \nfrom version to version and from release to release.\nIn the last several years, we have witnessed the growth of the Open Source move­\nment. It has been praised as a novel idea to break the monopoly and price gauging \nby big software producers and most important as a timely solution to poor software \ntesting. 
Those opposed to the movement have criticized it for being a source of untrusted and often untested software. Despite the wails of the critics, major open-source products such as the Linux operating system have turned out to have few security flaws; still, there are fears that hackers can study the code and perhaps find a way to cause mischief or steal information.

There has recently been a rise in Trojan horses inserted into open-source code. In fact, security experts are now recommending running readily available integrity-checking programs, such as MD5 hashes, to ensure that code has not been altered. Tools based on MD5 and similar algorithms such as MD4, SHA, and SHA-1 continually compare digests generated from known "healthy" copies of the software with digests of the programs in the field, thus exposing the Trojans. According to a recent CERT advisory, crackers are increasingly inserting Trojans into the source code for tcpdump, a utility that monitors network traffic, and libpcap, a packet capture library tool [4].

However, according to a recent study by the Aberdeen Group, open-source software now accounts for more than half of all security advisories published in the past year by the Computer Emergency Response Team (CERT). Also, according to industry study reports, open-source software commonly used in Linux, Unix, and network routing equipment accounted for 16 of the 29 security advisories during the first 10 months of 2002, and there is an upswing in new virus and Trojan horse warnings for Unix, Linux, Mac OS X, and open-source software [4].

4.2.1.4  Software Re-use, Re-engineering, and Outlived Design

New developments in software engineering are spearheading approaches such as software re-use and software re-engineering. Software re-use is the integration and use of software assets from a previously developed system. It is the process in which old or updated software, such as libraries, components, requirements and design documents, and design patterns, is used along with new software.

Both software re-engineering and re-use are hailed for cutting down on escalating development and testing costs. They have brought efficiency by reducing time spent designing or coding, popularized standardization, and led to a common "look-and-feel" between applications. They have made debugging easier through the use of thoroughly tested designs and code.

However, both techniques have the potential to introduce security flaws into systems. Among the security flaws they can introduce, the first is a mismatch: re-used requirements specifications and designs may not completely match the real situation at hand, and the nonfunctional characteristics of the re-used code may not match those of the intended recipient. Second, when using object programming, it is important to remember that objects are defined with certain attributes, and any new application using objects defined in terms of the old ones will inherit all their attributes.

As we will see later in this book, script programming brings its own set of security problems. Yet there is now momentum in script programming to bring more dynamism into Web programming.
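Before taking up scripting, it is worth seeing how little code the integrity checking recommended above actually requires. The following sketch uses Python's standard hashlib; the file name and the expected digest are placeholders rather than real values, and although the discussion above names MD5 and SHA-1, the same few lines work with SHA-256 or any other algorithm hashlib supports.

import hashlib

def file_digest(path, algorithm="sha256"):
    """Compute the digest of a downloaded file in manageable chunks."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholders: substitute the real file and the digest published by the project.
expected = "0000000000000000000000000000000000000000000000000000000000000000"
actual = file_digest("tcpdump-x.y.z.tar.gz")
print("digest matches" if actual == expected else "DIGEST MISMATCH - do not install")

A mismatch does not say what was altered, only that the copy in hand is not the one the producer published, which is reason enough not to build or install it.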
Scripting suffers from a list of problems including \ninadequate searching and/or browsing mechanisms before any interaction between \nthe script code and the server or client software, side effects from software assets \nthat are too large or too small for the projected interface, and undocumented inter­\nfaces.\n4.2.2  Poor Security Management\nSecurity management is both a technical and an administrative security process that \ninvolves security policies and controls that the organization decides to put in place \nto provide the required level of protection. In addition, it also involves security \nmonitoring and evaluation of the effectiveness of those policies. The most effec­\ntive way to meet these goals is to implement security risk assessment through a \nsecurity policy and secure access to network resources through the use of firewalls \nand strong cryptography. These and others offer the security required for the dif­\nferent information systems in the organization in terms of integrity, confidentiality, \nand availability of that information. Security management by itself is a complex \nprocess; however, if it is not well organized, it can result in a security nightmare for \nthe organization.\nPoor security management is a result of little control over security implementa­\ntion, administration, and monitoring. It is a failure in having solid control of the \nsecurity situation of the organization when the security administrator does not know \nwho is setting the organization’s security policy, administering security compliance, \nand who manages system security configurations and is in charge of security event \nand incident handling.\nIn addition to the disarray in the security administration, implementation, and \nmonitoring, a poor security administration team may even lack a plan for the wire­\nless component of the network. As we will see in Chapter 17, the rapid growth of \nwireless communication has brought with it serious security problems. There are \nso many things that can go wrong with security if security administration is poor. \nUnless the organization has a solid security administration team with a sound secu­\nrity policy and secure security implementation, the organization’s security may be \ncompromised. An organization’s system security is as good as its security policy \nand its access control policies and procedures and their implementation.\nGood security management is made up of a number of implementable secu­\nrity components that include risk management, information security policies and \nprocedures, standards, guidelines, information classification, security monitoring, \n4.2  Sources of Vulnerabilities\b\n93\n" }, { "page_number": 113, "text": "94\b\n4  Computer Network Vulnerabilities\nand security education. These core components serve to protect the organization’s \nresources.\nA risk analysis will identify these assets, discover the threats that put them at risk, \n• \nand estimate the possible damage and potential loss a company could endure if \nany of these threats become real. The results of the risk analysis help management \nconstruct a budget with the necessary funds to protect the recognized assets from \ntheir identified threats and develop applicable security policies that provide \ndirection for security activities. 
Security education takes this information to each and every employee.
•  Security policies and procedures to create, implement, and enforce security issues that may include people and technology.
•  Standards and guidelines to find ways, including automated solutions, for creating, updating, and tracking compliance with security policies across the organization.
•  Information classification to manage the search, identification, and reduction of system vulnerabilities by establishing security configurations.
•  Security monitoring to prevent and detect intrusions, consolidate event logs for future log and trend analysis, manage security events in real time, manage perimeter security including multiple firewall reporting systems, and analyze security events enterprise-wide.
•  Security education to bring security awareness to every employee of the organization and teach them their individual security responsibility.

4.2.3  Incorrect Implementation

Incorrect implementation is very often a result of incompatible interfaces. Two product modules can be deployed and work together only if they are compatible. That means that each module must be additive, that is, the environment of the interface needs to remain intact. An incompatible interface, on the other hand, means that the introduction of a module has changed the existing interface in such a way that existing references to the interface can fail or behave incorrectly.

This definition means that the things we do to the many system interfaces can create incompatibilities that in turn result in bad or incomplete implementation. For example, the ordinary addition of a software module, or even the addition or removal of an argument to an existing software module, may cause an imbalanced interface. This interface sensitivity tells us that, because of interposition, the addition of something as simple as a new symbol or an additional condition can result in an incompatible interface, causing the new symbol or condition to conflict with applications that previously ran without problems.

To put the interface concept into a wider system framework, consider a system-wide integration of both hardware and software components built on differing technologies and with no common standards. No information system products, whether hardware or software, are based on a standard that the industry has to follow. Because of this, manufacturers and consumers must contend with constant problems of system compatibility. Because of the vast number of variables in information systems, especially network systems, involving both hardware and software, it is not possible to test or verify all combinations of hardware and software. Consider, for example, that there are no standards in the software industry. Software systems involve different models based on platform and manufacturer. Products are heterogeneous both semantically and syntactically.

When two or more software modules are to interface with one another, in the sense that one may feed into the other or one may use the outputs of the other, incompatibility conditions may result from such an interaction. Unless there are methodologies and algorithms for checking interface compatibility, errors are transmitted from one module into another. For example, consider a typical interface created by a method call between software modules, such as the one sketched below.
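The fragment below is a deliberately small, hypothetical illustration in Python; the function names and the millisecond/second convention are invented for the example. Version 1 of a re-used module reports a timeout in milliseconds; version 2 quietly changes the unit, and a caller written against the old interface keeps working, just wrongly.

# A hypothetical illustration of an interface mismatch between two modules.
def get_timeout_v1():
    """Version 1: return the connection timeout in milliseconds."""
    return 30_000

def get_timeout_v2():
    """Version 2: return the connection timeout in seconds (the unit changed)."""
    return 30

def configure_session(get_timeout):
    # Written against version 1: the caller still assumes the value is in milliseconds.
    timeout_ms = get_timeout()
    return f"session timeout set to {timeout_ms / 1000:.0f} s"

print(configure_session(get_timeout_v1))   # session timeout set to 30 s
print(configure_session(get_timeout_v2))   # session timeout set to 0 s  (silently wrong)

Nothing fails loudly: the interface still accepts the call, and the damage shows up only in behavior, which is exactly the kind of silent incompatibility described next.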
Such an interface makes assumptions about its environment, for example, availability constraints that restrict the accessibility of local methods to certain states of the module. If such availability constraints are not checked before the modules are allowed to pass parameters via method calls, errors may result.

Incompatibility in system interfaces may be caused by a variety of conditions, usually created by things such as
•  Too much detail
•  Not enough understanding of the underlying parameters
•  Poor communication during design
•  Selecting the software or hardware modules before understanding the receiving software
•  Ignoring integration issues
•  Errors in manual entry.

Many security problems result from the incorrect implementation of both hardware and software. In fact, system reliability in both software and hardware is based on correct implementation, as is the security of the system.

4.2.4  Internet Technology Vulnerability

In Section 4.2.1, we discussed design flaws in technology systems as one of the leading causes of system vulnerabilities. In fact, we pointed out that systems are composed of software, hardware, and humanware. There are problems in each one of these components. Since the humanware component is influenced by the technology in the software and hardware, we will not discuss it any further.

The fact that computer and telecommunication technologies have developed at such an amazing and frightening speed, and that people have overwhelmingly embraced both of them, has caused security experts to worry about the side effects of these booming technologies. There were reasons to worry. Internet technology has been and continues to be vulnerable. There have been reports of all sorts of loopholes, weaknesses, and gaping holes in both software and hardware technologies.

According to Table 4.1, the number of reported system vulnerabilities rose from 171 in 1995 to 4,129 in 2002, a 24-fold growth, and this is only what is reported. There is agreement among security experts that what is reported is the tip of the iceberg. Many vulnerabilities are discovered and, for various reasons, are not reported.

Because these technologies are used by many who are not security experts (in fact, the majority of users are not security literate), one can say that many vulnerabilities are observed and probably not reported because those who observe them do not have the knowledge to classify what has been observed as a vulnerability. Even if they do, they may not know how and where to report it.

No one knows how many of these vulnerabilities there are in both software and hardware. The assumption is that there are thousands. As history has shown us, a few are discovered every day by hackers. Although the list spans both hardware and software, the problem is more prevalent with software. In fact, software vulnerabilities can be put into four categories:
•  Operating system vulnerabilities: Operating systems are the main sources of all reported system vulnerabilities. 
Going by the SANS (SysAdmin, Audit, \nNetwork, Security) Institute, a cooperative research and education organization \nserving security professionals, auditors, system administrators, and network \nadministrators, together with the FBI’s National Infrastructure Protection Center \n(NIPC), of the annual top 10 and top 20 vulnerabilities, popular operating \nsystems cause the most vulnerabilities. This is always so because hackers tend to \ntake the easiest route by exploiting the best-known flaws with the most effective \nand widely known and available attack tools. Based on SANS/FBI Top Twenty \nreports in the last 3 years, the operating systems with most reported attacks are \nUNIX, LINUX, WINDWS, OS/2, and MacOS.\nPort-based vulnerabilities: Besides operating systems, network service ports \n• \ntake second place is sourcing system vulnerabilities. For system administrators, \nknowing the list of most vulnerable ports can go a long way to help enhance \nsystem security by blocking those known ports at the firewall. Such an operation, \nthough not comprehensive, adds an extra layer of security to the network. In \nfact it is advisable that in addition to blocking and deny-everything filtering, \nsecurity administrators should also monitor all ports including the blocked \nones for intruders who entered the system by some other means. For the most \ncommon vulnerable port numbers, the reader is referred to the latest SANS/FBI \nTop Twenty list at: http://www.sans.org/\nApplication software-based errors\n• \nSystem protocol software such as client and server browser.\n• \nTo help in the hunt for and fight against system vulnerabilities, SANS, in cooper­\nation with the FBI’s National Infrastructure Protection Center (NIPC) and a number \nof individuals and institutions around the world, has been issuing two lists annu­\nally: the top 10 and top 20 vulnerabilities and a list of vulnerable ports. The first \nof those reported only the top 10 system vulnerabilities. In subsequent years, the \nlist was extended to cover the top 20 vulnerabilities. Table 4.2, drawn from the top \n" }, { "page_number": 116, "text": "Table 4.1  Vulnerabilities reported to CERT between 1995 and 2003\nYear\n1988\n1989\n1990\n1991\n1992\n1993\n1994\n1995\n1996\n1997\n1998\n1999\n2000\n2001\n2002\n2003\nIncident\n6\n132\n252\n406\n773\n1,334\n2,340\n2,412\n2,573\n2,134\n3,734\n9,859\n21,756\n52,658\n82,094\n42,586\nCERT/CC Statistics 1988–2003(Q1-Q3), http://www.cert.org/stats/\n4.2  Sources of Vulnerabilities\b\n97\n" }, { "page_number": 117, "text": "98\b\n4  Computer Network Vulnerabilities\n20 vulnerabilities of the last 4 years, shows the most common and most persistent \nvulnerabilities in the last 4 years. In drawing up this table, we wanted to highlight \nthe fact that most times hackers do not discover new vulnerabilities but always look \nfor the most common vulnerabilities with the most easily available tools and go for \nthose. This, of course, says a lot about system administrators because these vulner­\nabilities are very well known with available patches. Yet they are persistently in the \ntop 20 most common vulnerabilities 4 years in a row. Following is the Department \nof Homeland Security and SANS/FBI report of last year’s top vulnerabilities. 
U/L \nin the table stands for UNIX/LINUX operating system vulnerability, and W denotes \nWindows.\nIn addition to highlighting the need for system administrators to patch the most \ncommon vulnerabilities, we hope this will also help many organizations that lack \nthe resources to train security personnel to have a choice of either focusing on the \nmost current or the most persistent vulnerability. One would wonder why a vulner­\nability would remain among the most common year after year, while there are advi­\nsories on it and patches for it. The answer is not very far fetched, but simple: system \nadministrators do not correct many of these flaws because they simply do not know \nwhich vulnerabilities are most dangerous; they are too busy to correct them all or \nthey do not know how to correct them safely.\nAlthough these vulnerabilities are cited, many of them year after year, as the \nmost common vulnerabilities, there are traditionally thousands of vulnerabilities \nthat hackers often use to attack systems. Because they are so numerous and new \nones being discovered every day, many system administrators may be overwhelmed, \nwhich may lead to loss of focus on the need to ensure that all systems are protected \nagainst the most common attacks.\nLet us take stock of what we have said so far. Lots and lots of system vulner­\nabilities have been observed and documented. SANS and FBI have been issuing the \ntop 20 and top 10 lists annually for several years now. However, there is a stubborn \npersistence of a number of vulnerabilities making the list year after year. This obser­\nvation, together with the nature of software, as we have explored in Section 4.2.1, \nTable 4.2  Most common vulnerabilities in the last year\nOperating system\t\nVulnerability\nW\t\nOutlook\nW\t\nInternet Information Server (IIS)\nW\t\nMicrosoft Data Access Components (MDAC)\nW\t\nWindows Peer to Peer File Sharing (P2P)\nW\t\nMicrosoft SQL Server\nU/L\t\nBIND (domain name service)\nU/L\t\nRPC\nU/L\t\nOpenSSL\nU/L\t\nSSH\nU/L\t\nSNMP\nU/L\t\nApache\nU/L\t\nSendmail\nDepartment of Homeland Security in Cooperation with SANS Institute: http://www.sans.org/top20/top20paller03.pdf\n" }, { "page_number": 118, "text": "means it is possible that what has been observed so far is a very small fraction of \na potential sea of vulnerabilities; many of them probably will never be discovered \nbecause software will ever be subjected to either unexpected input sequences or \noperated in unexpected environments.\nBesides the inherently embedded vulnerabilities resulting from flawed designs, \nthere are also vulnerabilities introduced in the operating environments as a result \nof incorrect implementations by operators. The products may not have weaknesses \ninitially, but such weaknesses may be introduced as a result of bad or careless instal­\nlations. For example, quite often products are shipped to customers with security \nfeatures disabled, forcing the technology users to go through the difficult and error-\nprone process of properly enabling the security features by oneself.\n4.2.5  Changing Nature of Hacker Technologies and Activities\nIt is ironic that as “useful” technology develops so does the “bad” technology. \nWhat we call useful technology is the development in all computer and telecom­\nmunication technologies that are driving the Internet, telecommunication, and \nthe Web. “Bad” technology is the technology that system intruders are using to \nattack systems. 
Unfortunately these technologies are all developing in tandem. In \nfact, there are times when it looks like hacker technologies are developing faster \nthan the rest of the technologies. One thing is clear, though: hacker technology is \nflourishing.\nAlthough it used to take intelligence, determination, enthusiasm, and persever­\nance to become a hacker, it now requires a good search engine, time, a little bit of \nknowledge of what to do, and owning a computer. There are thousands of hacker \nWeb sites with the latest in script technologies and hundreds of recipe books and \nsources on how to put together an impact virus or a worm and how to upload it.\nThe ease of availability of these hacker tools; the ability of hackers to disguise \ntheir identities and locations; the automation of attack technology which further \ndistances the attacker from the attack; the fact that attackers can go unidentified, \nlimiting the fear of prosecution; and the ease of hacker knowledge acquisition have \nput a new twist in the art of hacking, making it seem easy and hence attracting more \nand younger disciples.\nBesides the ease of becoming a hacker and acquiring hacker tools, because of \nthe Internet sprawl, hacker impact has become overwhelming, impressive, and \nmore destructive in shorter times than ever before. Take, for example, recent virus \nincidents such as the “I Love You,” “Code Red,” “Slammer,” and the “Blaster” \nworms’ spread. These worms and viruses probably spread around the world much \nfaster than the human cold virus and the dreaded severe acute respiratory syndrome \n(SARS).\nWhat these incidents have demonstrated is that the turnaround time, the time \na virus is first launched in the wild and the time it is first cited as affecting the \nsystem, is becoming incredibly shorter. Both the turnaround time and the speed \n4.2  Sources of Vulnerabilities\b\n99\n" }, { "page_number": 119, "text": "100\b\n4  Computer Network Vulnerabilities\nat which the virus or a worm spreads reduce the response time, the time a security \nincident is first cited in the system and the time an effective response to the incident \nshould have been initiated. When the response time is very short, security experts \ndo not have enough time to respond to a security incident effectively. In a broader \nframework, when the turnaround time is very short, system security experts who \ndevelop patches do not have enough time to reverse-engineer and analyze the attack \nin order to produce counter immunization codes. It has been and it is still the case \nin many security incidents for anti-virus companies to take hours and sometime \ndays, such as in the case of the Code Red virus, to come up with an effective cure. \nHowever, even after a patch is developed, it takes time before it is filtered down \nto the system managers. Meantime, the damage has already been done, and it is \nmultiplying. Likewise, system administrators and users have little time to protect \ntheir systems.\n4.2.6  Difficulty of Fixing Vulnerable Systems\nIn his testimony to the Subcommittee on Government Efficiency, Financial \nManagement and Intergovernmental Relations of the U.S. House Committee on \n­Government Reform, Richard D. Pethia, Director, CERT Centers, pointed out \nthe difficulty in fixing known system vulnerabilities as one of the sources of \nsystem vulnerabilities. 
His concern was based on a number of factors, including \nthe ever-rising number of system vulnerabilities and the ability of system admin­\nistrators to cope with the number of patches issued for these vulnerabilities. As \nthe number of vulnerabilities rises, system and network administrators face a \ndifficult situation. They are challenged with keeping up with all the systems they \nhave and all the patches released for those systems. Patches can be difficult to \napply and might even have unexpected side effects as a result of compatibility \nissues [2].\nBesides the problem of keeping abreast of the number of vulnerabilities and the \ncorresponding patches, there are also logistic problems between the time at which a \nvendor releases a security patch and the time at which a system administrator fixes \nthe vulnerable computer system. There are several factors affecting the quick fix­\ning of patches. Sometimes, it is the logistics of the distribution of patches. Many \nvendors disseminate the patches on their Web sites; others send e-mail alerts. How­\never, sometimes busy systems administrators do not get around to these e-mails and \nsecurity alerts until sometime after. Sometimes, it can be months or years before the \npatches are implemented on a majority of the vulnerable computers.\nMany system administrators are facing the same chronic problems: the never-\nending system maintenance, limited resources, and highly demanding management. \nUnder these conditions, the ever-increasing security system complexity, increasing \nsystem vulnerabilities, and the fact that many administrators do not fully understand \nthe security risks, system administrators neither give security a high enough priority \n" }, { "page_number": 120, "text": "nor assign adequate resources. Exacerbating the problem is the fact that the demand \nfor skilled system administrators far exceeds the supply [2].\n4.2.7  Limits of Effectiveness of Reactive Solutions\nData from Table 4.1 shows a growing number of system attacks reported. However, \ngiven that just a small percentage of all attacks is reported, this table indicates a \nserious growing system security problem. As we have pointed out earlier, hacker \ntechnology is becoming more readily available, easier to get and assemble, more \ncomplex, and their effects more far reaching. All these indicate that urgent action is \nneeded to find an effective solution to this monstrous problem.\nThe security community, including scrupulous vendors, have come up with vari­\nous solutions, some good and others not. In fact, in an unexpected reversal of for­\ntunes, one of the new security problems is to find a “good” solution from among \nthousands of solutions and to find an “expert” security option from the many dif­\nferent views.\nAre we reaching the limits of our efforts, as a community, to come up with a \nfew good and effective solutions to this security problem? There are many signs to \nsupport an affirmative answer to this question. It is clear that we are reaching the \nlimits of effectiveness of our reactive solutions. Richard D. Pethia gives the follow­\ning reasons [2]:\nThe number of vulnerabilities in commercial off-the-shelf software is now at the \n• \nlevel that it is virtually impossible for any but the best resourced organizations to \nkeep up with the vulnerability fixes.\nThe Internet now connects more than 109,000,000 computers and continues to \n• \ngrow at a rapid pace. 
At any point in time, there are hundreds of thousands of \nconnected computers that are vulnerable to one form of attack or another.\nAttack technology has now advanced to the point where it is easy for attackers \n• \nto take advantage of these vulnerable machines and harness them together to \nlaunch high-powered attacks.\nMany attacks are now fully automated, thus reducing the turnaround time even \n• \nfurther as they spread around cyberspace.\nThe attack technology has become increasingly complex and in some cases \n• \nintentionally stealthy, thus reducing the turnaround time and increasing the \ntime it takes to discover and analyze the attack mechanisms in order to produce \nantidotes.\nInternet users have become increasingly dependent on the Internet and now use \n• \nit for many critical applications so that a relatively minor attack has the potential \nto cause huge damages.\nWithout being overly pessimistic, these factors, taken together, indicate that \nthere is a high probability that more attacks are likely and since they are getting \n4.2  Sources of Vulnerabilities\b\n101\n" }, { "page_number": 121, "text": "102\b\n4  Computer Network Vulnerabilities\nmore complex and attacking more computers, they are likely to cause significant \ndevastating economic losses and service disruptions.\n4.2.8  Social Engineering\nAccording to John Palumbo, social engineering is an outside hacker’s use of psy­\nchological tricks on legitimate users of a computer system in order to gain the infor­\nmation (usernames and passwords) one needs to gain access to the system [5].\nMany have classified social engineering as a diversion, in the process of system \nattack, on people’s intelligence to utilize two human weaknesses: first, no one wants \nto be considered ignorant and second is human trust. Ironically, these are two weak­\nnesses that have made social engineering difficult to fight because no one wants \nto admit falling for it. This has made social engineering a critical system security \nhole.\nMany hackers have and continue to use it to get into protected systems. Kevin \nMitnick, the notorious hacker, used it successfully and was arguably one of the \nmost ingenious hackers of our time; he was definitely very gifted with his ability to \nsocially engineer just about anybody [5].\nHackers use many approaches to social engineering, including the \nfollowing [6]:\nTelephone.\n• \n This is the most classic approach, in which hackers call up a targeted \nindividual in a position of authority or relevance and initiate a conversation with \nthe goal of gradually pulling information out of the target. This is done mostly \nto help desks and main telephone switch boards. Caller ID cannot help because \nhackers can bypass it through tricks and the target truly believes that the hacker \nis actually calling from inside the corporation.\nOnline.\n• \n Hackers are harvesting a boom of vital information online from careless \nusers. The reliance on and excessive use of the Internet has resulted in people \nhaving several online accounts. Currently an average user has about four to five \naccounts including one for home use, one for work, and an additional one or two \nfor social or professional organizations. With many accounts, as probably any \nreader may concur, one is bound to forget some passwords, especially the least \nused ones. To overcome this problem, users mistakenly use one password on \nseveral accounts. 
Hackers know this and they regularly target these individuals \nwith clever baits such as telling them they won lotteries or were finalists in \nsweepstakes where computers select winners, or they have won a specific number \nof prizes in a lotto, where they were computer selected. However, in order to \nget the award, the user must fill in an online form, usually Web-based, and this \ntransmits the password to the hacker. Hackers have used hundreds of tricks on \nunsuspecting users in order for them to surrender their passwords.\nDumpster diving\n• \n is now a growing technique of information theft not only \nin social engineering but more so in identity theft. The technique, also known \nas trashing, involves an information thief scavenging through individual and \n" }, { "page_number": 122, "text": "company dumpsters for information. Large and critical information can be dug \nout of dumpsters and trash cans. Dumpster diving can recover from dumpsters \nand trash cans individual social security numbers, bank accounts, individual vital \nrecords, and a whole list of personal and work-related information that gives the \nhackers the exact keys they need to unlock the network.\nIn person\n• \n is the oldest of the information stealing techniques that pre-dates \ncomputers. It involves a person physically walking into an organization’s offices \nand casually checking out note boards, trash diving into bathroom trash cans \nand company hallway dumpsters, and eating lunches together and initiating \nconversations with employees. In big companies, this can be done only on a few \noccasions before trusted friendships develop. From such friendships, information \ncan be passed unconsciously.\nSnail mail\n• \n is done in several ways and is not limited only to social engineering \nbut has also been used in identity theft and a number of other crimes. It has \nbeen in the news recently because of identity theft. It is done in two ways: the \nhacker picks a victim and goes to the Post Office and puts in a change of address \nform to a new box number. This gives the hacker a way to intercept all snail \nmail of the victim. From the intercepted mail, the hacker can gather a great \ndeal of information that may include the victim’s bank and credit card account \nnumbers and access control codes and pins by claiming to have forgotten his or \nher password or pin and requesting a re-issue in the mail. In another form, the \nhacker drops a bogus survey in the victim’s mailbox offering baits of cash award \nfor completing a “few simple” questions and mailing them in. The questions, in \nfact, request far more than simple information from an unsuspecting victim.\nImpersonation\n• \n is also an old trick played on unsuspecting victims by criminals \nfor a number of goodies. These days the goodies are information. Impersonation \nis generally acting out a victim’s character role. It involves the hacker playing a \nrole and passing himself or herself as the victim. In the role, the thief or hacker \ncan then get into legitimate contacts that lead to the needed information. In large \norganizations with hundreds or thousands of employees scattered around the \nglobe, it is very easy to impersonate a vice president or a chief operations officer. 
\nSince most employees always want to look good to their bosses, they will end up \nsupplying the requested information to the imposter.\nOverall, social engineering is a cheap but rather threatening security problem \nthat is very difficult to deal with.\n4.3  Vulnerability Assessment\nVulnerability assessment is a process that works on a system to identify, track, and \nmanage the repair of vulnerabilities on the system. The assortment of items that are \nchecked by this process in a system under review varies depending on the organization. \nIt may include all desktops, servers, routers, and firewalls. Most vulnerability assess­\nment services will provide system administrators with\n4.3  Vulnerability Assessment\b\n103\n" }, { "page_number": 123, "text": "104\b\n4  Computer Network Vulnerabilities\nnetwork mapping and system finger printing of all known vulnerabilities\n• \na complete vulnerability analysis and ranking of all exploitable weaknesses based \n• \non potential impact and likelihood of occurrence for all services on each host\nprioritized list of misconfigurations.\n• \nIn addition, at the end of the process, a final report is always produced detailing \nthe findings and the best way to go about overcoming such vulnerabilities. This report \nconsists of prioritized recommendations for mitigating or eliminating weaknesses, and \nbased on an organization’s operational schedule, it also contains recommendations of \nfurther reassessments of the system within given time intervals or on a regular basis.\n4.3.1  Vulnerability Assessment Services\nDue to the massive growth of the number of companies and organizations own­\ning their own networks, the growth of vulnerability monitoring technologies, the \nincrease in network intrusions and attacks with viruses, and world-wide publicity of \nsuch attacks, there is a growing number of companies offering system vulnerability \nservices. These services, targeting the internals and perimeter of the system, Web-\nbased applications, and providing a baseline to measure subsequent attacks against, \ninclude scanning, assessment and penetration testing, and application assessment.\n4.3.1.1  Vulnerability Scanning\nVulnerability scanning services provide a comprehensive security review of the sys­\ntem, including both the perimeter and system internals. The aim of this kind of scan­\nning is to spot critical vulnerabilities and gaps in the system’s security practices. \nComprehensive system scanning usually results in a number of both false positives \nand negatives. It is the job of the system administrator to find ways of dealing \nwith these false positives and negatives. The final report produced after each scan \nconsists of strategic advice and prioritized recommendations to ensure that critical \nholes are addressed first. System scanning can be scheduled, depending on the level \nof the requested scan, by the system user or the service provider, to run automati­\ncally and report by either automated or periodic e-mail to a designated user. The \nscans can also be stored on a secure server for future review.\n4.3.1.2  Vulnerability Assessment and Penetration Testing\nThis phase of vulnerability assessment is a hands-on testing of a system for identi­\nfied and unidentified vulnerabilities. All known hacking techniques and tools are \ntested during this phase to reproduce real-world attack scenarios. 
One of the out­\ncomes of these real-life testings is that new and sometimes obscure vulnerabilities \nare found, processes and procedures of attack are identified, and sources and severity \nof ­vulnerabilities are categorized and prioritized based on the user-provided risks.\n" }, { "page_number": 124, "text": "4.3.1.3  Application Assessment\nAs Web applications become more widespread and more entrenched into e-com­\nmerce and all other commercial and business areas, applications are slowly becom­\ning the main interface between the user and the network. The increased demands on \napplications have resulted into new directions in automation and dynamism of these \napplications. As we saw in Chapter 6, scripting in Web applications, for example, has \nopened a new security paradigm in system administration. Many organizations have \ngotten sense of these dangers and are making substantial progress in protecting their \nsystems from attacks via Web-based applications. Assessing the security of system \napplications is, therefore, becoming a special skills requirement needed to secure \ncritical systems.\n4.3.2  Advantages of Vulnerability Assessment Services\nVulnerability online services have many advantages for system administrators. \nThey can, and actually always do, provide and develop signatures and updates for \nnew vulnerabilities and automatically include them in the next scan. This eliminates \nthe need for the system administrator to schedule periodic updates.\nReports from these services are very detailed not only on the vulnerabilities, \nsources of vulnerabilities, and existence of false positives, but they also focus on \nvulnerability identification and provide more information on system configuration \nthat may not be readily available to system administrators. This information alone \ngoes a long way in providing additional security awareness to security experts \nabout additional avenues whereby systems may be attacked. The reports are then \nencrypted and stored in secure databases accessible only with the proper user cre­\ndentials. This is because these reports contain critically vital data on the security \nof the system and they could, therefore, be a pot of gold for hackers if found. This \nadditional care and awareness adds security to the system.\nProbably, the best advantage to an overworked and many times resource-strapped \nsystem administrator is the automated and regularly scheduled scan of all network \nresources. They provide, in addition, a badly needed third-party “security eye,” thus \nhelping the administrator to provide an objective yet independent security evalua­\ntion of the system.\nExercises\n  1.\t What is a vulnerability? What do you understand by a system vulnerability?\n  2.\t Discuss four sources of system vulnerabilities.\n  3.\t What are the best ways to identify system vulnerabilities?\n  4.\t What is innovative misuse? What role does it play in the search for solutions to \nsystem vulnerability?\nExercises\b\n105\n" }, { "page_number": 125, "text": "106\b\n4  Computer Network Vulnerabilities\n  5.\t What is incomplete implementation? Is it possible to deal with incomplete \nimplementation as a way of dealing with system vulnerabilities? In other words, \nis it possible to completely deal with incomplete implementation?\n  6.\t What is social engineering? Why is it such a big issue yet so cheap to perform? \nIs it possible to completely deal with it? 
Why or why not?
  7.  Some have described social engineering as being perpetuated by our internal fears. Discuss those fears.
  8.  What is the role of software security testing in the process of finding solutions to system vulnerabilities?
  9.  Some have sounded an apocalyptic voice as far as finding solutions to system vulnerabilities is concerned. Should we take them seriously? Support your response.
10.  What is innovative misuse? What role does it play in the search for solutions to system vulnerabilities?

Advanced Exercises
1.  Why are vulnerabilities difficult to predict?
2.  Discuss the sources of system vulnerabilities.
3.  Is it possible to locate all vulnerabilities in a network? In other words, can one make an authoritative list of those vulnerabilities? Defend your response.
4.  Why are design flaws such a big issue in the study of vulnerability?
5.  Part of the problem in design flaws involves issues associated with software verification and validation (V&V). What is the role of V&V in system vulnerability?

References
1.  Pipkin, Donald. Information Security: Protecting the Global Enterprise. Upper Saddle River, NJ: Prentice Hall PTR, 2000.
2.  Pethia, Richard D. Information Technology - Essential But Vulnerable: How Prepared Are We for Attacks? http://www.cert.org/congressional_testimony/Pethia_testimony_Sep26.html
3.  Kizza, Joseph M. Ethical and Social Issues in the Information Age. Second Edition. New York: Springer-Verlag, 2003.
4.  Hurley, Jim and Eric Hemmendinger. Open Source and Linux: 2002 Poster Children for Security Problems. http://www.aberdeen.com/ab_abstracts/2002/11/11020005.htm
5.  Palumbo, John. Social Engineering: What Is It, Why Is So Little Said About It, and What Can Be Done? SANS, http://www.sans.org/rr/social/social.php
6.  Granger, Sarah. Social Engineering Fundamentals, Part I: Hacker Tactics. http://www.securityfocus.com/infocus/1527

Chapter 5
Cyber Crimes and Hackers

5.1  Introduction

The greatest threats to the security, privacy, and reliability of computer networks and other related information systems in general are cyber crimes committed by cyber criminals, most importantly hackers. Judging by the damage caused by past cyber criminal and hacker attacks on the computer networks of businesses, governments, and individuals, resulting in inconvenience and loss of productivity and credibility, one cannot fail to see that there is a growing community demand on software and hardware companies to create more secure products that can be used to identify threats and vulnerabilities, to fix problems, and to deliver security solutions.
The rise of the hacker factor, the unprecedented and phenomenal growth of the Internet, the latest developments in globalization, hardware miniaturization, wireless and mobile technology, the mushrooming of connected computer networks, and society's ever-growing appetite for and dependency on computers have all greatly increased the threats that both hackers and cyber crimes pose to global communication and computer networks. Together, hackers and cyber crimes are creating serious social, ethical, legal, political, and cultural problems.
These problems involve, among others, \nidentity theft, hacking, electronic fraud, intellectual property theft, and national \ncritical infrastructure attacks and are generating heated debates on finding effective \nways to deal with them, if not stop them.\nIndustry and governments around the globe are responding to these threats \nthrough a variety of approaches and collaborations such as\nFormation of organizations, such as the \n• \nInformation Sharing and Analysis \nCenters (ISACs).\nGetting together of industry portals and ISPs on how to deal with distributed \n• \ndenial of service attacks including the establishment of Computer Emergency \nResponse Teams (CERTs).\nIncreasing the use of sophisticated tools and services by companies to deal \n• \nwith network vulnerabilities. Such tools include the formation of Private Sector \nSecurity Organizations (PSSOs) such as SecurityFocus, Bugtraq, and the \nInternational Chamber of Commerce’s Cybercrime Unit.\n" }, { "page_number": 127, "text": "108\b\n5  Cyber Crimes and Hackers\nSetting up national strategies similar to the \n• \nU.S. National Strategy to Secure \nCyberspace, an umbrella initiative of all initiatives from various sectors of the national \ncritical infrastructure grid and the Council of Europe Convention on Cybercrimes.\n5.2  Cyber Crimes\nAccording to the director of the U.S. National Infrastructure Protection Center \n(NIPC), cyber crimes present the greatest danger to e-commerce and the general \npublic in general [1]. The threat of crime using the Internet is real and growing and it \nis likely to be the scourge of the 21st century. A cyber crime is a crime like any other \ncrime, except that in this case, the illegal act must involve a connected computing \nsystem either as an object of a crime, an instrument used to commit a crime or a \nrepository of evidence related to a crime. Alternatively, one can define a cyber crime \nas an act of unauthorized intervention into the working of the telecommunication \nnetworks or/and the sanctioning of an authorized access to the resources of the \ncomputing elements in a network that leads to a threat to the system’s infrastructure \nor life or that causes significant property loss.\nBecause of the variations in jurisdiction boundaries, cyber acts are defined as illegal \nin different ways depending on the communities in those boundaries. Communities \ndefine acts to be illegal if such acts fall within the domains of that community’s com­\nmission of crimes that a legislature of a state or a nation has specified and approved. 
\nBoth the International Convention of Cyber Crimes and the European Convention on \nCyber Crimes have outlined the list of these crimes to include the following:\nUnlawful access to information\n• \nIllegal interception of information\n• \nUnlawful use of telecommunication equipment.\n• \nForgery with use of computer measures\n• \nIntrusions of the Public Switched and Packet Network\n• \nNetwork integrity violations\n• \nPrivacy violations\n• \nIndustrial espionage\n• \nPirated computer software\n• \nFraud using a computing system\n• \nInternet/email abuse\n• \nUsing computers or computer technology to commit murder, terrorism, \n• \npornography, and hacking.\n5.2.1  Ways of Executing Cyber Crimes\nBecause for any crime to be classified as a cyber crime, it must be committed with \nthe help of a computing resource, as defined above, cyber crimes are executed in \none of two ways: penetration and denial of service attacks.\n" }, { "page_number": 128, "text": "5.2.1.1  Penetration\nA penetration cyber attack is a successful unauthorized access to a protected system \nresource, or a successful unauthorized access to an automated system, or a success­\nful act of bypassing the security mechanisms of a computing system [2]. A penetra­\ntion cyber attack can also be defined as any attack that violates the integrity and \nconfidentiality of a computing system’s host.\nHowever defined, a penetration cyber attack involves breaking into a computing \nsystem and using known security vulnerabilities to gain access to any cyberspace \nresource. With full penetration, an intruder has full access to all that computing \nsystem’s resources. Full penetration, therefore, allows an intruder to alter data \nfiles, change data, plant viruses, or install damaging Trojan horse programs into \nthe computing system. It is also possible for intruders, especially if the victim \ncomputer is on a computer network, to use it as a launching pad to attack other net­\nwork resources. Penetration attacks can be local, where the intruder gains access \nto a computer on a LAN on which the program is run, or global on a WAN such \nas the Internet, where an attack can originate thousands of miles from the victim \ncomputer.\n5.2.1.2  Distributed Denial of Service (DDoS)\nA denial of service is an interruption of service resulting from system unavailability \nor destruction. It is prevents any part of a target system from functioning as planned. \nThis includes any action that causes unauthorized destruction, modification, or \ndelay of service. Denial of service can also be caused by intentional degradation \nor blocking of computer or network resources [2]. These denial-of-service attacks, \ncommonly known as distributed denial of service (DDoS) attacks, are a new form \nof cyber attacks. They target computers connected to the Internet. They are not pen­\netration attacks and, therefore, they do not change, alter, destroy, or modify system \nresources. However, they affect the system through diminishing the system’s ability \nto function; hence, they are capable of degrading of the system’s performance even­\ntually bringing a system down without destroying its resources.\nAccording to the Economist [3], the software tools used to carry out DDoS first \ncame to light in the summer of 1999, and the first security specialists conference to \ndiscuss how to deal with them was held in November of the same year. 
Since then, \nthere has been a growing trend in DDoS attacks mainly as a result of the growing \nnumber, sizes, and scope of computer networks which increase first an attacker’s \naccessibility to networks and second the number of victims. But at the same time, as \nthe victim base and sizes of computer networks have increased, there have been no \nto little efforts to implement spoof prevention filters or any other preventive action. \nIn particular, security managers have implemented little, if any, system protection \nagainst these attacks.\nLike penetration electronic attacks (e-attacks), DDoS attacks can also be either \nlocal, where they can shut down LAN computers, or global, originating thousands \n5.2  Cyber Crimes\b\n109\n" }, { "page_number": 129, "text": "110\b\n5  Cyber Crimes and Hackers\nof miles away on the Internet, as was the case in the Canadian-generated DDoS \nattacks. Attacks in this category include the following:\nIP-spoofing is forging of an IP packet address. In particular, a source address \n• \nin the IP packet is forged. Since network routers use packet destination address \nto route packets in the network, the only time a source address is used is by the \ndestination host to respond back to the source host. So forging the source IP \naddress causes the responses to be misdirected, thus creating problems in the \nnetwork. Many network attacks are a result of IP spoofing.\nSYN-Flooding: In Chapter 3, we discussed a three-way handshake used by the \n• \nTCP protocols to initiate a connection between two network elements. During \nthe handshake, the port door is left half open. A SYN flooding attack is flooding \nthe target system with so many connection requests coming from spoofed source \naddresses that the victim server cannot complete because of the bogus source \naddresses. In the process, all its memory gets hogged up and the victim is thus \noverwhelmed by these requests and can be brought down.\nSmurf attack: In this attack, the intruder sends a large number of spoofed ICMP \n• \nEcho requests to broadcast IP addresses. Hosts on the broadcast multicast IP \nnetwork, say, respond to these bogus requests with reply ICMP Echo. This \nmay significantly multiply the reply ICMP Echos to the hosts with spoofed \naddresses.\nBuffer overflow is an attack in which the attacker floods a carefully chosen field \n• \nsuch as an address field with more characters than it can accommodate. These \nexcessive characters, in malicious cases, are actually executable code, which the \nattacker can execute to cause havoc in the system, effectively giving the attacker \ncontrol of the system. Because anyone with little knowledge of the system can \nuse this kind of attack, buffer overflow has become one of the most serious \nclasses of security threats.\nPing of Death: A system attacker sends IP packets that are larger than the 65,536 \n• \nbytes allowed by the IP protocol. Many operating systems, including network \noperating systems, cannot handle these oversized packets; so, they freeze and \neventually crash.\nLand.c attack: The land.c program sends TCP SYN packets whose source and \n• \ndestination IP addresses and port numbers are those of the victim’s.\nTeardrop.c attack uses a program that causes fragmentation of a TCP packet. It \n• \nexploits a re-assembly and causes the victim system to crash or hang.\nSequence number sniffing: In this attack, the intruder takes advantage of the \n• \npredictability of sequence numbers used in TCP implementations. 
The attacker then uses the sniffed next sequence number to establish legitimacy.

Motives of DDoS Attacks

DDoS attacks are not like penetration attacks, where the intruders expect to gain from such attacks; they are simply a nuisance to the system. As we pointed out earlier, since these attacks do not penetrate systems, they do not affect the integrity of system resources other than denying access to them. This means that the intruders do not expect the material gains that would be expected from penetration attacks. Because of this, most DDoS attacks are generated with very specific goals. Among them are
• preventing others from using a network connection with such attacks as Smurf, UDP, and ping flood attacks;
• preventing others from using a host or a service by severely impairing or disabling such a host or its IP stack with such attacks as Land, Teardrop, Bonk, Boink, SYN flooding, and Ping of Death;
• notoriety for computer-savvy individuals who want to prove their ability and competence in order to gain publicity.

5.2.2  Cyber Criminals

Who are the cyber criminals? They are ordinary users of cyberspace with a message. As the number of users swells, the number of criminals among them increases at almost the same rate. A number of studies have identified the following groups as the most likely sources of cyber crimes [4]:
• Insiders: For a long time, system attacks were limited to in-house, employee-generated attacks on systems and theft of company property. In fact, disgruntled insiders are a major source of computer crimes because they do not need a great deal of knowledge about the victim computer system. In many cases, such insiders use the system every day. This allows them to gain unrestricted access to the computer system and to damage the system and/or data. The 1999 Computer Security Institute/FBI report notes that 55% of respondents reported malicious activity by insiders [5].
• Hackers: Hackers are computer enthusiasts who know a lot about computers and computer networks and use this knowledge with criminal intent. Since the mid-1980s, computer network hacking has been on the rise, mostly because of the widespread use of the Internet.
• Criminal groups: A number of cyber crimes are carried out by criminal groups for different motives ranging from settling scores to pure thievery. For example, criminal groups with hacking abilities have broken into credit card companies to steal thousands of credit card numbers (see Chapter 3).
• Disgruntled ex-employees: Many studies have shown that disgruntled ex-employees also pose a serious threat to organizations as sources of cyber crimes targeting their former employers over the employee-employer issues that led to the separation. In some cases, ex-employees simply use their knowledge of the system to attack the organization for purely financial gain.
• Economic espionage spies: The growth of cyberspace and e-commerce and the forces of globalization have created a new source of crime syndicates: the organized economic spies that plough the Internet looking for company secrets.
\nAs the price tag for original research skyrockets, and competition in the market \nplace becomes global, companies around the globe are ready to pay any amount \nfor stolen commercial, marketing, and industrial secrets.\n5.3  Hackers\nThe word hacker has changed meaning over the years as technology changed. Cur­\nrently, the word has two opposite meanings. One definition talks of a computer \nenthusiast as an individual who enjoys exploring the details of computers and how \nto stretch their capabilities, as opposed to most users who prefer to learn only the \nminimum necessary. The opposite definition talks of a malicious or inquisitive med­\ndler who tries to discover information by poking around [2].\nBefore acquiring its current derogatory meaning, the term hacking used to \nmean expert writing and modification of computer programs. Hackers were \nconsidered people who were highly knowledgeable about computing; they were \nconsidered computer experts who could make the computer do all the wonders \nthrough programming. Today, however, hacking refers to a process of gaining \nunauthorized access into a computer system for a variety of purposes, includ­\ning the stealing and altering of data and electronic demonstrations. For some \ntime now, hacking as a political or social demonstration has been used during \ninternational crises. During a crisis period, hacking attacks and other Internet \nsecurity breaches usually spike in part because of sentiments over the crisis. For \nexample, during the two Iraq wars, there were elevated levels of hacker activi­\nties. According to the Atlanta-based Internet Security Systems, around the start \nof the first Iraq war, there was a sharp increase of about 37 percent from the \nfourth quarter of the year before, the largest quarterly spike the company has \never recorded [1].\n5.3.1  History of Hacking\nThe history of hacking has taken as many twists and turns as the word hacking itself \nhas. One can say that the history of hacking actually began with the invention of \nthe telephone in 1876 by Alexander Graham Bell. For it was this one invention that \nmade internetworking possible. There is agreement among computer historians that \nthe term hack was born at MIT. According to Slatalla, in the 1960s, MIT geeks had \nan insatiable curiosity about how things worked. However, in those days of colossal \nmainframe computers, “it was very expensive to run those slow-moving hunks of \nmetal; programmers had limited access to the dinosaurs. So, the smarter ones cre­\nated what they called “hacks” – programming shortcuts – to complete computing \ntasks more quickly. Sometimes their shortcuts were more elegant than the original \nprogram” [6].\n" }, { "page_number": 132, "text": "Although many early hack activities had motives, many took them to be either \nhighly admirable acts by expert computer enthusiasts or elaborate practical jokes, \nincluding the first recorded hack activity in 1969 by Joe Engressia, commonly \nknown as “The Whistler.” Engressia, the grand father of phone phreaking, was born \nblind and had a high pitch which he used to his advantage. He used to whistle into \nthe phones and could whistle perfectly any tone he wanted. He discovered phreaking \nwhile listening to the error messages caused by his calling of unconnected numbers. \nWhile listening to these messages he used to whistle into the phone and quite often \ngot cut off. 
After getting cut off numerous times, he phoned AT&T to inquire why \nwhen he whistled a tune into the phone receiver he was cut off. He was surprised by \nan explanation on the working of the 2600-Hz tone by a phone company engineer. \nJoe learned how to phreak. It is said that phreakers across the world used to call Joe \nto tune their “blue boxes” [7].\nBy 1971 a Vietnam veteran, John Draper, commonly known as “Captain Crunch,” \ntook this practical whistling joke further and discovered that using a free toy whistle \nfrom a cereal box to carefully blow into the receiver of a telephone produces the \nprecise tone of 2600 Hz needed to make free long distance phone calls [8]. With \nthis act, “phreaking,” a cousin of hacking, was born and it entered our language. \nThree distinct terms began to emerge: hacker, cracker, and phreaker. Those who \nwanted the word hack to remain pure and innocent preferred to be called hackers; \nthose who break into computer systems were called crackers, and those targeting \nphones came to be known as phreakers. Following Captain Crunch’s instructions, \nAl Gilbertson (not his real name) created the famous little “blue box.” Gilbertson’s \nbox was essentially a super telephone operator because it gave anyone who used it \nfree access to any telephone exchange. In the late 1971, Ron Anderson published an \narticle on the existence and working of this little blue box in Esquire magazine. Its \npublication created an explosive growth in the use of blue boxes and an initiation of \na new class of kids into phreaking [9].\nWith the starting of a limited national computer network by ARPNET, in the 1970s, \na limited form of a system of break-in from outsiders started appearing. Through \nthe 1970s, a number of developments gave impetus to the hacking movement. The \nfirst of these developments was the first publication of the Youth International Party \nLine newsletter by activist Abbie Hoffman, in which he erroneously advocated for \nfree phone calls by stating that phone calls are part of an unlimited reservoir and \nphreaking did not hurt anybody and therefore should be free. The newsletter, whose \nname was later changed to TAP, for Technical Assistance Program, by Hoffman’s \npublishing partner, Al Bell, continued to publish complex technical details on how \nto make free calls [6].\nThe second was the creation of the bulletin boards. Throughout the seventies, the \nhacker movement, although becoming more active, remained splinted. This came to \nan end in 1978 when two guys from Chicago, Randy Seuss and Ward Christiansen, \ncreated the first personal-computer bulletin-board system (BBS).\nThe third development was the debut of the personal computer (PC). In 1981, \nwhen IBM joined the PC wars, a new front in hacking was opened. The PCs brought \nthe computing power to more people because they were cheap, easy to program, \n5.3  Hackers\b\n113\n" }, { "page_number": 133, "text": "114\b\n5  Cyber Crimes and Hackers\nand somehow more portable. On the back of the PC was the movie “WarGames” in \n1983. The science fiction movie watched by millions glamorized and popularized \nhacking. The 1980s saw tremendous hacker activities with the formation of gang-\nlike hacking groups. 
Notorious individuals devised hacking names such as Kevin \nMitnick (“The Condor”), Lewis De Payne (“Roscoe”), Ian Murphy (“Captain zap”), \nBill Landreth (“The Cracker”), “Lex Luther” (founder of the Legion of Doom), \nChris Goggans (“Erik Bloodaxe”), Mark Abene (“Phiber Optik”), Adam Grant \n(“The Urvile”), Franklin Darden (“The Leftist”), Robert Riggs (“The Prophet”), \nLoyd Blankenship (“The Mentor”), Todd Lawrence (“The Marauder”), Scott \nChasin (“Doc Holiday”), Bruce Fancher (“Death Lord”), Patrick K. Kroupa (“Lord \nDigital”), James Salsman (“Karl Marx”), Steven G. Steinberg (“Frank Drake”), and \n“Professor Falken” [10].\nThe notorious hacking groups of the 1970s and 1980s included the “414- Club,” the \n“Legion of Doom,” the “Chaos Computer Club” based in Germany, “NuPrometheus \nLeague,” and the “Atlanta Three.” All these groups were targeting either phone \ncompanies where they would get free phone calls or computer systems to steal \ncredit card and individual user account numbers.\nDuring this period, a number of hacker publications were founded including \nThe Hacker Quarterly and Hacker’zine. In addition, bulletin boards were created, \nincluding “The Phoenix Fortress” and “Plovernet.” These forums gave the hacker \ncommunity a clearing house to share and trade hacking ideas.\nHacker activities became so worrisome that the FBI started active tracking and \narrests, including the arrest, the first one, of Ian Murphy (Captain Zap) in 1981 fol­\nlowed by the arrest of Kevin Mitnick in the same year. It is also during this period \nthat the hacker culture and activities went global with reported hacker attacks and \nactivities from Australia, Germany, Argentina, and the United States. Ever since, we \nhave been on a wild ride.\nThe first headline-making hacking incident that used a virus and got national \nand indeed global headlines took place in 1988 when a Cornell graduate stu­\ndent created a computer virus that crashed 6,000 computers and effectively shut \ndown the Internet for two days [11]. Robert Morris’s action forced the U.S.A. \ngovernment to form the federal Computer Emergency Response Team (CERT) \nto investigate similar and related attacks on the nation’s computer networks. The \nlaw enforcement agencies started to actively follow the comings and goings of \nthe activities of the Internet and sometimes eavesdropped on communication net­\nworks traffic. This did not sit well with some activists, who formed the Electronic \nFrontier Foundation in 1990 to defend the rights of those investigated for alleged \ncomputer hacking.\nThe 1990s saw heightened hacking activities and serious computer network \n“near” meltdowns, including the 1991 expectation without incident of the “Michel­\nangelo” virus that was expected to crash computers on March 6, 1992, the artist’s \n517th birthday. In 1995, the notorious, self-styled hacker Kevin Mitnick was first \narrested by the FBI on charges of computer fraud that involved the stealing of thou­\nsands of credit card numbers. In the second half of the 1990s, hacking activities \nincreased considerably, including the 1998 Solar Sunrise, a series of attacks \n" }, { "page_number": 134, "text": "targeting Pentagon computers that led the Pentagon to establish round-the-clock, \nonline guard duty at major military computer sites, and a coordinated attacker on \nPentagon computers by Ehud Tenebaum, an Israeli teenager known as “The Ana­\nlyzer,” and an American teen. 
The close of the twentieth century saw the heightened \nanxiety in the computing and computer user communities of both the millennium \nbug and the ever rising rate of computer network break-ins. So, in 1999, President \nClinton announced a $1.46 billion initiative to improve government computer secu­\nrity. The plan would establish a network of intrusion detection monitors for certain \nfederal agencies and encourage the private sector to do the same [8]. The year \n2000 probably saw the most costly and most powerful computer network attacks \nthat included the “Mellisa,” the “Love Bug,” the “Killer Resume,” and a number \nof devastating DDoS attacks. The following year, 2001, the elusive “Code Red” \nvirus was released. The future of viruses is as unpredictable as the kinds of viruses \nthemselves.\nThe period between 1980 and 2002 saw a sharp growth in reported incidents of \ncomputer attacks. Two factors contributed to this phenomenal growth: the growth \nof the Internet and the massive news coverage of virus incidents.\n5.3.2  Types of Hackers\nThere are several sub-sects of hackers based on hacking philosophies. The biggest \nsub-sects are crackers, hacktivists, and cyber terrorists.\n5.3.2.1  Crackers\nA cracker is one who breaks security on a system. Crackers are hardcore hackers \ncharacterized more as professional security breakers and thieves. The term was \nrecently coined only in the mid-1980s by purist hackers who wanted to differentiate \nthemselves from individuals with criminal motives whose sole purpose is to sneak \nthrough security systems. Purist hackers were concerned journalists were misusing \nthe term “hacker.” They were worried that the mass media failed to understand \nthe distinction between computer enthusiasts and computer criminals, calling both \nhackers. The distinction has, however, failed; so, the two terms hack and crack are \nstill being often used interchangeably.\nEven though the public still does not see the difference between hackers and \ncrackers, purist hackers are still arguing that there is a big difference between what \nthey do and what crackers do. For example, they say cyber terrorists, cyber vandals, \nand all criminal hackers are not hackers but crackers by the above definition.\nThere is a movement now of reformed crackers who are turning their hacking \nknowledge into legitimate use, forming enterprises to work for and with cyber secu­\nrity companies and sometimes law enforcement agencies to find and patch potential \nsecurity breaches before their former counterparts can take advantage of them.\n5.3  Hackers\b\n115\n" }, { "page_number": 135, "text": "116\b\n5  Cyber Crimes and Hackers\n5.3.2.2  Hacktivists\nHacktivism is a marriage between pure hacking and activism. Hacktivists are con­\nscious hackers with a cause. They grew out of the old phreakers. Hacktivists carry \nout their activism in an electronic form in hope of highlighting what they consider \nnoble causes such as institutional unethical or criminal actions and political and \nother causes. Hacktivism also includes acts of civil disobedience using cyberspace. \nThe tactics used in hacktivism change with the time and the technology. 
Just as in \nthe real world where activists use different approaches to get the message across, \nin cyberspace, hacktivists also use several approaches including automated e-mail \nbombs, web de-facing, virtual sit-ins, and computer viruses and worms [12].\nAutomated E-mail Bomb: E-mail bombs are used for a number of mainly activ­\nist issues such as social and political, electronic, and civil demonstrations, but can \nalso and has been used in a number of cases for coursing, revenge, and harassment \nof individuals or organizations. The method of approach here is to choose a selec­\ntion of individuals or organizations and bombard them with thousands of automated \ne-mails, which usually results in jamming and clogging the recipient’s mailbox. If \nseveral individuals are targeted on the same server, the bombardment may end up \ndisabling the mail server. Political electronic demonstrations were used in a number \nof global conflicts including the Kosovo and Iraq wars. And economic and social \ndemonstrations took place to electronically and physically picket the new world \neconomic order as was represented by the World Bank and the International Moni­\ntory Fund (IMF) sitting in Seattle, Washington, and Washington DC in U.S.A, and \nin Prague, Hungry, and Genoa, Italy.\nWeb De-facing: The other attention getter for the hacktivist is Web de-facing. It \nis a favorite form of hacktivism for nearly all causes, political, social, or economic. \nWith this approach, the hacktivists penetrate into the Web server and replace the \nselected site’s content and links with whatever they want the viewers to see. Some \nof this may be political, social, or economic messages. Another approach similar to \nweb de-facing is to use the domain name service (DNS) to change the DNS server \ncontent so that the victim’s domain name resolves to a carefully selected IP address \nof a site where the hackers have their content they want the viewers to see.\nOne contributing factor to Web de-facing is the simplicity of doing it. There is \ndetailed information for free on the Web outlining the bugs and vulnerabilities in \nboth the Web software and Web server protocols. There is also information that \ndetails what exploits are needed to penetrate a web server and de-face a victim’s \nWeb site. De-facing technology has, like all other technologies, been developing \nfast. It used to be that a hacker who wanted to deface a web site would, remotely or \notherwise, break into the server that held the web pages, gaining the access required \nto edit the Web page, then alter the page. Breaking into a Web server would be \nachieved through a remote exploit; for example, that would give the attacker access \nto the system. The hacktivist would then sniff connections between computers to \naccess remote systems.\nNewer scripts and Web server vulnerabilities now allow hackers to gain remote \naccess to Web sites on Web servers without gaining prior access to the server. This \n" }, { "page_number": 136, "text": "is so because vulnerabilities and newer scripts utilize bugs that overwrite or append \nto the existing page without ever gaining a valid login and password combination \nor any other form of legitimate access. 
As such, the attacker can only overwrite or \nappend to files on the system.\nSince a wide variety of Web sites offer both hacking and security scripts and \nutilities required to commit these acts, it is only a matter of minutes before scripts \nare written and web sites are selected and a victim is hit.\nAs an example, in November 2001, a Web de-facing duo calling themselves \nSm0ked Crew defaced The New York Times site. Sm0ked Crew had earlier hit the \nWeb sites of big name technology giants such as Hewlett-Packard, Compaq Com­\nputer, Gateway, Intel, AltaVista, and Disney’s Go.com [13].\nOn the political front, in April 2003, during the second Iraq war, hundred of sites \nwere defaced by both antiwar and pro-war hackers and hacktivists; among them \nwere a temporary defacement of the White House’s Web site and an attempt to \nshut down British Prime Minister Tony Blair’s official site. In addition to de-facing \nof Web sites, at least nine viruses or “denial of service” attacks cropped up in the \nweeks leading to war [1].\nVirtual Sit-ins: A virtual sit-in or a blockade is the cousin of a physical sit-in \nor blockade. These are actions of civil concern about an issue, whether social, eco­\nnomic, or political. It is a way to call public attention to that issue. The process \nworks through disruption of normal operation of a victim site and denying or pre­\nventing access to the site. This is done by the hacktivists generating thousands of \ndigital messages directed at the site either directly or through surrogates. In many \nof these civil disobedience cases, demonstrating hacktivists set up many automated \nsites that generate automatic messages directed to the victim site. By the time of this \nwriting, it is not clear whether virtual sit-ins are legal.\nOn April 20, 2001, a group calling itself the Electrohippies Collective had \na planned virtual sit-in of Web sites associated with the Free Trade Area of the \nAmericas (FTAA) conference. The sit-in, which started at 00.00 UTC, was to object \nto the FTAA conference and the entire FTAA process by generating an electronic \nrecord of public pressure through the server logs of the organizations concerned. \nFigure 5.1 shows a logo an activist group against global warming may display.\nOn February 7, 2002, during the annual meeting of the World Economic Forum \n(WEF) in New York City, more than 160,000 demonstrators, organized by among oth­\ners, Ricardo Dominguez, co-founder of the Electronic Disturbance Theater (EDT), \nwent online to stage a “virtual sit-in” at the WEF home page. Using downloaded \nFig. 5.1  A Logo of an \nActivist Group to Stop \nGlobal Warming\n5.3  Hackers\b\n117\nGlobal Warming\nCampaign Against Global Warming\n" }, { "page_number": 137, "text": "118\b\n5  Cyber Crimes and Hackers\nsoftware tools that constantly reloaded the target Web sites, the protestors replicated \na denial-of-service attack on the site on the first day of the conference and by 10:00 \nAM of that day, the WEF site had collapsed and remained down until late night of \nthe next day [14].\n5.3.2.3  Computer Viruses and Worms\nPerhaps, the most widely used and easiest method of hacktivists is sending viruses \nand worms. Both viruses and worms are forms of malicious code, although the \nworm code may be less dangerous. Other differences include the fact that worms are \nusually more autonomous and can spread on their own once delivered as needed, \nwhile a virus can only propagate piggy-backed on or embedded into another code. 
\nWe will give a more detailed discussion of both viruses and worms in Chapter 14.\n5.3.2.4  Cyberterrorists\nBased on motives, cyberterrorists can be divided into two categories: the terrorists \nand information warfare planners.\nTerrorists. The World Trade Center attack in 2001 brought home the realization \nand the potential for a terrorist attack on not only organizations’ digital infrastructure \nbut also a potential for an attack on the national critical infrastructure. Cyberterrorists \nwho are terrorists have many motives, ranging from political, economic, religious, \nto personal. Most often, the techniques of their terror are through intimidation, \ncoercion, or actual destruction of the target.\nInformation Warfare Planners. This involves war planners to threaten attacking \na target by disrupting the target’s essential services by electronically controlling and \nmanipulating information across computer networks or destroying the information \ninfrastructure.\n5.3.3  Hacker Motives\nSince the hacker world is closed to nonhackers and no hacker likes to discuss one’s \nsecrets with non-members of the hacker community, it is extremely difficult to \naccurately list all the hacker motives. From studies of attacked systems and some \nwriting from former hackers who are willing to speak out, we learn quite a lot \nabout this rather secretive community. For example, we have learned that hackers’ \nmotives can be put in two categories: those of the collective hacker community and \nthose of individual members. As a group, hackers like to interact with others on \nbulletin boards, through electronic mail, and in person. They are curious about new \ntechnologies, adventurous to control new technologies, and they have a desire and \nare willing to stimulate their intellect through learning from other hackers in order \n" }, { "page_number": 138, "text": "to be accepted in more prestigious hacker communities. Most important, they have \na common dislike for and resistance to authority.\nMost of these collective motives are reflected in the hacker ethic. According to \nSteven Levy, the hacker ethic has the following six tenets [1]:\nAccess to computers and anything that might teach you something about the way \n• \nthe world works should be unlimited and total. Always yield to the hands-on \nimperative!\nAll information should be free.\n• \nMistrust authority and promote decentralization.\n• \nHackers should be judged by their hacking, not bogus criteria such as degrees, \n• \nage, race, or position.\nYou can create art and beauty on a computer.\n• \nComputers can change your life for the better.\n• \nCollective hacker motives can also be reflected in the following three additional \nprinciples (“Doctor Crash,” 1986) [10]:\nHackers reject the notion that “businesses” are the only groups entitled to access \n• \nand use of modern technology.\nHacking is a major weapon in the fight against encroaching computer \n• \ntechnology.\nThe high cost of computing equipment is beyond the means of most hackers, \n• \nwhich results in the perception that hacking and phreaking are the only recourse \nto spreading computer literacy to the masses\nApart from collective motives, individual hackers, just as any other computer \nsystem users, have their own personal motives that drive their actions. 
Among these \nare the following [15]:\nVendetta and/or revenge: Although a typical hacking incident is usually non­\nfinancial and is, according to hacker profiles, for recognition and fame, there are \nsome incidents, especially from older hackers, that are for reasons that are only \nmundane, such as a promotion denied, a boyfriend or girlfriend taken, an ex-spouse \ngiven child custody, and other situations that may involve family and intimacy \nissues. These may result in hacker-generated attack targeting the individual or the \ncompany that is the cause of the displeasure. Also, social, political and religious \nissues, especially issues of passion, can drive rebellions in people that usually lead \nto revenge cyber attacks. These mass computer attacks are also increasingly being \nused as paybacks for what the attacker or attackers consider to be injustices done \nthat need to be avenged.\nJokes, Hoaxes, and Pranks: Even though it is extremely unlikely that seri­\nous hackers can start cyber attacks just for jokes, hoaxes, or pranks, there are less \nserious ones who can and have done so. Hoaxes are scare alerts started by one or \nmore malicious people and are passed on by innocent users who think that they \nare helping the community by spreading the warning. Most hoaxes are viruses and \nworms, although there are hoaxes that are computer-related folklore stories and \nurban legends or true stories sent out as text messages. Although many virus hoaxes \n5.3  Hackers\b\n119\n" }, { "page_number": 139, "text": "120\b\n5  Cyber Crimes and Hackers\nare false scares, there are some that may have some truth about them, but that often \nbecome greatly exaggerated, such as “The Good Times” and “The Great Salmon.” \nVirus hoaxes infect mailing lists, bulletin boards, and Usenet newsgroups. Worried \nsystem administrators sometimes contribute to this scare by posting dire warnings \nto their employees that become hoaxes themselves.\nThe most common hoax has been and still is that of the presence of a virus. \nAlmost every few weeks there is always a virus hoax of a virus, and the creator of \nsuch a hoax sometimes goes on to give remove remedies which, if one is not care­\nful, results in removing vital computer systems’ programs such as operating systems \nand boot programs. Pranks usually appear as scare messages, usually in the form of \nmass e-mails warning of serious problems on a certain issue. Innocent people usu­\nally read such e-mails and get worried. If it is a health issue, innocent people end up \ncalling their physicians or going into hospitals because of a prank.\nJokes, on the other hand, are not very common for a number of reasons: first, it \nis difficult to create a good joke for a mass of people such as the numbers of people \nin cyberspace, and second, it is difficult to create a clear joke that many people will \nappreciate.\nTerrorism: Although cyberterrorism has been going on at a low level, very few \npeople were concerned about it until after September 11, 2001, with the attack on the \nWorld Trade Center. Ever since, there has been a high degree of awareness, thanks \nto the Department of Homeland Security. We now realize that with globalization, \nwe live in a networked world and that there is a growing dependence on computer \nnetworks. Our critical national infrastructure and large financial and business sys­\ntems are interconnected and interdependent on each other. 
Targeting any point in the \nnational network infrastructure may result in serious disruption of the working of \nthese systems and may lead to a national disaster. The potential for electronic warfare \nis real and national defense, financial, transportation, water, and power grid systems \nare susceptible to an electronic attack unless and until the nation is prepared for it.\nPolitical and Military Espionage: The growth of the global network of com­\nputers, with the dependence and intertwining of both commercial and defense-\nrelated business information systems, is creating fertile ground for both political \nand military espionage. Cyberspace is making the collection, evaluation, analysis, \nintegration, and interpretation of information from around the global easy and fast. \nModern espionage focuses on military, policy, and decision-making information. For \nexample, military superiority cannot be attained only with advanced and powerful \nweaponry unless one controls the information that brings about the interaction and \ncoordination between the central control, ships and aircrafts that launch the weapon, \nand the guidance system on the weapon. Military information to run these kinds \nof weapons is as important as the weapons themselves. So, having such advanced \nweaponry comes with a heavy price of safeguarding the information on the devel­\nopment and working of such systems. Nations are investing heavily in acquiring \nmilitary secrets for such weaponry and governments’ policies issues. The increase \nin both political and military espionage has led to a boom in counterintelligence in \nwhich nations and private businesses are paying to train people that will counter the \nflow of information to the highest bidder.\n" }, { "page_number": 140, "text": "Business Espionage: One of the effects of globalization and the interdependence \nof financial, marketing, and global commerce has been the rise in the efforts to steal \nand market business, commerce, and marketing information. As businesses become \nglobal and world markets become one global bazaar, the market place for business \nideas and market strategies is becoming very highly competitive and intense. This \nhigh competition and the expense involved have led to an easier way out: busi­\nness espionage. In fact, business information espionage is one of the most lucrative \ncareers today. Cyber sleuths are targeting employees using a variety of techniques, \nincluding system break-ins, social engineering, sniffing, electronic surveillance of \ncompany executive electronic communications, and company employee chat rooms \nfor information. Many companies now boast competitive or business intelligence \nunits, sometimes disguised as marketing intelligence or research but actually doing \nbusiness espionage. Likewise, business counterintelligence is also on the rise.\nHatred: The Internet communication medium is a paradox. It is the medium that \nhas brought nations and races together. Yet it is the same medium that is being used \nto separate nations and races through hatred. The global communication networks \nhave given a new medium to homegrown cottage industry of hate that used only to \ncirculate through fliers and words of mouth. These hate groups have embraced the \nInternet and have gone global. 
Hackers who hate others based on a string of human \nattributes that may include national origin, gender, race, or mundane ones such as \nthe manner of speech one uses can target carefully selected systems where the vic­\ntim is and carry out attacks of vengeance often rooted in ignorance.\nPersonal Gain/Fame/Fun/Notoriety: Serious hackers are usually profiled as \nreclusive. Sometimes, the need to get out of this isolation and to look and be normal \nand fit in drives them to try and accomplish feats that will bring them that sought \nafter fame and notoriety, especially within their hacker communities. However, \nsuch fame and notoriety is often gained through feats of accomplishments of some \nchallenging tasks. Such a task may be and quite often does involve breaking into a \nrevered system.\nIgnorance: Although they are profiled as super-intelligent with a great love for \ncomputers, they still fall victim to what many people fall victims to – ignorance. \nThey make decisions with no or little information. They target the wrong system \nand the wrong person. At times also such acts usually occur as a result of individuals \nauthorized or not, but ignorant of the workings of the system stumbling upon weak­\nnesses or performing forbidden acts that result in system resource modification or \ndestruction.\n5.3.4  Hacking Topologies\nWe pointed out earlier, hackers are often computer enthusiasts with a very good \nunderstanding of the working of computers and computer networks. They use this \nknowledge to plan their system attacks. Seasoned hackers plan their attacks well in \nadvance and their attacks do not affect unmarked members of the system. To get to \n5.3  Hackers\b\n121\n" }, { "page_number": 141, "text": "122\b\n5  Cyber Crimes and Hackers\nthis kind of precision, they usually use specific attack patterns of topologies. Using \nthese topologies, hackers can select to target one victim among a sea of network \nhosts, a subnet of a LAN, or a global network. The attack pattern, the topology, is \naffected by the following factors and network configuration:\nEquipment availability\n• \n – This is more important if the victim is just one host. \nThe underlying equipment to bring about an attack on only one host and not \naffect others must be available. Otherwise, an attack is not possible.\nInternet access availability\n• \n – Similarly, it is imperative that a selected victim \nhost or network be reachable. To be reachable, the host or subnet configuration \nmust avail options for connecting to the Internet.\nThe environment of the network\n• \n – Depending on the environment where the \nvictim host or subnet or full network is, care must be taken to isolate the target \nunit so that nothing else is affected.\nSecurity regime\n• \n – It is essential for the hacker to determine what type of defenses \nis deployed around the victim unit. If the defenses are likely to present unusual \nobstacles, then a different topology that may make the attack a little easier may \nbe selected.\nThe pattern chosen, therefore, is primarily based on the type of victim(s), motive, \nlocation, method of delivery, and a few other things. There are four of these pat­\nterns: one-to-one, one-to-many, many-to-many, and many-to-one [15].\n5.3.4.1  One-to-One\nThese hacker attacks originate from one attacker and are targeted to a known victim. \nThey are personalized attacks where the attacker knows the victim, and sometimes \nthe victim may know the attacker. 
One-to-one attacks are characterized by the fol­\nlowing motives:\nHate:\n• \n This is when the attacker causes physical, psychological, or financial \ndamage to the victim because of the victim’s race, nationality, gender, or any \nother social attributes. In most of these attacks, the victim is innocent.\nVendetta\n• \n: This is when the attacker believes he/she is the victim paying back for \na wrong committed or an opportunity denied.\nPersonal gain\n• \n: This is when the attacker is driven by personal motives, usually \nfinancial gain. Such attacks include theft of personal information from the victim, \nfor ransom, or for sale.\nJoke:\n• \n This is when the attacker, without any malicious intentions, simply wants \nto send a joke to the victim. Most times, such jokes end up degrading and/or \ndehumanizing the victim.\nBusiness espionage:\n• \n This is when the victim is usually a business competitor. \nSuch attacks involve the stealing of business data, market plans, product \nblueprints, market analyses, and other data that have financial and business \nstrategic and competitive advantages.\n" }, { "page_number": 142, "text": "5.3.4.2  One-to-Many\nThese attacks are fueled by anonymity. In most cases, the attacker does not know \nany of the victims. Moreover, in all cases, the attackers will, at least that is what they \nassume, remain anonymous to the victims. This topography has been the technique \nof choice in the last two to three years because it is one of the easiest to carry out.\nThe motives that drive attackers to use this technique are as follows:\nHate:\n• \n There is hate when the attacker may specifically select a cross section \nof a type of people he or she wants to hurt and deliver the payload to the most \nvisible location where such people have access. Examples of attacks using this \ntechnique include a number of email attacks that have been sent to colleges and \nchurches that are predominantly of one ethnic group.\nPersonal satisfaction\n• \n occurs when the hacker gets fun/satisfaction from other \npeoples’ suffering. Examples include all the recent e-mail attacks such as the \n“Love Bug,” “Killer Resume,” and “Melissa.”\nJokes/Hoaxes\n• \n are involved when the attacker is playing jokes or wants to \nintimidate people.\n5.3  Hackers\b\n123\nFig. 5.2  Shows a one-to-one topology\nAttack Computer\nVictim Computer/Server\nInternet\nFig. 5.3  Shows a one-to-many topology.\nAttack Computer\nVictim Computer/Server\nInternet\nVictim Computer\nVictim Workstation\n" }, { "page_number": 143, "text": "124\b\n5  Cyber Crimes and Hackers\n5.3.4.3  Many-to-One\nThese attacks so far have been rare, but they have recently picked up momentum \nas the DDoS attacks have once again gained favor in the hacker community. In a \nmany-to-one attack technique, the attacker starts the attack by using one host to \nspoof other hosts, the secondary victims, which are then used as the new source \nof an avalanche of attacks on a selected victim. These types of attacks need a high \ndegree of coordination and, therefore, may require advanced planning and a good \nunderstanding of the infrastructure of the network. They also require a very well \nexecuted selection process in choosing the secondary victims and then eventually \nthe final victim. 
Attacks in this category are driven by\nPersonal vendetta:\n• \n There is personal vendetta when the attacker may want to \ncreate the maximum possible effect, usually damage, to the selected victim site.\nHate\n• \n is involved when the attacker may select a site for no other reasons than \nhate and bombard it in order to bring it down or destroy it.\nTerrorism:\n• \n Attackers using this technique may also be driven by the need to \ninflict as much terror as possible. Terrorism may be related to or part of crimes \nlike drug trafficking, theft where the aim is to destroy evidence after a successful \nattack, or even political terrorism.\nAttention and fame:\n• \n In some extreme circumstances, what motivates this \ntopography may be just a need for personal attention or fame. This may be the \ncase if the targeted site is deemed to be a challenge or a hated site.\nFig. 5.4  Shows a Many-to-One Topology\nAttack Computer\nVictim Computer/Server\nInternet\nAttack Computer\nAttack Computer\n" }, { "page_number": 144, "text": "5.3.4.4  Many-to-Many\nAs in the previous topography, attacks using this topography are rare; however, \nthere has been an increase recently in reported attacks using this technique. For \nexample, in some of the recent DDoS cases, there has been a select group of sites \nchosen by the attackers as secondary victims. These are then used to bombard \nanother select group of victims. The numbers involved in each group many vary \nfrom a few to several thousands. As was the case in the previous many-to-one \ntopography, attackers using this technique need a good understanding of the net­\nwork infrastructure and a good and precise selection process to pick the second­\nary victims and eventually selecting the final pool of victims. Attacks utilizing this \ntopology are mostly driven by a number of motives including\nAttention and fame\n• \n are sought when the attacker seeks publicity resulting from \na successful attack.\nTerrorism:\n• \n Terrorism is usually driven by a desire to destroy something; this \nmay be a computer system or a site that may belong to financial institutions, \npublic safety systems, or a defense and communication infrastructure. Terrorism \nhas many faces including drug trafficking, political and financial terrorism, and \nthe usual international terrorism driven by international politics.\nFun/hoax:\n• \n This type of attack technique may also be driven by personal \ngratification in getting famous and having fun.\n5.3  Hackers\b\n125\nFig. 5.5  Shows a many-to-many topology.\nAttack Computer\nVictim Computer/Server\nInternet\nVictim Computer\nAttack Computer\nAttack Computer\nVictim Computer\n" }, { "page_number": 145, "text": "126\b\n5  Cyber Crimes and Hackers\n5.3.5  Hackers’ Tools of System Exploitation\nEarlier on, we discussed how hacking uses two types of attacking systems: DDoS \nand penetration. In the DDoS, there are a variety of ways of denying access to \nthe system resources, and we have already discussed those. Let us now look at \nthe most widely used methods in system penetration attacks. System penetration \nis the most widely used method of hacker attacks. Once in, a hacker has a wide \nvariety of choices, including viruses, worms, and sniffers [15].\n5.3.5.1  Viruses\nLet us start by giving a brief description of a computer virus and defer a more \ndetailed description of it until Chapter 14. A computer virus is a program that infects \na chosen system resource such as a file and may even spread within the system and \nbeyond. 
Hackers have used various types of viruses in the past as tools, including memory/resident viruses, error-generating viruses, program destroyers, system crushers, time-theft viruses, hardware destroyers, Trojans, time bombs, trapdoors, and hoaxes. Let us give a brief description of each and defer a more detailed study of each until Chapter 14.
Memory/Resident virus: This is one of the most insidious, fast-spreading, and damaging computer viruses, difficult to detect and extremely difficult to eradicate; hackers use it to attack the central storage part of a computer system. Once in memory, the virus is able to attack any other program or data in the system. As we will see in Chapter 14, they are of two types: transient, the category that includes viruses that are active only when the infected program is executing, and resident, a brand that attaches itself, via surrogate software, to a portion of memory and remains active long after the surrogate program has finished executing. Examples of memory-resident viruses include all boot sector viruses, such as the Israel virus [16].
Error-Generating virus: Hackers are fond of sending viruses that are difficult to discover and yet are fast moving. Such viruses are deployed in executable code. Every time the software is executed, errors are generated. The errors vary from "hard" logical errors, resulting in complete system shutdown, to simple "soft" logical errors, which may cause nothing more than momentary blips on the screen.
Data and Program Destroyers: These are serious software destroyers that attach themselves to a piece of software and then use it as a conduit or surrogate for growth, replication, and as a launch pad for later attacks on this and other programs and data. Once attached to a piece of software, they attack any data or program that the software may come in contact with, sometimes altering the contents, deleting them, or completely destroying them.
System Crushers: Hackers use system crusher viruses to completely disable the system. This can be done in a number of ways. One way is to destroy the system programs, such as the operating system, compilers, loaders, and linkers. Another approach is to self-replicate until the system is overwhelmed and crashes.
" }, { "page_number": 146, "text": "Computer Time Theft Virus: Hackers use this type of virus to steal system time, either by first becoming a legitimate user of the system or by preventing other legitimate users from using the system by first creating a number of system interruptions. This effectively puts other programs scheduled to run into indefinite wait queues. The intruder then gains the highest priority, like a superuser with full access to all system resources. With this approach, system intrusion is very difficult to detect.
Hardware Destroyers: Although not very common, these "killer viruses" are used by hackers to selectively destroy a system device by embedding the virus into device micro-instructions, or "mic," such as the BIOS and device drivers. Once embedded into the mic, they may alter it in ways that cause the devices to move into positions that normally result in physical damage.
For example, there are viruses \nthat are known to lock up keyboards, disable mice, and cause disk read/write heads \nto move to nonexisting sectors on the disk, thus causing the disk to crash.\nTrojans: These are a class of viruses that hackers hide, just as in the Greek \nTrojan Horse legend, into trusted programs such as compilers, editors, and other \ncommonly used programs.\nLogic/Time Bombs: Logic bombs are timed and commonly used type of virus to \npenetrate system, embedding themselves in the system’s software, and lying in wait \nuntil a trigger goes off. Trigger events can vary in type depending on the motive of \nthe virus. Most triggers are timed events. There are various types of these viruses \nincluding Columbus Day, Valentine’s Day, Jerusalem-D, and the Michelangelo, \nwhich was meant to activate on Michelangelo’s 517 birthday anniversary.\nTrapdoors: Probably, these are some of the most used virus tools by hackers. \nThey find their way into the system through weak points and loopholes that are \nfound through system scans. Quite often, software manufacturers, during software \ndevelopment and testing, intentionally leave trapdoors in their products, usually \nundocumented, as secret entry points into the programs so that modification can be \ndone on the programs at a later date. Trapdoors are also used by programmers as \ntesting points. As is always the case, trapdoors can also be exploited by malicious \npeople, including programmers themselves. In a trapdoor attack, an intruder may \ndeposit virus-infected data file on a system instead of actually removing, copying, \nor destroying the existing data files.\nHoaxes: Very common form of viruses, they most often do not originate from \nhackers but from system users. Though not physically harmful, hoaxes can be a \ndisturbing type of nuisance to system users.\n5.3.5.2  Worm\nA worm is very similar to a virus. In fact, their differences are few. They are both \nautomated attacks, both self-generate or replicate new copies as they spread, and \nboth can damage any resource they attack. The main difference between them, \nhowever, is that while viruses always hide in software as surrogates, worms are \nstand-alone programs.\n5.3  Hackers\b\n127\n" }, { "page_number": 147, "text": "128\b\n5  Cyber Crimes and Hackers\nHackers have been using worms as frequently as they have been using viruses to \nattack computer systems.\n5.3.5.3  Sniffer\nA sniffer is a software script that sniffs around the target system looking for pass­\nwords and other specific information that usually lead to identification of system \nexploits. Hackers use sniffers extensively for this purpose.\n5.3.6  Types of Attacks\nWhatever their motives, hackers have a variety of techniques in their arsenal to \ncarry out their goals. Let us look at some of them here.\nSocial Engineering: This involves fooling the victim for fun and profit. Social \nengineering depends on trusting that employees will fall for cheap hacker “tricks” \nsuch as calling or e-mailing them masquerading as a system administrator, for \nexample, and getting their passwords which eventually lets in the intruder. Social \nengineering is very hard to protect against. The only way to prevent it is through \nemployee education and employee awareness.\nImpersonation is stealing access rights of authorized users. There are many \nways an attacker such as a hacker can impersonate a legitimate user. 
For example, \na hacker can capture a user telnet session using a network sniffer such as tcpdump \nor nitsniff. The hacker can then later login as a legitimate user with the stolen login \naccess rights of the victim.\nExploits: This involves exploiting a hole in software or operating systems. As is \nusually the case, many software products are brought on the market either through \na rush to finish or lack of testing, with gaping loopholes. Badly written software is \nvery common even in large software projects such as operating systems. Hackers \nquite often scan network hosts for exploits and use them to enter systems.\nTransitive Trust exploits host-to-host or network-to-network trust. Either \nthrough client-server three-way handshake or server-to-server next-hop relation­\nships, there is always a trust relationship between two network hosts during any \ntransmission. This trust relationship is quite often compromised by hackers in a \nvariety of ways. For example, an attacker can easily do an IP-spoof or a sequence \nnumber attack between two transmitting elements and gets away with information \nthat compromises the security of the two communicating elements.\nData Attacks: Script programming has not only brought new dynamism into \nWeb development, but it has also brought a danger of hostile code into systems \nthrough scripts. Current scripts can run on both the server, where they tradition­\nally used to run, and also on the client. In doing so, scripts can allow an intruder to \ndeposit hostile code into the system, including Trojans, worms, or viruses. We will \ndiscuss scripts in detail in the next chapter.\n" }, { "page_number": 148, "text": "Infrastructure Weaknesses: Some of the greatest network infrastructure \nweaknesses are found in the communication protocols. Many hackers, by virtue of \ntheir knowledge of the network infrastructure, take advantage of these loopholes \nand use them as gateways to attack systems. Many times, whenever a loophole \nis found in the protocols, patches are soon made available but not many system \nadministrators follow through with patching the security holes. Hackers start by \nscanning systems to find those unpatched holes. In fact, most of the system attacks \nfrom hackers use known vulnerabilities that should have been patched.\nDenial of Service: This is a favorite attack technique for many hackers, especially \nhacktivists. It consists of preventing the system from being used as planned through \noverwhelming the servers with traffic. The victim server is selected and then \nbombarded with packets with spoofed IP addresses. Many times, innocent hosts are \nforced to take part in the bombardment of the victim to increase the traffic on the \nvictim until the victim is overwhelmed and eventually fails.\nActive Wiretap: In an active wiretap, messages are intercepted during \ntransmission. When the interception happens, two things may take place: First, \nthe data in the intercepted package may be compromised by introduction of new \ndata such as change of source or destination IP address or the change in the packet \nsequence numbers. Secondly, data may not be changed but copied to be used later \nsuch as in the scanning and sniffing of packets. In either case, the confidentiality of \ndata is compromised and the security of the network is put at risk.\n5.4  Dealing with the Rising Tide of Cyber Crimes\nMost system attacks take place before even experienced security experts have \nadvance knowledge of them. 
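To make concrete what acquiring such advance knowledge involves in practice, consider the kind of round-the-clock monitoring that detection (Section 5.4.2 below) relies on. The following sketch, written in Python purely for illustration and not drawn from any particular product, watches an authentication log and raises an alert when one source address accumulates too many failed logins within a short window; the log format, file name, window, and threshold are assumptions made for the example.

#!/usr/bin/env python3
"""Minimal failed-login monitor (illustrative sketch only).

The log format, window, and threshold below are assumptions made for
the example, not taken from any particular system.
"""
import re
import sys
from collections import defaultdict, deque
from datetime import datetime, timedelta

# A simplified log line looks like:
# 2009-03-01 12:00:05 sshd: Failed password for admin from 10.0.0.7
LINE = re.compile(r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) .*"
                  r"Failed password .* from (?P<src>\S+)")

WINDOW = timedelta(minutes=5)   # how far back to look
THRESHOLD = 5                   # failures from one source that trigger an alert


def monitor(lines):
    """Yield an alert whenever a source exceeds THRESHOLD failures in WINDOW."""
    recent = defaultdict(deque)          # source address -> recent failure times
    for line in lines:
        m = LINE.match(line)
        if not m:
            continue                     # not a failed-login record
        ts = datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S")
        times = recent[m.group("src")]
        times.append(ts)
        while times and ts - times[0] > WINDOW:
            times.popleft()              # drop failures outside the window
        if len(times) >= THRESHOLD:
            yield (f"ALERT: {len(times)} failed logins from "
                   f"{m.group('src')} since {times[0]}")


if __name__ == "__main__":
    # Read the log from a file named on the command line, or from stdin.
    log = open(sys.argv[1]) if len(sys.argv) > 1 else sys.stdin
    for alert in monitor(log):
        print(alert)

A production detection system would of course watch many more signals, network traffic patterns, file integrity, and known attack signatures among them, but the capture, analyze, and report loop is the same.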
Most of the security solutions are best practices, as we have seen so far, and we will continue to discuss them as either preventive or reactive. An effective plan must consist of three components: prevention, detection, and analysis and response.
5.4.1  Prevention
Prevention is probably the best system security policy, but only if we know what to prevent the systems from. It has been, and continues to be, an uphill battle for the security community to predict what type of attack will occur the next time around. Although prevention is the best approach to system security, the future of system security cannot and should not rely on the guesses of a few security people, who have been and will continue to be wrong sometimes. One of the few bright spots in the protection of systems through prevention has been the fact that most attack signatures are repeat signatures. Although it is difficult, and we are constantly chasing hackers who are always ahead of us, we still need to do something. Among the possible approaches are the following:
" }, { "page_number": 149, "text": "• A security policy
• Risk management
• Perimeter security
• Encryption
• Legislation
• Self-regulation
• Mass education
We will discuss all of these in detail in the chapters that follow.
5.4.2  Detection
In case prevention fails, the next best strategy is early detection. Detecting cyber crimes early requires a 24-hour monitoring system that alerts security personnel whenever something unusual (something with a non-normal pattern, different from the usual pattern of traffic in and around the system) occurs. Detection systems must continuously capture, analyze, and report on the daily happenings in and around the network. In capturing, analyzing, and reporting, several techniques are used, including intrusion detection, vulnerability scanning, virus detection, and other ad hoc methods. We will look at these in the coming chapters.
5.4.3  Recovery
Whether or not prevention or detection solutions were deployed on the system, if a security incident has occurred, a recovery plan, as spelled out in the security plan, must be followed.
5.5  Conclusion
Dealing with rising cyber crimes in general, and hacker activities in particular, in this fast-moving computer communication revolution in which everyone is likely to be affected, is a major challenge not only for the people in the security community but for all of us. We must devise means that will stop the growth, stop the spiral, and protect the systems from attacks. Our work is already cut out for us, and it will be tough, because we are chasing an enemy who seems, on many occasions, to know more than we do and is constantly ahead of us.
Preventing cyber crimes requires an enormous amount of effort and planning. The goal is to have advance information before an attack occurs. However, the challenge is to get this advance information. Also, getting this information in advance does not help very much unless we can quickly analyze it and plan an appropriate response
" }, { "page_number": 150, "text": "in time to prevent the systems from being adversely affected.
In real life, however, \nthere is no such thing as the luxury of advance information before an attack.\nExercises\n1.\t Define the following terms:\n(i)\t Hacker\n(ii)\t Hacktivist\n(iii)\tCracker\n  2.\t Why is hacking a big threat to system security?\n  3.\t What is the best way to deal with hacking?\n  4.\t Discuss the politics of dealing with hacktivism.\n  5.\t Following the history of hacking, can you say that hacking is getting under \ncontrol? Why or why not?\n  6.\t What kind of legislation can be effective to prevent hacking?\n  7.\t List and discuss the types of hacker crimes.\n  8.\t Discuss the major sources of computer crimes.\n  9.\t Why is crime reporting so low in major industries?\n10.\t Insider abuse is a major crime category. Discuss ways to solve it.\nAdvanced Exercises\n1.\t Devise a plan to compute the cost of computer crime.\n2.\t What major crimes would you include in the preceding study?\n3.\t From your study, identify the most expensive attacks.\n4.\t Devise techniques to study the problem of non-reporting. Estimate the costs \nassociated with it.\n5.\t Study the reporting patterns of computer crimes reporting by industry. Which \nindustry reports best?\nReferences\n1.\t Cybercrime threat “real and growing” http://news.bbc.co.uk/2/hi/science/nature/978163.stm\n2.\t Glossary of Vulnerability Testing Terminology http://www.ee.oulu.fi/research/ouspg/sage/\nglossary/\n3.\t Anatomy of an attack The Economist, February 19–25, 2000\n4.\t Joseph M. Kizza. Social and Ethical Issues in the Information Age. 2nd edition. New York: \nSpringer, 2003.\nReferences\b\n131\n" }, { "page_number": 151, "text": "132\b\n5  Cyber Crimes and Hackers\n  5.\t Louis J. Freeh. FBI Congressional Report on Cybercrime http://www.fbi.gov/congress00/\ncyber021600.htm.\n  6.\t Michelle Slatalla. A brief History of Hacking http://tlc.discovery.com/convergence/hackers/\narticles/history.html\n  7.\t Phone Phreaking: The Telecommunications Underground http://telephonetribute.com/phone­\nphreaking.html.\n  8.\t “Timeline of Hacking” http://fyi.cnn.com/fyi/interactive/school.tools/timelines/1999/com­\nputer.hacking/frameset.exclude.html\n  9.\t Ron Rosenbaum. Secrets of the Little Blue Box http://www.webcrunchers.com/crunch/esq-\nart.html.\n10.\t The Complete History of Hacking http://www.wbglinks.net/pages/history/\n11.\t Peter J. Denning. Computers Under Attack: Intruders, Worms and Viruses. New York: ACM \nPress, 1990.\n12.\t Denning, Dorothy. “Activism, Hacktivism, and Cyberterrorisim: The Internet as a Tool or \nInfluencing Foreign Policy http://www.nautilus.og/info-policy/workshop/papers/denning.\nhtml.\n13.\t Lemos, Robert. Online vandals smoke New York Times site CNET News.com. http://news.\ncom.com/2009–1001–252754.html.\n14.\t Shachtman, Noah. Hacktivists Stage Virtual Sit-In at WEF Web site AlterNet. http://www.\nalternet.org/story.html?StoryID = 12374.\n15.\t Joseph M. Kizza. Computer Network Security and Cyber Ethics. North Calorina. McFarland, \n2001.\n16.\t Karen Forchet. Computer Security Management. Boyd & Frasher Publishing, 1994.\n" }, { "page_number": 152, "text": "J.M. Kizza, A Guide to Computer Network Security, Computer Communications and \n­Networks, DOI 10.1007/978-1-84800-917-2_6, © Springer-Verlag London Limited 2009\n\b\n133\nChapter 6\nHostile Scripts\n6.1  Introduction\nThe rapid growth of the Internet and its ability to offer services have made it the \nfastest growing medium of communication today. 
Today's and tomorrow's business transactions involving financial data; product development and marketing; storage of sensitive company information; and the creation, dissemination, sharing, and storing of information are, and will continue to be, made online, most specifically on the Web. The automation and dynamic growth of an interactive Web have created a huge demand for a new type of Web programming to meet the growing demand for millions of Web services from users around the world. Some services and requests are tedious and others are complex, yet the rate of growth in the number of requests, the amount of services requested in terms of bandwidth, and the quality of information requested warrant a technology to automate the process. Script technology came to the rescue just in time. Scripting is a powerful automation technology on the Internet that makes the Web highly interactive.
Scripting technology makes the Web interactive and automated as Web servers accept inputs from users and respond to those inputs. While scripting makes the Internet, and in particular the Web, alive and productive, it also introduces a huge security problem into an already security-burdened cyberspace. Hostile scripts embedded in Web pages, as well as in HTML-formatted e-mail, attachments, and applets, introduce a new security paradigm in cyberspace security. In particular, security problems are introduced in two areas: at the server and at the client. Before we look at security at both of these points, let us first understand the scripting standard.
6.2  Introduction to the Common Gateway Interface (CGI)
The Common Gateway Interface, or CGI, is a standard that specifies a data format that servers, browsers, and programs must use in order to exchange information. A program written in any language that uses this standard to exchange data between a Web server and a client's browser is a CGI script. In other words, a CGI script is
" }, { "page_number": 153, "text": "an external gateway program that interfaces with information servers such as HTTP or Web servers and client browsers. CGI scripts are great in that they allow Web servers to be dynamic and interactive with the client browser, as the server receives and accepts user inputs and responds to them in a measured and relevant way to satisfy the user. Without CGI, the information users get from an information server would be packaged not according to the request but according to how it is stored on the server.
CGI programs are of two types: those written in programming languages such as C/C++ and Fortran, which can be compiled to produce an executable module stored on the server, and scripts written in scripting languages such as Perl, Java, and Unix shell. For these CGI scripts, no associated source code needs to be stored by the information server, as is the case for CGI programs. CGI scripts written in scripting languages are not compiled like those in nonscripting languages. Instead, they are text code that is interpreted by the interpreter on the information server or in the browser and run right away. The advantage of this is that you can copy your script with little or no changes to any machine with the same interpreter and it will run.
In addition, the scripts are easier to debug, modify, and maintain than a typical compiled program.
Both CGI programs and CGI scripts, when executed at the information server, help organize information for both the server and the client. For example, the server may want to retrieve information entered by visitors and use it to package a suitable output for the clients. In addition, CGI may be used to dynamically set field descriptions on a form and, in real time, inform the user of what data has been entered and what is yet to be entered. At the end, the form may even be returned to the user for proofreading before it is submitted.
CGI scripts go beyond dynamic form filling to automating a broad range of services in search engines and directories and taking on mundane jobs such as making downloads available, granting access rights to users, and confirming orders.
As we pointed out earlier, CGI scripts can be written in any programming language that an information server can execute. These languages include scripting languages such as Perl, JavaScript, TCL, AppleScript, Unix shell, and VBScript, and nonscripting languages such as C/C++, Fortran, and Visual Basic. There is dynamism in the languages themselves, so we may have new languages in the near future.
6.3  CGI Scripts in a Three-Way Handshake
As we discussed in Chapter 3, the communication between a server and a client opens with the same etiquette we use when we meet a stranger. First, a trust relationship must be established before any requests are made. This can be done in a number of ways. Some people start with a formal "Hello, I'm . . . ," then, "I need . . . ," upon which the stranger says "Hello, I'm . . . ," then, "Sure I can. . . ." Others carry it further to hugs, kisses, and all the other ways people use to break the ice. If the stranger is ready for a request, then this information is passed back to you in the form of an acknowledgment of your first embrace. However, if the stranger is not ready to talk
" }, { "page_number": 154, "text": "to you, there is usually no acknowledgment of your initial advances, and no further communication may follow until the stranger's acknowledgment comes through. When it does, the stranger puts out a welcome mat and leaves the door open for you to come in and start business. Now, it is up to the initiator of the communication to start full communication.
When computers are communicating, they follow these etiquette patterns and protocols, and we call this procedure a handshake. In fact, for computers it is called a three-way handshake. A three-way handshake starts with the client sending a packet, called a SYN (short for synchronization), which contains both the client and server addresses together with some initial information for introductions. Upon receipt of this packet by the server's welcoming open door, called a port, the server creates a communication socket with the port number the client requested, through which future communication with the client will go. After creating the communication socket, the server puts the socket in a queue and informs the client by sending an acknowledgment called a SYN-ACK. The server's communication socket will remain open and in the queue, waiting for an ACK from the client and data packets thereafter. The socket remains half-open until the client responds with the final ACK packet, signaling full communication.
During this time, however, the server can welcome many more clients that want to communicate, and communication sockets will be opened for each.
The CGI script resides on the server side, and it receives the client's SYN request for a service. The script then executes and lets the server and client start to communicate directly. In this position, the script is able to dynamically receive and pass data between the client and server. The client browser has no idea that the server is executing a script. When the server receives the script's output, it adds the necessary protocol data and sends the packet or packets back to the client's browser. Figure 6.1 shows the position of a CGI script in a three-way handshake.
Fig. 6.1  The Position of a CGI Script in a Three-Way Handshake (the client's SYN is received on the server's welcome port, the CGI script runs, a communication port is created, and after the SYN-ACK and ACK the connection is established)
" }, { "page_number": 155, "text": "CGI scripts reside on the server side, in fact on the computer on which the server runs, because a user on a client machine cannot execute the script in his or her browser; the user can view only the output of the script, which executes on the server and transmits its output to the browser on the client machine the user is on.
6.4  Server – CGI Interface
In the previous section, we stated that the CGI script is on the server side of the relationship between the client and the server. The scripts are stored on the server and are executed by the server to respond to client demands. There is, therefore, an interface that separates the server and the script. This interface, as shown in Fig. 6.2, consists of information supplied by the server to the script, which includes input variables extracted from the HTTP header sent by the client, and of information passed from the script back to the server. Information flowing between the server and the script in either direction is passed through environment variables and through script command lines. Command-line inputs instruct a script to do certain tasks such as search and query.
Fig. 6.2  A Client CGI Script Interface: (1) the client's browser establishes an HTTP/S connection with the Web server; (2) the Web server executes a resident CGI script; (3) the CGI script, using resources on the server (e-mail, database, etc.), creates an HTML page; (4) the Web server forwards the HTML page to the client's browser
" }, { "page_number": 156, "text": "6.5  CGI Script Security Issues
To an information server, the CGI script is like an open window to a private house through which passers-by can enter the house to request services. It is an open gateway that allows anyone anywhere to run an executable program on your server, and even to send their own programs to run on your server. An open window like this on a server is not the safest thing to have, and security issues are involved. But since CGI scripting is one of the fastest growing components of the Internet, it is a problem we have to contend with and meet head on.
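Before cataloging the specific problems, it helps to see what such an open window looks like in practice. The short sketch below is our own illustration, written in Python rather than the Perl or PHP discussed later in this chapter; the directory name and the "file" form field are invented for the example. It shows a CGI script of the kind a Web server would run from its script directory: the server hands the request to the script through environment variables, and the script writes an HTTP header and an HTML page to standard output, which the server relays to the browser (steps 2 to 4 of Fig. 6.2). Written naively, the commented-out line would hand a visitor any file on the server they cared to name.

#!/usr/bin/env python3
"""A minimal CGI script, and the kind of hole a careless one opens.

Illustrative sketch: the directory, file names, and the "file" form field
are invented for the example.
"""
import html
import os
from pathlib import Path
from urllib.parse import parse_qs

DOCUMENT_DIR = Path("/var/www/notes")       # assumed location of shareable files

# The Web server passes the request to the script through environment
# variables such as REQUEST_METHOD, QUERY_STRING, and REMOTE_ADDR.
fields = parse_qs(os.environ.get("QUERY_STRING", ""))
requested = fields.get("file", [""])[0]     # e.g. ?file=todo.txt

# The script answers by writing an HTTP header, a blank line, and a body
# to standard output, which the server relays to the client's browser.
print("Content-Type: text/html")
print()

# UNSAFE version (left commented out): it trusts the client-supplied name
# completely, so a request such as ?file=../../etc/passwd would walk right
# out of DOCUMENT_DIR, exactly the "open window" described above.
#
#     body = (DOCUMENT_DIR / requested).read_text()

# Safer version: resolve the path and refuse anything outside DOCUMENT_DIR.
target = (DOCUMENT_DIR / requested).resolve()
if DOCUMENT_DIR.resolve() in target.parents and target.is_file():
    body = target.read_text()
else:
    body = "No such document."

print("<html><body><pre>")
print(html.escape(body))                    # escape before echoing to the browser
print("</pre></body></html>")

The unsafe variant differs by only one line, which is why the problems listed next are so common.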
CGI scripts present security problems to cyberspace in several ways, including:
• Program development: During program development, CGI scripts are either written in a high-level programming language and compiled before being executed, or written in a scripting language and interpreted before they are executed. Either way, programming is more difficult than composing documents with HTML, for example. Because of the programming complexity, and owing to a lack of program development discipline, errors introduced into the program are difficult to find, especially in noncompiled scripts.
• Transient nature of execution: When CGI scripts come into the server, they run as processes separate from that of the host server. Although this is good because it isolates the server from most script errors, the imported scripts may introduce hostile code into the server.
• Cross-pollination: Hostile code introduced into the server by a transient script can propagate into other server applications and can even be re-transmitted to other servers by a script bouncing off this server or originating from it.
• Resource-guzzling: Scripts that are resource intensive can cause a security problem for a server with limited resources.
• Remote execution: Since servers can send CGI scripts to execute on surrogate servers, both the sending and the receiving servers are left open to hostile code transmitted by the script.
In all these situations, a security threat occurs when someone breaks into a script. Broken scripts are extremely dangerous.
Kris Jamsa gives the following security threats that can happen to a broken script (2):
• Giving an attacker access to the system's password file for decryption.
• Mailing a map of the system, which gives the attacker more time offline to analyze the system's vulnerabilities.
• Starting a login server on a high port and telneting in.
• Beginning a distributed denial-of-service attack against the server.
• Erasing or altering the server's log files.
In addition to these, the following security threats are also possible (3):
• Malicious code provided by one client for another client: This can happen, for example, at sites that host discussion groups, where one client can embed
" }, { "page_number": 157, "text": "malicious HTML tags in a message intended for another client. According to the Computer Emergency Response Team (CERT), an attacker might post a message like
Hello message board. This is a message.
<SCRIPT>malicious code</SCRIPT>
This is the end of my message.
When a victim with scripts enabled in his or her browser reads this message, the malicious code may be executed unexpectedly. Many different scripting tags can be embedded in this way, including <SCRIPT>, <OBJECT>, <APPLET>, and <EMBED>. Malicious code can also be embedded in a link that a client is tricked into following, such as
<A HREF="http://example.com/comment.cgi?mycomment=<SCRIPT>malicious code</SCRIPT>"> Click here</A>
When an unsuspecting user clicks on this link, the URL sent to example.com includes the malicious code. If the Web server sends a page back to the user that includes the value of mycomment, the malicious code may be executed unexpectedly on the client.
All these security threats point to one security problem with scripts: they all let in unsecured data.
6.6  Web Script Security Issues
Our discussion of script security issues above has centered on CGI scripts stored and executed on the server.
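Whether a script runs on the server or, as this section will show, in the browser, the first line of defense against the unsecured data problem just described is to accept only what the script expects. The sketch below is a minimal illustration of such allow-list checking, written in Python; the field names and patterns are invented for the example and are not taken from any standard set of rules.

#!/usr/bin/env python3
"""Allow-list validation of client-supplied fields (illustrative sketch)."""
import re

# Only these fields are accepted, and each must match its pattern exactly.
ALLOWED = {
    "name":    re.compile(r"^[A-Za-z][A-Za-z '\-]{0,49}$"),
    "email":   re.compile(r"^[^@\s]{1,64}@[^@\s]{1,255}$"),
    "comment": re.compile(r"^[\w .,!?'\-]{1,500}$"),
}


class RejectedInput(ValueError):
    """Raised when a request contains unexpected or malformed data."""


def validate(fields):
    """Return a cleaned copy of *fields*, or raise RejectedInput."""
    clean = {}
    for key, values in fields.items():
        if key not in ALLOWED:
            raise RejectedInput(f"unexpected field: {key!r}")
        value = values[0] if isinstance(values, list) else values
        if not ALLOWED[key].fullmatch(value):
            raise RejectedInput(f"malformed value for {key!r}")
        clean[key] = value
    return clean


if __name__ == "__main__":
    ok = {"name": ["Alice"], "comment": ["Nice article!"]}
    bad = {"comment": ["<SCRIPT>malicious code</SCRIPT>"]}
    print(validate(ok))            # accepted
    try:
        validate(bad)              # the embedded tag fails the allow-list
    except RejectedInput as err:
        print("rejected:", err)

Escaping data on the way out, as in the earlier sketch, complements this checking on the way in; together they close most of the holes described above.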
However, as the automation of the Web goes into over­\ndrive, there are now thousands of Web scripts doing a variety of web services from \nform filling to information gathering. Most of these scripts either transient or reside \non Web servers. Because of their popularity and widespread use, most client and \nserver Web browsers today have the capability to interpret scripts embedded in Web \npages downloaded from a Web server. Such scripts may be written in a variety of \nscripting languages. In addition, most browsers are installed with the capability to \nrun scripts enabled by default.\n" }, { "page_number": 158, "text": "6.7  Dealing with the Script Security Problems\nThe love of Web automation is not likely to change soon and the future of a dynamic \nWeb is here to stay. In addition, more and more programs written for the Web are \ninteracting with networked clients and servers, raising the fear of a possibility that \nclients and servers may be attacked by these programs using embedded scripts to \ngain unauthorized access.\nIt is, therefore, necessary to be aware of the following:\nScript command line statements: Scripting languages such as PERL, PHP, and \n• \nthe Bourne shell pass information needed to perform tasks through command \nline statements which are then executed by an interpreter. This can be very \ndangerous.\nClients may use special characters in input strings to confuse other clients, \n• \nservers, or scripts.\nProblems with server side include user-created documents in NCSA HTTPd that \n• \nprovide simple information, such as current date, the file’s last modification date, \nand the size or last modification of other files, to clients on the fly. Sometimes this \ninformation can provide a powerful interface to CGI. In an unfortunate situation, \nserver-side scripts are a security risk because they let clients execute dangerous \ncommands on the server.\nWe summarize the three concerns above in two good solutions: one is to use only \nthe data from a CGI, only if it will not harm the system; and the second is to check \nall data into or out of the script to make sure that it is safe.\n6.8  Scripting Languages\nCGI scripts can be written in any programming language. Because of the need for \nquick execution of the scripts both at the server and in the client browsers and the \nneed of not storing source code at the server, it is getting more and more convenient \nto use scripting languages that are interpretable instead of languages that are com­\npiled like C and C++. The advantages of using interpretable scripting languages, as \nwe discussed earlier, are many: see Section 6.2. There are basically two categories \nof scripting languages, those whose scripts are on the server side of the client–\nserver programming model and those whose scripts are on the client side.\n6.8.1  Server-Side Scripting Languages\nEver since the debut of the World Wide Web and the development of HTML to \nspecify and present information, there has been a realization that HTML documents \nare too static.\n6.8  Scripting Languages\b\n139\n" }, { "page_number": 159, "text": "140\b\n6  Hostile Scripts\nThere was a need to put dynamism into HTTP so that the interaction between \nthe client and the server would become dynamic. This problem was easy to solve \nbecause the hardware on which Web server software runs has processing power and \nmany applications such as e-mail, database manipulation, or calendaring already \ninstalled and ripe for utilization [1]. 
The CGI concept was born.\nAmong the many sever-side scripting languages are ERL, PHP, ColdFusion, \nASP, MySQL, Java servlets, and MivaScript.\n6.8.1.1  Perl Scripts\nPractical Extraction and Report Language (Perl) is an interpretable programming \nlanguage that is also portable. It is used extensively in Web programming to make \ntext processing interactive and dynamic. Developed in 1986 by Larry Wall, the \nlanguage has become very popular. Although it is an interpreted language, unlike C \nand C++, Perl has many features and basic constructs and variables similar to C and \nC++. However, unlike C and C++, Perl has no pointers and defined data types.\nOne of the security advantages of Perl over C, say, is that Perl does not use point­\ners where a programmer can misuse a pointer and access unauthorized data. Perl \nalso introduces a gateway into Internet programming between the client and the \nserver. This gateway is a security gatekeeper scrutinizing all incoming data into the \nserver to prevent malicious code and data into the server. Perl does this by denying \nprograms written in Perl from writing to a variable, whereby this variable can cor­\nrupt other variables.\nPerl also has a version called Taintperl that always checks data dependencies to \nprevent system commands from allowing untrusted data or code into the server.\n6.8.1.2  PHP\nPHP (Hypertext Preprocessor) is a widely used general-purpose scripting language \nthat is especially suited for Web development and can be embedded into HTML. It is \nan open source language suited for Web development, and this makes it very popular.\nJust like Perl, PHP code is executed on the server and the client just receives the \nresults of running a PHP script on the server. With PHP, you can do just about any­\nthing other CGI program can do, such as collect form data, generate dynamic page \ncontent, or send and receive cookies.\n6.8.1.3  Server-Side Script Security Issues\nA server-side script, whether compiled or interpreted, and its interpreter is included \nin a Web server as a module or executed as a separate CGI binary. It can access \nfiles, execute commands, and open network connections on the server. These \ncapabilities make server-side scripts a security threat because they make anything \n" }, { "page_number": 160, "text": "run on the Web server unsecure by default. PHP is no exception to this problem; \nit is just like Perl and C. For example, PHP, like other server-side scripts, was \ndesigned to allow user-level access to the file system, but it is entirely possible \nthat a PHP script can allow a user to read system files such as /etc/passwd which \ngives the user access to all passwords and the ability to modify network connec­\ntions and change device entries in /dev/ or COM1, configuration files /etc/ files, \nand .ini files.\nSince databases have become an integral part of daily computing in a networked \nworld and large databases are stored on network servers, they become easy prey to \nhostile code. To retrieve or to store any information in a database, nowadays you \nneed to connect to the database, send a legitimate query, fetch the result, and close \nthe connection all using a query language, the Structured Query Language (SQL). 
An attacker can create or alter SQL commands to inject hostile data and code, to override valuable data, or even to execute dangerous system-level commands on the database host.
6.8.2  Client-Side Scripting Languages
The World Wide Web (WWW) created the romance of representing portable information from a wide range of applications for a global audience. This was accomplished by the HyperText Markup Language (HTML), a simple markup language that revolutionized document representation on the Internet. But for a while, HTML documents were static. Scripting specifications were developed to extend HTML and make it more dynamic and interactive. Client-side scripting of HTML documents, and of objects embedded within HTML documents, was developed to bring dynamism and automation to user documents. Scripts written in languages such as JavaScript and VBScript are being used widely to automate client processes.
For a long time during the development of CGI programming, programmers noticed that much of what CGI does, such as maintaining state, filling out forms, error checking, or performing numeric calculations, can be handled on the client's side. Quite often, the client computer has quite a bit of CPU power sitting idle while the server is being bombarded with hundreds or thousands of CGI requests for the mundane jobs above. Programmers saw it as justifiable to shift the burden to the client, and this led to the birth of client-side scripting.
Among the many client-side scripting technologies are DHTML/CSS, Java, JavaScript, and VBScript.
6.8.2.1  JavaScript
JavaScript is a programming language used for client-side scripting, making Web pages more interactive. Client-side scripting means that the code runs on the user's computer, not on the server side. JavaScript was developed at Netscape (the name came from an alliance with Sun Microsystems, the developer of Java) to bridge the gap between Web designers who needed a dynamic and interactive
" }, { "page_number": 161, "text": "Web environment and Java programmers. It is an interpreted language like Perl. That means the interpreter in the browser is all that is needed for a JavaScript to be executed by the client, and it will run. JavaScript's ability to run scripts in the client's browser lets the client run interactive Web scripts that do not need a server. This feature makes creating JavaScript scripts easy because they are simply embedded into any HTML code. JavaScript has, therefore, become the de facto standard for enhancing and adding functionality to Web pages.
This convenience, however, creates a security threat: because a browser can execute a JavaScript at any time, hostile code can be injected into a script and the browser will run it on any client. This problem can be fixed only if browsers limit an executing script to a restricted set of commands. In addition, scripts run from a browser can introduce into the client system programming errors in the coding of the script itself, which may lead to a security threat in the system itself.
6.8.2.2  VBScript
Based in part on the popularity of the Visual Basic programming language, and on the need to have a scripting language to counter JavaScript, Microsoft developed VBScript (the V and B are for Visual Basic). VBScript has a syntax similar to that of the Visual Basic programming language.
Since VBScript is based on Microsoft \nVisual Basic, and unlike JavaScript which can run in many browsers, VBScript \ninterpreter is supported only in the Microsoft Internet Explorer.\n6.8.2.3  Security Issues in JavaScript and VBScript\nRecall that using all client-side scripts like JavaScript and VBScript that execute \nin the browser can compromise the security of the user system. These scripts cre­\nate hidden frames on Web sites so that as a user navigates a Web site, the scripts \nrunning in the browser can store information from the user for short-time use, just \nlike a cookie. The hidden frame is an area of the Web page that is invisible to the \nuser but remains in place for the script to use. Data stored in these hidden frames \ncan be used by multiple Web pages during the user session or later. Also, when a \nuser visits a Web site, the user may not be aware that there are scripts executing at \nthe Web site. Hackers can use these loopholes to threaten the security of the user \nsystem.\nThere are several ways of dealing with these problems including\nLimit browser functions and operations of the browser scripts so that the script, \n• \nfor example, cannot write on or read from the user’s disk.\nMake it difficult for others to read the scripts.\n• \nPut the script in an external file and reference the file only from the document \n• \nthat uses it.\n" }, { "page_number": 162, "text": "Exercises\n1.\t How did CGI revolutionize Web programming?\n2.\t What are the differences between client-side and server-side scripting? Is one \nbetter than the other?\n3.\t In terms of security, is client-side scripting better than server-side scripting? \nWhy or why not?\n4.\t Suggest ways to improve script security threats.\n5.\t Why was VBScript not very popular?\n6.\t The biggest script security threat has always been the acceptance of untrusted \ndata. What is the best way for scripts to accept data and preserve the trust?\nAdvance Exercises\n1.\t The most common CGI function is to fill in forms, the processing script actually \ntakes the data input by the Web surfer and sends it as e-mail to the form admin­\nistrator. Discuss the different ways such a process can fall victim to an attacker.\n2.\t CGI is also used in discussions allowing users to talk to the customer and back. \nCGI helps in creating an ongoing dialog between multiple clients. Discuss the \nsecurity implications of dialogs like this.\n3.\t CGI is often used to manage extensive databases. Databases store sensitive \ninformation. Discuss security measures you can use to safeguard the databases.\nReference\n1.\t Sol, Selena. Server-side Scripting. http://www.wdvl.com/Authoring/Scripting/WebWare/\nServer/.\nAdditional References\n1.\t The World Wide Web Security FAQ. http://www.w3.org/Security/Faq/wwwsf4.html.\n2.\t Jamsa, Kris. Hacker Proof: The Ultimate Guide to Network Security. 2nd edition. Thomason \nDelmar Learning, 2002.\n3.\t CERT® Advisory CA-2000–02 Malicious HTML Tags Embedded in Client Web Requests. \nhttp://www.cert.org/advisories/CA-2000–02.html.\nAdditional References\b\n143\n" }, { "page_number": 163, "text": "J.M. Kizza, A Guide to Computer Network Security, Computer Communications and \n­Networks, DOI 10.1007/978-1-84800-917-2_7, © Springer-Verlag London Limited 2009\n\b\n145\nChapter 7\nSecurity Assessment, Analysis, and Assurance\n7.1  Introduction\nThe rapid development in both computer and telecommunication technologies has \nresulted in massive interconnectivity and interoperability of systems. 
The world is getting more and more interconnected every day. Most major organization systems are interconnected to other systems through networks. The bigger the networks, the bigger the security problems involving the system resources on these networks. Many companies, businesses, and institutions whose systems work in coordination and collaboration with other systems, sharing each other's resources and communicating with each other, face a constant security threat to these systems, yet the collaboration must go on.

The risks and the potential of someone intruding into these systems for sabotage, vandalism, and resource theft are high. For security assurance of networked systems, such risks must be assessed to determine the adequacy of existing security measures and safeguards and also to determine whether improvement in the existing measures is needed. Such an assessment process consists of a comprehensive and continuous analysis of the security threat risk to the system that involves auditing the system, assessing its vulnerabilities, and maintaining a credible security policy and a vigorous regime for the installation of patches and security updates. In addition, there must also be a standard process to minimize the risks associated with nonstandard security implementations across shared infrastructures and end systems.

The process to achieve all this and more consists of several tasks, including a system security policy, security requirements specification, threat identification, threat analysis, vulnerability assessment, security certification, and the monitoring of vulnerabilities and auditing. The completion of these tasks marks the completion of a security milestone on the road to a system's security assurance. These tasks are shown in Table 7.1 below.

Security is a process. Security assurance is a continuous security state of the security process. The process, illustrated in Table 7.1 and depicted in Fig. 7.1, starts with a thorough system security policy, whose components are used for system requirement specifications. The security requirement specifications are then used to identify threats to the system resources. An analysis of these identified threats per resource is then done. The vulnerabilities identified by the threats are then assessed, and if the security measures taken are good enough, they are then certified, along with the security staff.

After certification, the final component of the security process is the auditing and monitoring phase. This phase may reveal more security problems that require revisiting the security policy, which makes the process repeat itself. That cyclic process is security assurance. The process of security assurance is shown in Fig. 7.1.

Table 7.1  System security process
System Security Policy
Security Requirements Specification
Threat Identification
Threat Analysis
Vulnerability Identification and Assessment
Security Certification
Security Monitoring and Auditing

Fig. 7.1  System security assurance cycle

7.2  System Security Policy

To a system administrator, the security of the organization's system is very important.
For any organization system, there must be somebody to say no when the no \nneeds to be said. The no must be said because the administrator wants to limit the \nnumber of network computers, resources, and capabilities people have been using \nto ensure the security of the system. One way of doing this in fairness to all is \nthrough the implementation of a set of policies, procedures, and guidelines that tell \nall employees and business partners what constitutes acceptable and unacceptable \nuse of the organization’s computer system. The security policy also spells out what \nresources need to be protected and how organization can protect such resources. A \nsecurity policy is a living set of policies and procedures that impact and potentially \nlimit the freedoms and of course levels of individual security responsibilities of all \nusers. Such a structure is essential to an organization’s security. Having said that, \nhowever, let us qualify our last statement. There are as many opinions on the useful­\nness of security policies in the overall system security picture as there are security \nexperts. However, security policies are still important in the security plan of a sys­\ntem. It is important for several reasons including\nFirewall installations: If a functioning firewall is to be configured, its rulebase \n• \nmust be based on a sound security policy.\nUser discipline: All users in the organization who connect to a network such as \n• \nthe Internet, through a firewall, say, must conform to the security policy.\nWithout a strong security policy that every employee must conform to, the organiza­\ntion may suffer from data loss, employee time loss, and productivity loss all because \nemployees may spend time fixing holes, repairing vulnerabilities, and recovering \nlost or compromised data among other things.\nA security policy covers a wide variety of topics and serves several important \npurposes in the system security cycle. Constructing a security policy is like building \na house; it needs a lot of different components that must fit together. The security \npolicy is built in stages and each stage adds value to the overall product, making it \nunique for the organization. 
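To make the connection between policy statements and technical controls such as the firewall rulebase mentioned above more concrete, the following is a minimal sketch, in Python, of how one hypothetical policy rule might be captured in machine-checkable form and used to vet connection attempts. The rule itself, the host names, and the port numbers are invented for illustration and are not taken from any particular organization's policy.

# A minimal sketch: one hypothetical policy statement expressed as data,
# plus a check that a proposed firewall rule (or observed connection)
# conforms to it. Names and numbers are illustrative only.

POLICY = {
    # "Only the public web server may accept inbound Internet traffic,
    #  and only on the standard web ports."
    "inbound_allowed": {
        "web-server-1": {80, 443},
    }
}

def conforms(host: str, port: int) -> bool:
    """Return True if an inbound connection to (host, port) is permitted by POLICY."""
    allowed_ports = POLICY["inbound_allowed"].get(host, set())
    return port in allowed_ports

# Example checks
print(conforms("web-server-1", 443))   # True:  covered by the policy
print(conforms("web-server-1", 22))    # False: SSH from the Internet not allowed
print(conforms("hr-database", 1433))   # False: no inbound Internet access at all

In practice, a policy document drives many such controls, including firewall rules, access control lists, and audit settings, and keeping the written policy and its technical expressions in step is part of building the policy in stages.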
To be successful, a security policy must\nHave the backing of the organization top management.\n• \nInvolve every one in the organization by explicitly stating the role everyone will \n• \nplay and the responsibilities of everyone in the security of the organization.\nPrecisely describe a clear vision of a secure environment stating what needs to \n• \nbe protected and the reasons for it.\nSet priorities and costs of what needs to be protected.\n• \nBe a good teaching tool for everyone in the organization about security and what \n• \nneeds to be protected, why, and how it is to be protected.\nSet boundaries on what constitutes appropriate and inappropriate behavior as far \n• \nas security and privacy of the organization resources are concerned.\nCreate a security clearing house and authority.\n• \nBe flexible enough to adapt to new changes.\n• \nBe consistently implemented throughout the organization.\n• \n" }, { "page_number": 166, "text": "148\b\n7  Security Assessment, Analysis, and Assurance\nTo achieve these sub goals, a carefully chosen set of basic steps must be followed \nto construct a viable implementable, and useful security policy.\nAccording to Jasma, the core five steps are the following [1, 2]:\nDetermine the resources that must be protected, and for each resource, draw \n• \na profile of its characteristics. Such resources should include physical, logical, \nnetwork, and system assets. A table of these items ordered in importance should \nbe developed.\nFor each identified resource, determine from whom you must protect.\n• \nFor each identifiable resource, determine the type of threat and the likelihood of \n• \nsuch a threat. For each threat, identify the security risk and construct an ordered \ntable for these based on importance. Such risks may include\nDenial of service\n• \nDisclosure or modification of information\n• \nUnauthorized access\n• \nFor each identifiable resource, determine what measures will protect it the best \n• \nand from whom.\nDevelop a policy team consisting of at least one member from senior \n• \nadministration, legal staff, employees, member of IT department, and an editor \nor writer to help with drafting the policy.\nDetermine what needs to be audited. Programs such as Tripwire perform audits on \n• \nboth Unix and Windows systems. Audit security events on servers and firewalls and \nalso on selected network hosts. For example, the following logs can be audited:\nLogfiles for all selected network hosts, including servers and firewalls.\n• \nObject accesses\n• \nDefine acceptable use of system resources such as\n• \nEmail\n• \nNews\n• \nWeb\n• \nConsider how to deal with each of the following:\n• \nEncryption\n• \nPassword\n• \nKey creation and distributions\n• \nWireless devices that connect on the organization’s network.\n• \nProvide for remote access to accommodate workers on the road and those working \n• \nfrom home and also business partners who may need to connect through a Virtual \nPrivate Network (VPN).\nFrom all this information, develop two structures, one describing the access \nrights of users to the resources identified and the other structure describing user \nresponsibilities in ensuring security for a given resource. Finally, schedule a time to \nreview these structures regularly.\n" }, { "page_number": 167, "text": "7.3  Building a Security Policy\b\n149\n7.3  Building a Security Policy\nSeveral issues, including the security policy access matrix, need to be constructed \nfirst before others can fit in place. 
So let us start with that.

7.3.1  Security Policy Access Rights Matrix

The first consideration in building a security policy is to construct a security policy access rights matrix M = {S, R}, where S = {set of all user groups; some groups may have only one element} and R = {set of system resources}. For example, R = {network hosts, switches, routers, firewalls, access servers, databases, files, e-mail, Web site, remote access point, etc.} and S = {{administrators}, {support technicians}, {Human Resources users}, {Marketing users}, etc.}.

For each element rj of R, develop a set of policies Pj. For example, create policies for the following members of R:

• E-mail and Web access (SNMP, DNS, NTP, WWW, NNTP, SMTP)
• Hardware access (logon passwords/usernames)
• Databases (file access/data backup)
• Wireless devices (access point logon/authentication/access control)
• Laptop use and connection to the organization's network
• Remote access (Telnet, FTP)

For each element si of S, develop a set of responsibilities Ni. For example, create responsibilities for the following members of S:

• Who distributes system resource access rights/remote access/wireless access?
• Who creates accounts/remote access accounts?
• Who talks to the press?
• Who calls law enforcement?
• Who informs management of incidents and at what level?
• Who releases what data?
• Who follows up on a detected security incident?

Once all access rights and responsibilities have been assigned, the matrix M is fully filled and the policy is slowly taking shape. Up to this point, an entry [si, rj] in M means that a user from group si can exercise the access rights in group rj for resource j. See Fig. 7.2.

Fig. 7.2  Security policy access rights matrix M

User	Resource R1	Resource R2	Resource R3
S1	[s1,r1]	[s1,r2]	[s1,r3]
S2	[s2,r1]	[s2,r2]	[s2,r3]

A structure L = {S, R}, similar to M, can also be constructed for responsibilities. After constructing these two structures, the security policy is taking shape, but it is far from done. Several other security issues need to be taken care of, including those described in the following sections [3]:

7.3.1.1  Logical Access Restriction to the System Resources

Logical access restriction to system resources involves the following:

• Restricting access to equipment and network segments using
  • Preventive controls that uniquely identify every authorized user (via established access control mechanisms such as passwords) and deny others
  • Detective controls that log and report the activities of users, both authorized and intruders. This may employ intrusion detection systems, system scans, and other activity loggers. The logs are reported to a responsible party for action.
• Creating boundaries between network segments
  • To control the flow of traffic between different cabled segments, such as subnets, using IP address filters to deny access to specific subnets from the IP addresses of nontrusted hosts
  • To permit or deny access based on subnet addresses, if possible
• Selecting a suitable routing policy to determine how traffic is controlled between subnets

7.3.1.2  Physical Security of Resources and Site Environment

Establish physical security of all system resources by

• Safeguarding the physical infrastructure, including the media and path of the physical cabling.
Make sure that intruders cannot eavesdrop between lines by using \ndetectors such as time domain reflectometer for coaxial cable and optical splitter \nusing an optical time domain reflectometer for fiber optics.\nSafeguarding site environment. Make sure it is as safe as you can make it from \n• \nsecurity threats due to\nFire (prevention/protection/detection)\n• \nWater\n• \nElectric power surges\n• \nTemperature/humidity\n• \nNatural disasters\n• \nMagnetic fields\n• \n" }, { "page_number": 169, "text": "7.3  Building a Security Policy\b\n151\n7.3.1.3  Cryptographic Restrictions\nWe have defined a computer and also a network system as consisting of hardware, \nsoftware, and users. The security of an organization’s network, therefore, does not \nstop only at securing software such as the application software like browsers on the \nnetwork hosts. It also includes data in storage in the network, that is, at network \nservers, and also data in motion within the network.\nEnsuring this kind of software and data requires strong cryptographic techniques. \nSo an organization’s security policy must include a consideration of cryptographic \nalgorithms to ensure data integrity. The best way to ensure as best as possible that \ntraffic on the network is valid is through the following:\nSupported services, such as firewalls, relying on the TCP, UDP, ICMP, and IP \n• \nheaders, and TCP and UDP source and destination port numbers of individual \npackets to allow or deny the packet.\nAuthentication of all data traversing the network, including traffic specific to the \n• \noperations of a secure network infrastructure such as updating of routing tables\nChecksum to protect against the injection of spurious packets from an intruder, and \n• \nin combination with sequence number techniques, protects against replay attacks.\nSoftware not related to work will not be used on any computer that is part of the \n• \nnetwork.\nAll software images and operating systems should use a checksum verification \n• \nscheme before installation to confirm their integrity between sending and \nreceiving devices.\nEncryption of routing tables and all data that pose the greatest risk based on the \n• \noutcome of the risk assessment procedure in which data is classified according to \nits security sensitivity. For example, in an enterprise, consider the following:\nAll data dealing with employee salary and benefits.\n• \nAll data on product development\n• \nAll data on sales, etc.\n• \nAlso pay attention to the local Network Address Translation (NAT) – a system \n• \nused to help Network administrators with large pools of hosts from renumbering \nthem when they all come on the Internet.\nEncrypt the backups making sure that they will be decrypted when needed.\n• \n7.3.2  Policy and Procedures\nNo security policy is complete without a section on policy and procedures. In fact, \nseveral issues are covered under policy and procedures. 
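Before turning to those items, one control from the cryptographic restrictions above, verifying a checksum on a software image before installation, can be illustrated with a minimal sketch. The example below assumes SHA-256 and a vendor-published reference digest; the file name and the digest value are placeholders, not real values.

# Minimal sketch of image verification before installation (SHA-256).
# The file path and the expected digest are placeholders for illustration.
import hashlib

EXPECTED_SHA256 = "0123abcd..."  # digest published by the vendor (placeholder)

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

digest = sha256_of("router-image-1.2.3.bin")
if digest != EXPECTED_SHA256:
    raise SystemExit("Image digest mismatch: do not install this image.")
print("Image digest verified; safe to proceed with installation.")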
Among the items in this section are a list of common attacks for technicians to be aware of, education of users, equipment use, equipment acquisition, software standards and acquisition, and incident handling and reporting.

7.3.2.1  Common Attacks and Possible Deterrents

Some of the most common deterrents to common attacks include the following:

• Developing a policy to insulate internal hosts (hosts behind a firewall) from a list of common attacks.
• Developing a policy to safeguard Web servers, FTP servers, and e-mail servers, which are most at risk because, even though they are behind a firewall, any host, including those inside the network, can misuse them. You are generally better off placing these exposed service providers on a demilitarized zone (DMZ) network.
• Installing a honeypot.

The following list provides an example of some items in an infrastructure and data integrity security policy.

7.3.2.2  Staff

• Recruit employees for positions in the implementation and operation of the network infrastructure who are capable and whose backgrounds have been checked.
• Require all personnel involved in implementing and supporting the network infrastructure to attend a security seminar for awareness.
• Instruct all employees concerned to store all backups in a dedicated locked area.

7.3.2.3  Equipment Certification

To be sure that quality equipment is used, make every effort to ensure that

• All new equipment to be added to the infrastructure adheres to specified security requirements.
• Each site of the infrastructure decides which security features and functionalities are necessary to support the security policy.

The following are good guidelines:

• All infrastructure equipment must pass the acquisition certification process before purchase.
• All new images and configurations must be modeled in a test facility before deployment.
• All major scheduled network outages and interruptions of service must be announced well ahead of time to those who will be affected.
• Use of portable tools: since the use of portable tools such as laptops always poses some security risks, develop guidelines for the kinds of data allowed to reside on the hard drives of portable tools and for how that data should be protected.

7.3.2.4  Audit Trails and Legal Evidence

Prepare for possible legal action by

• Keeping logs of traffic patterns and noting any deviations from normal behavior found. Such deviations are the first clues to security problems.
• Keeping the collected data local to the resource until an event is finished, after which it may be taken, by established means involving encryption, to a secure location.
• Securing audit data on location and in backups.

7.3.2.5  Privacy Concerns

There are two areas of concern with audit trail logs:

• The privacy of the data collected on users.
• Knowledge of intrusive behavior by others, including employees of the organization.

7.3.2.6  Security Awareness Training

The strength of a security policy lies in its emphasis on both employee and user training. The policy must stress that

• Users of computers and computer networks must be made aware of the security ramifications caused by certain actions.
The training should be provided to all \npersonnel.\nTraining should be focused and involve all types of security that are needed \n• \nin the organization, the internal control techniques that will meet the security \nrequirements of the organization, and how to maintain the security attained.\nEmployees with network security responsibilities must be taught security techniques \n• \nprobably beyond those of the general public, methodologies for evaluating threats \nand vulnerabilities to be able to use them to defend the organization’s security, the \nability to select and implement security controls, and a thorough understanding of \nthe importance of what is at risk if security is not maintained.\nBefore connecting to a LAN to the organization’s backbone, provide those \n• \nresponsible for the organization’s security with documentation on network \ninfrastructure layout, rules, and guidelines on controlled software downloads. \nPay attention to the training given to those who will be in charge of issuing \npasswords.\nSocial engineering.\n• \nTrain employees not to believe anyone who calls/e-mails them to do something \n• \nthat might compromise security.\nBefore giving any information, employees must positively identify who they \n• \nare dealing with.\n" }, { "page_number": 172, "text": "154\b\n7  Security Assessment, Analysis, and Assurance\n7.3.2.7  Incident Handling\nThe security of an organization’s network depends on what the security plan says \nshould be done to handle a security incident. If the response is fast and effective, the \nlosses may be none to minimum. However, if the response is bungled and slow, the \nlosses may be heavy. Make sure that the security plan is clear and effective.\nBuild an incident response team as a centralized core group, whose members \n• \nare drawn from across the organization, who must be knowledgeable, and well \nrounded with a correct mix of technical, communication, and political skills. \nThe team should be the main contact point in case of a security incident and \nresponsible for keeping up-to-date with the latest threats and incidents, notifying \nothers of the incident, assessing the damage and impact of the incident, finding \nout how to minimize the loss, avoid further exploitation of the same vulnerability, \nand making plans and efforts to recover from the incident.\nDetect incidents by looking for signs of a security breach in the usual suspects and \n• \nbeyond. Look for abnormal signs from accounting reports, focus on signs of data \nmodification and deletion, check out complaints of poor system performance, \npay attention to strange traffic patterns, and unusual times of system use, and \npick interest in large numbers of failed login attempts.\nAssess the damage by checking and analyzing all traffic logs for abnormal \n• \nbehavior, especially on network perimeter access points such as internet access \nor dial-in access. Pay particular attention when verifying infrastructure device \nchecksum or operating system checksum on critical servers to see whether \noperating system software has been compromised or if configuration changes \nin infrastructure devices such as servers have occurred to ensure that no one has \ntampered with them. 
Make sure to check the sensitive data to see whether it has \nbeen assessed or changed and traffic logs for unusually large traffic streams from \na single source or streams going to a single destination, passwords on critical \nsystems to ensure that they have not been modified, and any new or unknown \ndevices on the network for abnormal activities.\nReport and Alert\n• \nEstablish a systematic approach for reporting incidents and subsequently \n• \nnotifying affected areas.\nEssential communication mechanisms include a monitored central phone, \n• \ne-mail, pager, or other quick communication devices.\nEstablish clearly whom to alert first and who should be on the list of people \n• \nto alert next.\nDecide on how much information to give each member on the list.\n• \nFind ways to minimize negative exposure, especially where it requires \n• \nworking with agents to protect evidence.\nRespond to the incident to try to restore the system back to its pre-incident status. \n• \nSometimes it may require shutting down the system; if this is necessary, then do \nso but keep accurate documentation and a log book of all activities during the \nincident so that that data can be used later to analyze any causes and effects.\n" }, { "page_number": 173, "text": "7.4  Security Requirements Specification\b\n155\nRecover from an incident\n• \nMake a post-mortem analysis of what happened, how it happened, and what \n• \nsteps need to be taken to prevent similar incidents in the future.\nDevelop a formal report with proper chronological sequence of events to be \n• \npresented to management.\nMake sure not to overreact by turning your system into a fortress.\n• \n7.4  Security Requirements Specification\nSecurity requirements specification derives directly from the security policy docu­\nment. The specifications are details of the security characteristics of every indi­\nvidual and system resource involved. For details on individual users and system \nresources, see the security access matrix. These security requirements are estab­\nlished after a process of careful study of the proposed system that starts with a \nbrainstorming session to establish and maintain a skeleton basis of a basket of core \nsecurity requirements by all users. The brainstorming session is then followed by \nestablishing a common understanding and agreement on the core requirements for \nall involved. For each requirement in the core, we then determine what we need and \nhow far to go to acquire and maintain it, and finally for each core requirement, we \nestimate the time and cost for its implementation.\nFrom the security policy access right matrix, two main entries in the matrix, the user \nand the resources, determine the security requirements specifications as follows [4]:\nFor the user: Include user name, location, and phone number of the responsible \n• \nsystem owner, and data/application owner. Also determine the range of security \nclearance levels, the set of formal access approvals, and the need-to-know of \nusers of the system.\nPersonnel security levels: Set the range of security clearance levels, the set of \n• \nformal access approvals, and the need-to-know of users of the system\nFor the resources: Include the resource type, document any special physical \n• \nprotection requirements that are unique to the system, and brief description of a \nsecure operating system environment in use. 
If the resource is data, then include \nthe following also:\nclassification level: top secret, secret, and confidential; and categories of data: \n• \nrestricted and formally restricted\nany special access programs for the data\n• \nany special formal access approval necessary for access to the data\n• \nany special handling instructions\n• \nany need-to-know restrictions on users\n• \nany sensitive classification or lack of.\n• \nAfter the generation of the security requirements for each user and system \nresource in the security policy access matrix, a new security requirements matrix, \nTable 7.2, is drawn.\n" }, { "page_number": 174, "text": "156\b\n7  Security Assessment, Analysis, and Assurance\n7.5  Threat Identification\nTo understand system threats and deal with them, we first need to be able to identify \nthem. Threat identification is a process that defines and points out the source of the \nthreat and categorizes it as either a person or an event. For each system component \nwhose security requirements have been identified, as shown in Fig. 4.4, also iden­\ntify the security threats to it. The security threats to any system component can be \ndeliberate or nondeliberate. A threat is deliberate if the act is done with the intention \nto breach the security of an object. There are many types of threats under this cate­\ngory, as we saw in Chapter 3. Nondeliberate threats, however, are acts and situations \nthat, although they have the potential to cause harm to an object, were not intended. \nAs we saw in Chapter 3, the sources of threats are many and varied including human \nfactors, natural disasters, and infrastructure failures.\n7.5.1  Human Factors\nHuman factors are those acts that result from human perception and physical capa­\nbilities and may contribute increased risks to the system. Among such factors are \nthe following [5]:\nCommunication – Communication between system users and personnel may \n• \npresent risk challenges based on understanding of policies and user guidelines, \nterminology used by the system, and interpersonal communication skills, and \nlanguages.\nHuman-machine interface – Many users may find a challenge in some of the \n• \nsystem interfaces. How the individual using such interfaces handles and uses \nthem may cause a security threat to the system. The problem is more so when \nthere is a degree of automation in the interface.\nData design, analysis, and interpretation – Whenever there is more than one \n• \nperson, there is always a chance of misinterpretation of data and designs. So if \nthere is any system data that needs to be analyzed and interpreted, there is always \na likelihood of someone misinterpreting it or using a wrong design.\nTable 7.2  Listing of System Security Requirements\nSystem Components (Resources and content)\t\nSecurity requirements\nNetwork client\t\n-sign-on and authentication of user\n\t\n-secure directory for user ID and passwords\n\t\n-secure client software\n\t\n-secure session manager to manage the session\nNetwork server\t\n-secure software to access the server\n\t\n-secure client software to access the server\nContent/data\t\n-data authentication\n\t\n-secure data on server\n\t\n-secure data on client\n" }, { "page_number": 175, "text": "7.5  Threat Identification\b\n157\nNew tools and technologies – Whenever a system has tools and technologies \n• \nthat are new to users, there is always a risk in the use of those tools and \ntechnologies. 
Also long term exposure to such tools may cause significant \nneuro-musculo-skeletal adaptation with significant consequences on their use.\nWorkload and user capacity – Users in many systems become victims of the \n• \nworkload and job capacity; this may, if not adjusted, cause risk to systems. \nAttributes of the task such as event rate, noise, uncertainty, criticality, and \ncomplexity that affect human mental and physical behavior may have an effect \non the effort required for users to complete their assigned tasks.\nWork environment – As many workers know, the work environment greatly \n• \naffects the human mental and physical capacity in areas of perception, judgment, \nand endurance. The factors that affect the working environment include things \nsuch as lighting, noise, workstations, and spatial configuration.\nTraining – Training of system personnel and also users creates a safer user \n• \nenvironment than that of systems with untrained users and personnel. Trained \nusers will know when and how certain equipment and technologies can be used \nsafely.\nPerformance – A safe system is a system where the users and personnel get \n• \nmaximum performance from the system and from the personnel. Efficient and \nsuccessful completion of all critical tasks on the system hinges on the system \npersonnel and users maintaining required physical, perceptual, and social \ncapabilities.\n7.5.2  Natural Disasters\nThere is a long list of natural acts that are sources of security threats. These include \nearthquakes, fires, floods, hurricanes, tornados, lightning and many others. Although \nnatural disasters cannot be anticipated, we can plan for them. There are several ways \nto plan for the natural disaster threats. These include creating up-to-date backups \nstored at different locations that can be quickly retrieved and set up and having a \ncomprehensive recovery plan. Recovery plans should be implemented rapidly.\n7.5.3  Infrastructure Failures\nSystem infrastructures are composed of hardware, software, and humanware. Any \nof these may fail the system anytime without warning.\n7.5.3.1  Hardware Failures\nThe long time computers have been in use has resulted in more reliable products \nthan ever before. But still, hardware failures are common due to wear and tear and \n" }, { "page_number": 176, "text": "158\b\n7  Security Assessment, Analysis, and Assurance\nage. The operating environment also contributes greatly to hardware failures. For \nexample, a hostile environment due to high temperatures and moisture and dust \nalways results in hardware failures. There are several approaches to overcome hard­\nware threats, including redundancy, where there is always a standby similar ­system \nto kick in whenever there is an unplanned stoppage of the functioning system. \nAnother way of overcoming hardware failure threats is to have a monitoring system \nwhere two or more hardware units constantly monitor each other and report to oth­\ners whenever one of them fails. In addition, advances in hardware technology have \nled to the development of self-healing hardware units whenever a system detects \nits component performance, and if one component shows signs of failure, the unit \nquickly disables the component and re-routes or re-assigns the functions of the fail­\ning component and also reports the failing component to all others in the unit.\n7.5.3.2  Software Failures\nProbably the greatest security threat, when everything is considered, is from soft­\nware. 
The history of computing is littered with examples of costly catastrophes \nof failed software projects and potential software failures and errors such as the \n­millennium scare. Failure or poor performance of a software product can be attrib­\nuted to a variety of causes, most notably human error, the nature of software itself, \nand the environment in which software is produced and used.\nBoth software professionals and nonprofessionals who use software know the \ndifferences between software programming and hardware engineering. It is in these \ndifferences that lie many of the causes of software failure and poor performance. \nConsider the following [6]:\nComplexity\n• \n: Unlike hardwired programming in which it is easy to exhaust the \npossible outcomes on a given set of input sequences, in software programming \na similar program may present billions of possible outcomes on the same input \nsequence. Therefore, in software programming, one can never be sure of all the \npossibilities on any given input sequence.\nDifficult testing\n• \n: There will never be a complete set of test programs to check \nsoftware exhaustively for all bugs for a given input sequence.\nEase of programming\n• \n: The fact that software programming is easy to learn \nencourages many people with little formal training and education in the field \nto start developing programs, but many are not knowledgeable about good \nprogramming practices or able to check for errors.\nMisunderstanding of basic design specifications\n• \n: This affects the subsequent \ndesign phases including coding, documenting, and testing. It also results in \nimproper and ambiguous specifications of major components of the software \nand in ill-chosen and poorly defined internal program structures.\nSoftware evolution\n• \n: It is almost an accepted practice in software development \nthat software products that grow out from one version or release to another \nare made by just additions of new features without an overhaul of the original \n" }, { "page_number": 177, "text": "7.6  Threat Analysis\b\n159\nversion for errors and bugs. This is always a problem because there are many \nincompatibilities that can cause problems, including different programmers \nwith different design styles from those of the original programmers, different \nsoftware modules, usually newer, that may have differing interfaces, and different \nexpectations and applications that may far exceed the capabilities of the original \nversion. All these have led to major flaws in software that can be exploited and \nhave been exploited by hackers.\nChanging management styles\n• \n: Quite often organizations change management \nand the new management comes in with a different focus and different agenda \nthat may require changes that may affect the software used by the organization in \norder to accommodate the new changes. Because of time and cost considerations, \nmany times the changes are made in-house. Introducing such changes into \nexisting software may introduce new flaws and bugs or may re-activate existing \nbut dormant errors.\n7.5.3.3  Humanware Failures\nThe human component in the computer systems is considerable and plays a vital \nrole in the security of the system. 
While inputs to and sometimes outputs from \nhardware components can be predicted, and also in many cases software bugs once \nfound can be fixed and the problem forgiven, the human component in a computer \nsystem is so unpredictable and so unreliable that the inputs to the system from \nthe human component may never be trusted, a major source of system threat. The \nhuman link in the computing system has been known to be a source of many mali­\ncious acts that directly affect the security of the system. Such malicious acts include \nhacking into systems and creating software that threaten the security of systems. In \nlater chapters, we will talk more about these activities.\n7.6  Threat Analysis\nA working computer system with numerous resources is always a target of many \nsecurity threats. A threat is the combination of an asset such as a system resource, a \nvulnerability, or an exploit that can be used by a hacker to gain access to the system. \nAlthough every system resource has value, there are those with more intrinsic value \nthan others. Such resources, given a system vulnerability that can let in an intruder, \nattract system intruders more than their counterparts with limited intrinsic value. \nSecurity threat analysis is a technique used to identify these resources and to focus \non them. In general, system security threat analysis is a process that involves ongo­\ning testing and evaluation of the security of a system’s resources to continuously \nand critically evaluate their security from the perspective of a malicious intruder \nand then use the information from these evaluations to increase the overall system’s \nsecurity.\n" }, { "page_number": 178, "text": "160\b\n7  Security Assessment, Analysis, and Assurance\nThe process of security threat analysis involves the following:\nDetermining those resources with higher intrinsic value, prioritizing them, and \n• \nfocusing on that list as defense mechanisms are being considered\nDocumenting why the chosen resources need to be protected in the hierarchy \n• \nthey are put in\nDetermining who causes what threat to whose resources\n• \nIdentifying known and plausible vulnerabilities for each identified resource in \n• \nthe system. Known vulnerabilities, of course, are much easier to deal with than \nvulnerabilities that are purely speculative\nIdentifying necessary security services/mechanisms to counter the vulnerability\n• \nIncreasing the overall system security by focusing on identified resources\n• \n7.6.1  Approaches to Security Threat Analysis\nThere are several approaches to security threat analysis, but we will consider two \nof them here: the simple threat analysis by calculating annualized loss expectancies \n(ALEs) and attack trees.\n7.6.1.1  Threat Analysis by Annualized Loss Expectancies\nBefore we define annualized loss expectancies, let us define the terms from which \nALE is derived. For a resource identified as having a high threat risk, the cost of \nreplacing or restoring that resource if it is attacked is its single loss expectancy cost. \nThe security threat is a resource’s vulnerability. 
So if the vulnerability is likely to occur a certain number of times (based on past occurrences), then the vulnerability's expected annual rate of occurrence (EAO) can be computed. Multiplying these two terms then gives the vulnerability's annualized loss expectancy [7]:

Annualized loss expectancy (ALE for a resource) = single loss expectancy (cost) × (expected) annual rate of occurrence.

For example, if restoring a resource after a successful attack would cost $20,000 and such an attack is expected about once every two years (a rate of 0.5 per year), the resource's ALE is $20,000 × 0.5 = $10,000 per year.

The reader is referred to a good example in Mick Bauer's article "Paranoid Penguin: Practical Threat Analysis and Risk Management," Linux Journal, Issue 93, March 2003.

7.6.1.2  Schneier's Attack Tree Method

Schneier approaches risk analysis using a tree model he calls an attack tree. An attack tree is a visual representation of possible attacks against a given target. The root of the tree represents the goal of the attack. The internal nodes between the leaves and the root represent the necessary subgoals an attacker must achieve in order to reach the goal, that is, the root.

The attack tree then grows as the subgoals necessary to reach the root node are added, depending on the attack goal. This step is repeated as necessary to achieve the level of detail and complexity with which you wish to examine the attack. If the attacker must pass through several subgoals in order to reach the goal, then the path in the tree from the leaves to the root is long and probably more complex.

Each leaf and its corresponding subgoals are quantified with a cost estimate that may represent the cost of achieving that leaf's goal via the subgoals. The cheapest path in the tree from a leaf to the root determines the most likely attack path and probably the riskiest one.

7.7  Vulnerability Identification and Assessment

A security vulnerability is a weakness in the system that may create a condition leading to a security threat. The condition may be the absence of, or inadequate, security procedures and physical and security controls in the system. Although vulnerabilities are difficult to predict, no system is secure unless its vulnerabilities are known. Therefore, in order to protect a computer system, we need to be able to identify the vulnerabilities in the system and assess the dangers faced as a result of these vulnerabilities. No system can face any security threat unless it has a vulnerability from which a security incident may originate. However, it is extremely difficult to identify all system vulnerabilities before a security incident occurs; in fact, many system vulnerabilities become known only after a security incident has occurred. Once one vulnerability has been identified, though, it is common to find it in many other components of the system. The search for system vulnerabilities should focus on system hardware, software, and also humanware, as we have seen so far. In addition, system vulnerabilities also exist in system security policies and procedures.

7.7.1  Hardware

Although hardware may not be the main culprit in sourcing system vulnerabilities, it has a number of them originating mainly from design flaws, embedded programs, and the assembly of systems. Modern computer and telecommunication systems carry an impressive number of microprograms embedded in them.
These \nprograms control many functions in the hardware component.\nHowever, hardware vulnerabilities are very difficult to identify and even after \nthey are identified, they are very difficult to fix for a number of reasons. One rea­\nson is cost; it may be very expensive to fix imbedded micro programs in a hard­\nware component. Second, even if a vulnerability is inexpensive and easy to fix, the \nexpertise to fix it may not be there. Third, it may be easy to fix but the component \nrequired to fix it may not be compatible and interoperable with the bigger hardware. \nFourth, even if it is cheap, easy to fix, and compatible enough, it may not be of \npriority because of the effort it takes to fix.\n" }, { "page_number": 180, "text": "162\b\n7  Security Assessment, Analysis, and Assurance\n7.7.2  Software\nVulnerabilities in software can be found in a variety of areas in the system. In par­\nticular, vulnerabilities can be found in system software, application software, and \ncontrol software.\n7.7.2.1  System Software\nSystem software includes most of the software used by the system to function. \nAmong such software is the operating system that is at the core of the running of the \ncomputer system. In fact the vulnerabilities found in operating systems are the most \nserious vulnerabilities in computer systems. Most of the major operating systems \nhave suffered from vulnerabilities, and intruders always target operating systems \nas they search for vulnerabilities. This is due to the complexity of the software \nused to develop operating systems and also the growing multitude of functions the \noperating system must perform. As we will discuss later, since the operating system \ncontrols all the major functions of the system, access to the system through the oper­\nating system gives the intruders unlimited access to the system. The more popular \nan operating system gets, the greater the number of attacks directed to it. All the \nrecent operating systems such as Unix, Linux, Mac OS, Windows, and especially \nWindows NT have been major targets for intruders to exploit an ever growing list \nof vulnerabilities that are found daily.\n7.7.2.2  Application Software\nProbably, the largest number of vulnerabilities is thought to be sourced from appli­\ncation software. There are several reasons for this. First, application software can \nbe and indeed has been written by anybody with a minimum understanding of \nprogramming etiquettes. In fact, most of the application software on the market is \nwritten by people without formal training in software development. Second, most \nof the application software is never fully tested before it is uploaded on the mar­\nket, making it a potential security threat. Finally, because software produced by \nindependent producers is usually small and targeted, many system managers and \nsecurity chiefs do not pay enough attention to the dangers produced by this type \nof software in terms of interface compatibility and interoperability. By ignoring \nsuch potential sources of system vulnerabilities, the system managers are exposing \ntheir systems to dangers of this software. Also security technologies are develop­\ning a lot faster than the rate at which independent software producers can include \nthem in their software. In addition, since software is usually used for several years \nduring that period, new developments in API and security tools tend to make the \nsoftware more of a security threat. 
And as more re-usable software becomes com­\nmonly used, more flaws in the libraries of such code are propagated into more user \n" }, { "page_number": 181, "text": "7.7  Vulnerability Identification and Assessment\b\n163\ncode. Unfortunately more and more software producers are outsourcing modules \nfrom independent sources, which adds to the flaws in software because the testing \nof these outsourced modules is not uniform.\n7.7.2.3  Control Software\nAmong the control software are system and communication protocols and device \ndrivers. Communication control protocols are at the core of digital and analog \ndevices. Any weaknesses in these protocols expose the data in the communication \nchannels of the network infrastructure. In fact, the open architecture policies of the \nmajor communication protocol models have been a major source of vulnerabilities \nin computer communication. Most of the recent attacks on the Internet and other \ncommunication networks have been a result of the vulnerabilities in these com­\nmunication protocols. Once identified, these vulnerabilities have proven difficult \nto fix for a number of reasons. First, it has been expensive in some quarters to fix \nthese vulnerabilities because of lack of expertise. Second, although patches have \non many occasions been issued immediately after a vulnerability has been identi­\nfied, in most cases, the patching of the vulnerability has not been at the rate the \nvulnerabilities have been discovered, leading to a situation where most of the cur­\nrent network attacks are using the same vulnerabilities that have been discovered, \nsometimes years back and patches issued. Third, because of the open nature of the \ncommunication protocols, and as new functional modules are added onto the exist­\ning infrastructure, the interoperability has been far from desirable.\n7.7.3  Humanware\nIn Section 4.5.1, we discussed the human role in the security of computer systems. \nWe want to add to that list the role social engineering plays in system security. \nSocial engineering, as we saw in Chapter 3, is the ability of one to achieve one’s \nstated goal, legally or otherwise, through the use of persuasion or misrepresenta­\ntion. Because there are many ways of doing this, it is extremely difficult to prepare \npeople not to fall for sweet-talkers and masqueraders. Among the many approaches \nto social engineering are techniques that play on people’s vulnerability to sympathy, \nempathy, admiration, and intimidation. Hackers and intruders using social engineer­\ning exploit people’s politeness and willingness to help.\n7.7.4  Policies, Procedures, and Practices\nThe starting point for organization security is a sound security policy and a set of \nsecurity procedures. Policies are written descriptions of the security precautions \nthat everyone using the system must follow. They have been called the building \n" }, { "page_number": 182, "text": "164\b\n7  Security Assessment, Analysis, and Assurance\nblocks of an organization’s security. Procedures on the other hand are definitions \nspelling out how to implement the policies for a specific system or technology. \nFinally, practices are day-to-day operations to implement the procedures. Practices \nare implemented based on the environment, resources, and capabilities available at \nthe site.\nMany organizations do not have written policies or procedures or anything that \nis directly related to information security. 
In addition to security policies and pro­\ncedures, security concerns can also be found in personnel policies; physical secu­\nrity procedures; for example, the protocols for accessing buildings and intellectual \nproperty statements.\nThe effectiveness of an organization’s security policies and procedures must be \nmeasured against those in the same industry. Security policies and procedures are \nuseless if they are applied to an industry where they are ineffective. When compared \nto a similar industry, weaknesses should be noted in quality, conformity, and com­\nprehensiveness.\n7.7.4.1  Quality\nA policy or procedure has quality if it addresses all industry issues it is supposed to \naddress. In addition to addressing all issues, policies and procedures are also tested \non their applicability, that is, they are being specific enough in order to be effective. \nThey are judged effective if they address all issues and protect system information\n7.7.4.2  Conformity\nConformity is a measure of the level of compliance based on the security policies \nand procedures. The measure includes how the policies or procedures are being \ninterpreted, implemented, and followed. If the level is not good then a security \nthreat exists in the system. Besides measuring the level of compliancy, conformity \nalso measures the effectiveness of the policies and procedures in all areas of the \norganization. A policy presents a security threat if it is not fully implemented or not \nimplemented at all or not observed in certain parts of the organization.\n7.7.4.3  Comprehensiveness\nIf the organization’s security is required in a variety of forms such as physical and \nelectronic, then the organization’s security policy and procedures must effectively \naddress all of them. In addition, all phases of security must be addressed includ­\ning inspection, protection, detection, reaction, and reflection. If one phase is not \neffectively addressed or not addressed at all, then a security threat may exist in \nthe system. Comprehensiveness also means that the policies and procedures must \nbe widely and equitably applied to all parts of the system. And the policies and \n" }, { "page_number": 183, "text": "7.8  Security Certification\b\n165\nprocedures must address all known sources of threats which may include physical, \nnatural, or human.\n7.8  Security Certification\nCertification is the technical evaluation of the effectiveness of a system or an indi­\nvidual for security features. The defenses of a system are not dependent solely on \nsecure technology in use, but they also depend on the effectiveness of staffing and \ntraining. A well trained and proficient human component makes a good complement \nto the security of the system and the system as a whole can withstand and react to \nintrusion and malicious code. Certification of a system or an individual attempts to \nachieve the following objectives, that the system [5]\nEmploys a set of structured verification techniques and verification procedures \n• \nduring the system life cycle\nDemonstrates that the security controls of the system are implemented correctly \n• \nand effectively\nIdentifies risks to confidentiality, integrity, and availability of information and \n• \nresources\n7.8.1  Phases of a Certification Process\nFor the certification process to run smoothly, the following phases must be under­\ntaken [5]:\nDeveloping a security plan to provide an overview of the system security \n• \nrequirements. 
The plan, as we have seen above, describes existing or planned \nsecurity requirements and ways to meet them. In addition, the plan delineates \nresponsibilities and expected behavior of individuals who access the system. The \nplan should be updated as frequently as possible.\nTesting and evaluation must be done, and it includes the verification and \n• \nverification procedures to demonstrate that the implementation of the network \nmeets the security requirements specified in the security plan.\nRisk assessment to determine threats and vulnerabilities in the system, propose \n• \nand evaluate the effectiveness of various security controls, calculate trade-offs \nassociated with the security controls, and determine the residual risk associated \nwith a candidate set of security controls.\nCertification to evaluate and verify that the system has been implemented as \n• \ndescribed in the security policy and that the specified security controls are in place \nand operating properly. This provides an overview of the security status of the \nsystem and brings together all of the information necessary for the organization \nto make an informed and risk-conscious decision.\n" }, { "page_number": 184, "text": "166\b\n7  Security Assessment, Analysis, and Assurance\n7.8.2  Benefits of Security Certification\nIn security, certification is important and has several benefits including\nConsistency and comparability\n• \nAvailability of complete and reliable technical information leading to \n• \nbetter understanding of complex systems and associated security risks and \nvulnerabilities\n7.9  Security Monitoring and Auditing\nSecurity monitoring is an essential step in security assurance for a system. To set \nup continuous security monitoring, controls are put in place to monitor whether a \nsecure system environment is maintained. The security personnel and sometimes \nmanagement then use these controls to determine whether any more steps need to \nbe taken to secure the systems. The focus of the monitoring controls may depend \non the system manager and what is deemed important for the system security, but \nin general control focuses on violation and exception reports that help the security \npersonnel to determine quickly the status of security in the system and what needs \nto be done if the system is being or has been compromised.\nAlthough monitoring decisions are made by the security administrator, what \nshould be monitored and the amount of information logged are usually determined \nby either management or the security administrator. Also what should be included \nin the report and the details to be included to convey the best overall understanding \nof the security status of the system must be decided by the security administrator. \nIt is not good and in fact it is resource wasting to log too much information without \nbeing able to analyze it properly. Let us now focus on tools used to monitor, type of \ndata gathered, and information analyzed from the data.\n7.9.1  Monitoring Tools\nThere are several tools that can be used to monitor the performance of a system. The \nmonitoring tool, once selected and installed, should be able to gather vital information \non system statistics, analyze it, and display it graphically or otherwise. In more mod­\nern systems, especially in intrusion detection tools, the monitor can also be config­\nured to alert system administrators when certain events occur. 
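As a simple illustration of such event-triggered alerting, the sketch below counts failed login attempts per source address in an authentication log and raises an alert when a threshold is crossed. The log location, the line format, and the threshold value are assumptions made for the example rather than recommendations.

# A minimal sketch of event-based alerting: count failed logins per source
# address in an auth log and alert when a threshold is crossed.
# The log path, line format, and threshold are assumed for illustration.
from collections import Counter
import re

LOG_FILE = "/var/log/auth.log"           # assumed location
THRESHOLD = 5                            # alert after 5 failures from one source
PATTERN = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

failures = Counter()
with open(LOG_FILE, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = PATTERN.search(line)
        if match:
            failures[match.group(1)] += 1

for source, count in failures.items():
    if count >= THRESHOLD:
        # A production monitor would e-mail or page the administrator here.
        print(f"ALERT: {count} failed logins from {source}")

A real monitor would, of course, run continuously and deliver its alerts by e-mail, pager, or another of the notification mechanisms discussed below.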
Most modern operating \nsystems such as Microsoft Windows, Unix, Linux, Mac OS, and others have built-in \nperformance monitors. In addition, there is a long list of independent security and sys­\ntem performance monitors that monitor, among other things, real-time performance \nmonitoring and warehousing of Event Logs, real-time or delayed alerts to manage­\nment, and customized performance reports that may include the history of the event \nand customized formats to allow quick legal proceedings and forensics analysis.\n" }, { "page_number": 185, "text": "7.9  Security Monitoring and Auditing\b\n167\nA variety of system monitoring tools are available, the majority of which fall into \none of the following categories:\nSystem performance: This category includes most operating system performance \n• \nloggers.\nNetwork security: This includes all IDS, firewalls and other types of event \n• \nloggers.\nNetwork performance and diagnosis: These are for monitoring all network \n• \nperformance activities.\nNetworking links: To monitor the wiring in a network.\n• \nDynamic IP and DNS event logger.\n• \nRemote control and file sharing applications event logger.\n• \nFile transfer tools.\n• \n7.9.2  Type of Data Gathered\nBecause of the large number of events that take place in a computer system, the choice \nof what event to monitor can be difficult. Most event loggers are preset to monitor \nevents based on the set conditions. For example, for workstations and servers, the \nmonitor observes system performance, including CPU performance, memory usage, \ndisk usage, applications, system, security, DNS Server, Directory Service, and File \nReplication Service. In addition, the monitor may also receive syslog messages from \nother computers, routers, and firewalls on a network. In a network environment, the \nlogger may generate notifications that include e-mail, network popup, pager, syslog \nforwarding, or broadcast messages, to users or system administrator in real-time fol­\nlowing preset specified criteria. Further, the logger may support real-time registra­\ntion of new logs, edit existing log registrations, and delete log registrations.\n7.9.3  Analyzed Information\nThe purpose of a system monitoring tool is to capture vital system data, analyze it, \nand present it to the user in a timely manner and in a form in which it makes sense. \nThe logged data is then formatted and put into a form that the user can utilize. Sev­\neral of these report formats are as follows:\nAlert is a critical security control that helps in reporting monitored system data \n• \nin real-time. Real-time actually depends on a specified time frame. Time frames \nvary from, say, once a week to a few seconds. Once the alerts are selected and \ncriteria to monitor are set, the alert tools track certain events and warn systems \nadministrators when they occur.\nChart is a graphic object that correlates performance to a selected object within \n• \na time frame. Most modern operating systems have Event Viewer that draws \ncharts of the collected data.\nLog is the opposite of alerting in that it allows the system to capture data in a file \n• \n" }, { "page_number": 186, "text": "168\b\n7  Security Assessment, Analysis, and Assurance\nand save it for later viewing and analysis. However, alerting generates a signal that it \nsends to the administrator based on the alert time intervals. Log information may also \nbe used in a chart. 
Again most modern operating systems have Log View tools.\nReport is a more detailed and inclusive form of system logs. Log Reports provide \n• \nstatistics about the system’s resources and how each of the selected system \nresource is being used and by whom. This information also includes how many \nprocesses are using each resource, who owns the process, and when he or she is \nusing the resource. The timing of the generation of the report can be set and the \nrecipients of the report can also be listed.\n7.9.4  Auditing\nAuditing is another tool in the security assessment and assurance of a computer sys­\ntem and network. Unlike monitoring, auditing is more durable and not ongoing, and \ntherefore, it is expensive and time consuming. Like monitoring, auditing measures the \nsystem against a predefined set of criteria, noting any changes that occur. The criteria \nare chosen in such a way that changes should indicate possible security breaches.\nA full and comprehensive audit should include the following steps:\nReview all aspects of the system’s stated criteria.\n• \nReview all threats identified.\n• \nChoose a frequency of audits whether daily, weekly, or monthly.\n• \nReview practices to ensure compliance to written guidelines.\n• \n7.10  Products and Services\nA number of products and services are on the market for security assessment and \naudit. Hundreds of companies are competing for a market share with a multitude of \nproducts. These products fall under the following categories:\nAuditing tools\n• \nVulnerability assessment\n• \nPenetration testing tools\n• \nForensics tools\n• \nLog analysis tools\n• \nOther assessment toolkits\n• \nExercises\n  1.\t What is security assessment? Why is it important?\n  2.\t Discuss the necessary steps in analyzing the security state of an enterprise.\n" }, { "page_number": 187, "text": "Additional References\b\n169\n  3.\t What is security assurance? How does it help in enterprise security?\n  4.\t What is security monitoring? Is it essential for enterprise security?\n  5.\t What is security auditing? Why is it necessary for system security?\n  6.\t What are the differences between security monitoring and auditing? Which is \nbetter?\n  7.\t What is risk? What is the purpose of calculating risk when analyzing ­security?\n  8.\t Give two ways in which risk can be calculated. Which is better?\n  9.\t What is social engineering? Why do security experts worry about social \n­engineering? What is the best way to deal with social engineering?\n10.\t Humanware is a cause of security threat. Discuss why this is so.\nAdvanced Exercises\n1.\t Discuss any security surveillance system.\n2.\t Discuss a good security auditing system.\n3.\t Compare or discuss the differences between any two security systems.\n4.\t Discuss human error or human factors as a major security threat.\n5.\t What is the best way to deal with the security threat due to human factors?\nReferences\n\t 1.\t Jamsa, Kris. Hacker Proof: The Ultimate Guide to Network Security. Second Edition. Albany, \nNY: Onword Press, 2002.\n\t 2.\t Holden, Greg. Guide to Firewalls and Network Security: Intrusion Detection and VPNs. \n­Boston, MA: Delmar Thomson Learning, 2004.\n\t 3.\t Kaeo, Merike. Designing Network Security: A Practical Guide to Creating Secure Network \nInfrastructure. Indianapolis, IN: Macmillan Technical Publishing, 1999.\n\t 4.\t Guidelines for the development of security plans for classified computer systems. 
http://cio.\ndoe.gov/ITReform/sqse/download/secplngd.doc.\n\t 5.\t Ross, Ron. The Development of Standardized Certification and Accreditation Guidelines and \nProvider Organizations. http://csrc.nist.gov/sec-cert/CA-workshop-fiac2002-bw.pdf.\n\t 6.\t Kizza, Joseph Migga. Ethical and Social Issues in the Information Age. Second Edition. New \nYork, Springer, 2002.\n\t 7.\t Bauer. Mich. Paranoid Penguin: Practical Threat Analysis and Risk Management, Linux \n­Journal, 93. March, 2003.\nAdditional References\n\t 1.\t Security architecture and patterns, KPMG, http://www.issa-oc.org/html/1.\n\t 2.\t Threat Analysis and Vulnerability Assessments. http://www.primatech.com/consulting/\nservices/threat_analysis_and_vulnerability_assessments.htm.\n" }, { "page_number": 188, "text": "Part III\nDealing with Network \nSecurity Challenges\n" }, { "page_number": 189, "text": "J.M. Kizza, A Guide to Computer Network Security, Computer Communications and \n­Networks, DOI 10.1007/978-1-84800-917-2_8, © Springer-Verlag London Limited 2009\n\b\n173\nChapter 8\nDisaster Management\n8.1  Introduction\nWebster’s Dictionary defines disaster as a sudden misfortune, a catastrophe that \naffects society [1]. It is the effect of a hazardous event caused by either man or \nnature. Man-made disasters are those disasters that involve a human element like \nintent, error, or negligence. Natural disasters are those caused by the forces of \nnature like hurricanes, tornados, and tsunamis. Disasters, natural or man-made, \nmay cause great devastation to society and the environment. For example, the \n2006 tsunami in Southeast Asia caused both huge human losses and environment \ndestruction. The effects of a disaster may be short lived, or long lasting. Most \ndisasters, both man-made and natural, have long lasting effects. To mitigate disas­\nter effects on society and businesses, disaster management skills are needed.\nIn information technology, disaster situations are big security problems to the \nEnterprise information systems that must be handled with skills just like other secu­\nrity problems we have discussed so far in this book. To understand how this is a very \nbig security problem for a modern society, one has to understand the working of a \nmodern business entity. Modern businesses have moved away from the typewriter \nand manila folders to desktops and large databases to process and store business \nday-to-day data and transactions. This growing use of computers in businesses, the \never increasing speed of data transmission and the forces of globalization all have \nforced businesses into a new digitized global corner that demands high-speed data \naccess to meet the demands of the technology savvy customers in a highly competi­\ntive global environment. In response, high volume and high-speed databases have \nbeen set up.\nFor the business to remain competitive and probably ahead of the competitors, \nall business systems must remain online and in service 24/ 7. No modern business \ncan afford a disaster to happen to its online systems. Failing to achieve that level \nof service would mean the failure of the business. Thousands of businesses close \nor lose millions of dollars every year depending on the level of attention they give \nto their online systems and failing to protect them against disasters like fire, power \noutage, theft, equipment failure, viruses, hackers, and human errors. No busi­\nness can succeed in today’s environment without plans to deal with disasters. 
The ­\n" }, { "page_number": 190, "text": "174\b\n8  Disaster Management\nSeptember 11, 2002 attack on New York financial district was an eye-opener to \nmany businesses to be prepared for disasters. For a quick recovery of these enter­\nprises, good disaster management principles are needed.\nAlso as company databases grew in size and complexity and the demand for their \nonline fast access grew, the need for the protection of business-critical and irre­\nplaceable company data is also growing in tandem. These developments are forcing \nbusiness information system managers to focus on disaster prevention, response, \nand recovery. The importance of disaster planning and recovery can be born by the \nfact that nninety-three percent of the companies that did not have their data backed \nup properly when a disaster struck went out of business, according to DataSafe, \nInc., one of the leading companies involved with data backup services [2].\nThe goal of chapter, therefore, is to treat disaster management as a major infor­\nmation systems’ security problem and start a discussion of ways, tools, and best \npractices of dealing with disasters and mitigating their long-term effects on business \ninformation systems. We will break the discussion into three parts: disaster preven­\ntion, response, and recovery.\n8.1.1  Categories of Disasters\nBefore we do that, however, let us look at the categories of disasters that can affect \nbusiness information systems [3].\n8.1.1.1  Natural Disasters – Due to Forces of Nature\nTsunami\n• \nTornados\n• \nHurricanes (same as Tsunami)\n• \nCyclone (same as Tsunami)\n• \nFlood\n• \nSnowstorm\n• \nLandslides\n• \nDrought\n• \nEarthquake\n• \nElectrical storms\n• \nSnowslides\n• \nFire\n• \n8.1.1.2  Human-Caused Disasters\nTerrorism\n• \nSabotage\n• \n" }, { "page_number": 191, "text": "Theft\n• \nViruses\n• \nWorms\n• \nHostile code\n• \nWar\n• \nTheft\n• \nArson\n• \nLoss of\n• \nPower supply (both electric and gas). This can result in a large number of \n• \nrelated failures like cooling system and machines.\nCommunications links\n• \nData\n• \nCyber crime (many types).\n• \n8.2  Disaster Prevention\nDisaster prevention is a proactive process consisting of a set of control strategies \nto ensure that a disaster does not happen. The controls may be people, mechani­\ncal, or digital sensing devices. Times and technology have improved both disaster \nprevention and recovery strategies. The elements of effective disaster prevention \nare the early detection of abnormal conditions and notification of persons capable \nof dealing with the pending crisis, for example, if you have a temperature detector \nto report on an air-conditioning failure as soon as the temperature starts to rise or a \nfire detector to gracefully power-down all computing equipment before fire systems \ndischarge. By detecting and treating minor problems early, major problems can be \navoided.\nEvery system, big and small, needs a disaster prevention plan because the cost \nof not having one is overwhelming. According to Intra Computer, Inc., a disaster \nprevention and recovery company, in one of its surveys, 16% of those responding \nto the survey reported that a system-stopping event caused by environmental condi­\ntions occurred at least six times annually, and 12% of respondents put the minimum \nestimated dollar cost of each of these incidents at over $50,000 [4]. Also accord­\ning to DataSafe, Inc. 
[2], thousands of businesses lose millions of dollars' worth of information due to disasters such as fire, power outages, theft, equipment failure, and even simple operator mistakes.
In past years, system disaster prevention depended entirely upon an on-site person's ability to detect and diagnose irregular conditions based on experience: knowledge of, and the ability to analyze, conditions created by unusual events such as high temperature, the presence of smoke or water, and interruptions in power to equipment, any of which could lead to the corruption or destruction of the enterprise information system's resources, including active data files.
Technology has, however, through intelligent monitoring devices, helped to improve the process of disaster prevention. Monitoring devices nowadays are capable of responding quickly to the unusual and irregular conditions caused by a disaster event. In the case of an enterprise information system, the monitoring devices watch a variety of conditions, including [4]:
• Temperature
• Humidity
• Water
• Smoke/fire
• Air flow
• AC power quality
• UPS AC/battery mode
• Personnel access security
• Halon triggering state
• State of in-place security/alarm systems
• Hidden conditions undetectable by security personnel:
  • In air-conditioning ducts
  • Under raised floors
  • Inside computer chassis
During the monitoring process, if and when an event occurs that meets any one of the conditions being monitored, an immediate action is triggered. The choice of action is also predetermined by the system manager and is selected from a long list that includes [4]:
• Activating local or remote alarm indicators such as sirens, bells, light signals, and synthesized voice.
• Taking over control of the affected resource to isolate it, cut it off from the supply line, or maintain a declining supply line. The supply line may be power, water, fuel, or a number of other things.
• Interfacing with, or cutting off from, the existing security system, as dictated by the event.
• Sending a signal to designated personnel. Among the designated personnel are [4]:
  • System users
  • Site managers
  • Security personnel
  • Maintenance personnel
  • Service bureaus and alarm company central offices
  • Authorities at remote sites
• Gracefully degrading the system by terminating normal operations, closing and protecting data files, and disconnecting AC power from protected equipment.
After one or more of the actions above have been taken, the system will then wait for a response. The response usually comes from the human component. Let us discuss this in the next section.
8.3  Disaster Response
As we pointed out earlier in this chapter, the rapid development of computer and information technology and society's ever growing dependence on computers and computer technology have created an environment in which business-critical information and irreplaceable business data and transactions are totally dependent on, and stored on, computer systems. This makes the response to a disaster vital and of critical importance. Disaster response is a set of strategies used to respond to both the short-term and long-term needs of the affected community.
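The condition list and action list of the previous section amount to a sense-and-act loop. The following is a minimal sketch of such a disaster prevention monitor; the sensor readings, thresholds, and actions are made-up placeholders, since a real DPS would poll hardware probes and drive alarms, power controls, and notification gateways.

import time

# Illustrative thresholds for a few of the monitored conditions listed above.
THRESHOLDS = {
    "temperature_c": 35.0,    # machine-room temperature
    "humidity_pct": 80.0,
    "smoke_level": 0.1,
}

def read_sensors():
    # Placeholder: a real disaster prevention system would poll hardware
    # probes (temperature, humidity, smoke, power quality, and so on).
    return {"temperature_c": 22.5, "humidity_pct": 45.0, "smoke_level": 0.0}

def trigger_actions(condition, value):
    # Predetermined actions chosen by the system manager for this condition.
    print("ALARM: {} = {}".format(condition, value))
    print("Notifying site manager and security personnel ...")
    if condition == "smoke_level":
        print("Gracefully closing data files and powering down equipment ...")

def monitor(interval=10.0):
    # The sense-and-act loop: check every reading against its threshold and
    # trigger the predetermined actions when a condition is met.
    while True:
        for condition, value in read_sensors().items():
            if value > THRESHOLDS[condition]:
                trigger_actions(condition, value)
        time.sleep(interval)

# monitor()  # runs until interrupted; the system then waits for the human response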
In dealing with business information system disasters, the strategies involve a quick and timely response to the Disaster Prevention System (DPS) signals with directed action. The essential steps in disaster response include:
• restoring services and
• identifying high-risk system resources.
Several factors govern a quick disaster response. According to Walter Guerry Green [5], they include:
• The nature and extent of the destruction, or of the risk if the disaster is still pending. This is based on either a prior or a quick assessment of the situation.
• The environment of the disaster. The environment determines the kind of response needed. Take a quick inventory of what is in the room or rooms where the systems are, and note how the chosen action to meet the needs is going to be carried out successfully.
• The available resources. The degree and effectiveness of the response to the disaster will depend on the resources available on the ground that can be used to increase and enhance the success rate of the chosen response.
• The time available to carry out the chosen response action. Time is so important in the operation that it determines how much action can be taken and how much effort is needed to control the disaster.
• An understanding of the policy in effect. Every chosen action must fall within the jurisdiction of company policy.
The degree to which these factors are observed determines the effectiveness of the disaster response and of the recovery efforts that follow.
8.4  Disaster Recovery
The value of a good disaster recovery plan is its ability to react to the threat swiftly and efficiently. In order for this to happen, there must be an informed staff, disaster suppliers, and planned procedures; these three make up the disaster recovery plan. Every system manager must be aware of, and confident in, the system's disaster recovery plan. The plan must not only be on the books; it must be rehearsed several times a year. For example, since the September 11, 2001 attack on the World Trade Center, companies have learned the value of off-site storage of data, and rehearsed procedures for retrieving archived data media from off-site facilities are now common. There are several outsourced options for disaster recovery in addition to the in-house approach we have discussed so far. These include maintenance contracts and services that offer anything from routine planned disaster testing to full extended-warranty services; stand-by services that handle only the storage and the rapid recovery and delivery of your data; and distributed architectures, offered by companies that sell you software that stores your data on their network so that you can move it back and forth at a moment's notice. All of these, when used effectively, help business to continue as usual in the hours during and immediately following a disaster.
8.4.1  Planning for a Disaster Recovery
Disaster recovery planning is a delicate process that must be handled with care. It involves risk assessment and developing, documenting, implementing, testing, and maintaining a disaster recovery plan [6]. For starters, the plan must be the work of a team of chosen people who form a committee, the Disaster Recovery Committee. The committee should include at least one person from management, information technology, records management, and building maintenance.
This committee is \ncharged with deciding on the what, how, when, and who are needed to provide a \ngood solid recovery that your company will be proud of. Such a plan must sustain \ncritical business functions. The planning process, therefore, must start with steps \nthat identify and document those functions and other key elements in the recovery \nprocess. According to [7], these steps include\nidentifying and prioritizing the disaster,\n• \nidentifying and prioritizing business-critical systems and functions,\n• \nidentifying business-critical resources and performing impact analysis,\n• \ndeveloping a notification plan,\n• \ndeveloping a damage assessment plan,\n• \ndesignating a disaster recovery site,\n• \ndeveloping a plan to recover critical functions at the disaster recovery site,\n• \nidentifying and documenting security controls, and\n• \ndesignating responsibilities.\n• \nBecause disasters do not happen at a particular time in a given month of a known \nyear, they are unplanned and unpredictable. This makes disaster recovery planning \nan ongoing, dynamic process that continues throughout the information system’s \nlifecycle.\n8.4.1.1  Disaster Recovery Committee\nThis committee is responsible for developing the disaster recovery plan. The com­\nmittee must represent every function or unit of the business to ensure that all essen­\ntial business information and resources are not left out. Before the committee starts \n" }, { "page_number": 195, "text": "its job, members must all be trained in disaster recovery. Each member of this com­\nmittee is assigned responsibilities for key activities identified and duties outlined \nwithin their departments and as defined within the Disaster Recovery Plan. The \ncommittee is also responsible for bringing awareness to the rest of the employees.\n8.4.2  Procedures of Recovery\nThe procedures followed in disaster management and recovery are systematic steps \ntaken in order to mitigate the damage due to the disaster. These steps are followed \nbased on the rankings (above) of the nature of the disaster and the critical value of \nthe items involved.\n8.4.2.1  Identifying and Prioritizing the Disaster\nThese may be put in three levels: low, medium, and high\nLow-level disasters may be local accidents like\n• \nHuman errors,\n• \nHigh temperature in room, and\n• \nServer failure.\n• \nMedium-level disasters may be less local including\n• \nVirus attack,\n• \nLong power failures – may be a day long, and\n• \nServer crush (Web, mail).\n• \nHigh-level disasters – this level includes the most devastating disasters like\n• \nEarthquakes,\n• \nHurricanes,\n• \nBig fire, and\n• \nTerrorism.\n• \n8.4.2.2  Identifying Critical Resources\nThe ranking of critical assets may be based on the dollar amount spent on acquiring the \nitem or on the utility of the item. 
Some examples of these critical assets include [2]:
• Servers, workstations, and peripherals,
• Applications and data,
• Media and output,
• Telecommunications connections,
• Physical infrastructure (e.g., electrical power, environmental controls), and
• Personnel.
Rank them in three levels:
• Low level: these include items such as
  • Printer paper, printer cartridges, and media
  • Pens, chairs, etc.
• Medium level: these include relatively costly items such as
  • All peripherals
  • Switches
  • Workstations
  • Physical infrastructure
• High level: these include the most valued and high-ticket items such as
  • Servers
  • Disks (RAID) and application data
  • Workstations
  • Personnel
8.4.2.3  Developing a Notification Plan
This requires identification of all those who are to be informed. It can be done based on the previously defined levels of disaster and levels of critical resources. The plan can be represented in matrix form, as shown below.

                            Low-level disaster    Medium-level disaster      High-level disaster
Level 1 – Critical assets   System Adm.           System Adm.                System Adm., management, law enforcement, the media
Level 2 – Critical assets   System Adm.           System Adm., management    System Adm., management, law enforcement, the media
Level 3 – Critical assets   System Adm.           System Adm., management    System Adm., management, law enforcement, the media

For each cell in the matrix, choose an acceptable method of transmitting the information. Determine how much information needs to be transmitted and when it should be transmitted. For each group of people to be informed, choose a representative person; for example, for management, should the Vice President for Information or the Chief Executive Officer be informed?
Keep in mind that prompt notification can reduce the disaster's effects on the information system because it gives you time to take mitigating actions.
8.4.2.4  Training of Employees
Since disaster handling is a continuous process in the life cycle of an enterprise system, training all employees about possible disasters and about what each one has to do is desirable. The training of the selected people on the Disaster Recovery Committee, however, is critical. Plan ahead of time how this training is to be carried out. There are several ways of doing this in a work environment, depending on the number of people to be trained:
• Professional seminars for all those on the Disaster Recovery Committee,
• Special in-house education programs for all those on the Disaster Recovery Committee, heads of departments, and everybody else.
Choosing the type of training also requires determining who will conduct the training and who is responsible for arranging it. The training may be delivered by a suitably qualified person in-house, by a vendor brought onto the company premises, or by sending people to vendor-designated sites.
8.4.2.5  Priorities for the Restoration of Essential Functions
One of the most critical and vital pieces of information in disaster planning is the priority order in which critical resources are brought back online.
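Both this restoration order and the notification matrix of Sect. 8.4.2.3 are simple enough to encode so that the plan can be consulted, or exercised in drills, automatically. The following sketch uses made-up entries mirroring the matrix above; the asset names, level labels, and contact lists are illustrative only.

# Who to notify, keyed by (critical-asset level, disaster level); the entries
# mirror the notification matrix in Sect. 8.4.2.3.
NOTIFY = {
    ("level1", "low"):    ["system administrator"],
    ("level1", "medium"): ["system administrator"],
    ("level1", "high"):   ["system administrator", "management",
                           "law enforcement", "the media"],
    ("level2", "low"):    ["system administrator"],
    ("level2", "medium"): ["system administrator", "management"],
    ("level2", "high"):   ["system administrator", "management",
                           "law enforcement", "the media"],
    ("level3", "low"):    ["system administrator"],
    ("level3", "medium"): ["system administrator", "management"],
    ("level3", "high"):   ["system administrator", "management",
                           "law enforcement", "the media"],
}

# Restoration priority chosen by the Disaster Recovery Committee:
# the highest-value assets come back online first.
RESTORATION_ORDER = ["servers", "application data", "workstations", "peripherals"]

def who_to_notify(asset_level, disaster_level):
    return NOTIFY.get((asset_level, disaster_level), ["system administrator"])

print(who_to_notify("level2", "high"))
print("Restore in this order:", ", ".join(RESTORATION_ORDER))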
That restoration order should be followed because it was chosen by the Disaster Recovery Committee for a reason.
8.5  Make Your Business Disaster Ready
In the introduction, we talked about the importance of a recovery plan for a business. In fact, the statistics we quoted, indicating that almost ninety percent of all companies that did not have a disaster recovery plan did not survive, are indicative of the importance of a business disaster recovery plan. Also in the introduction, we indicated that disaster planning is an ongoing process that never ends. This means that for your company to remain in business and remain competitive, the disaster recovery plan must be in place and must keep changing to accommodate new and developing technologies. Among the things that help you keep your evolving business disaster plan fresh are being disaster ready all the time; running periodic drills that check the storage media to make sure they are ready for the big one; for those working with databases, maintaining a base-function script that exercises the capability of your interfaces; and periodically doing a risk assessment for a disaster.
8.5.1  Always Be Ready for a Disaster
Disasters can happen at any time, and while some customers will understand, the majority will not wait for you to learn the tricks of handling a disaster; they will move to your competitor. Always be prepared for the big one by doing the following [3]:
• Periodically check and test your backup and recovery procedures thoroughly to ensure that you have the required backups to recover from various failures, that your procedures are clearly defined and documented, and that they can be executed smoothly and quickly by any qualified operator.
• Always secure, keep, and periodically check and review all system logs and transaction logs. This will help you to backtrack if you have to and to find anything you might have missed.
8.5.2  Always Backup Media
There is no better way to deal with a disaster than having a backup. We have been told since we were kids to keep a copy of everything important, and you are doing the same thing here. In a computing system environment, also consider:
• a schedule for revisiting the saved materials,
• whether to store backups on site but in a different place, or at a different location altogether, and
• a chart of which data needs to be stored, where it is to be stored, and for how long.
8.5.3  Risk Assessment
Use a matrix model in which all the types of disasters that can happen to your system form the rows and all the system resources that you think have value form the columns. Each cell of the matrix then records the potential risk to the organization should the disaster in that row strike the resource in that column. This matrix should be drawn up by the Disaster Planning Committee. There are tools on the market to help you achieve this, including COBRA [3].
8.6  Resources for Disaster Planning and Recovery
As businesses begin to see disasters as a huge security problem for the day-to-day running of the business, there is going to be a high demand for tools and services from vendors to manage disasters. These resources fall into two categories: public agency-based and vendor-based resources.
Also whether public or private-based \n" }, { "page_number": 199, "text": "resources, these resources can be obtained quickly because they are local or they \nmay take time because they are some distance off. Always start with local resources \nwhen you need them.\n8.6.1  Local Disaster Resources\nThese resources can be freely obtained locally:\nPolice\n• \nCivil defense\n• \nFire department\n• \nAmbulatory services\n• \nThese resources can be obtained on the business premises:\nPaper\n• \nFire extinguisher\n• \nSmall capacity tapes and disks\n• \nThese resources can be obtained from vendors (online or offline):\nSpecialized computer equipment\n• \nSpecialized software tools like COBRA.\n• \nExercises\n1.\t List as many of the emergency agencies in your community.\n2.\t Of these listed in (1) above which are dealing with information security.\n3.\t We pointed out that the development of a good disaster recovery plan requires \nrisk assessment. Design a matrix for the risk assessment of your security lab.\n4.\t Using your security lab as your fictitious company, develop a disaster plan for \nthe lab.\n5.\t Study vendor tools in disaster recovery. Write about five of them, listing their \nmerits and costs involved.\n6.\t Study and develop a list of companies offering disaster recovery services in your \narea or state. Write about five, listing their merits and fees charged.\n7.\t Based on your plan in (4) above, develop a rescue plan for the lab by developing \na list of tools needed by the lab for disaster recovery, when needed.\nAdvanced Exercises – Case Studies\n1.\t Check to see if your university has a disaster plan (http:// http://palimpsest.\nstanford.edu/bytopic/disasters/plans/). Prepare a disaster plan for your univer­\nsity. Note that it should have the major headings as follows: (1) Introduction, \nAdvanced Exercises – Case Studies\b\n183\n" }, { "page_number": 200, "text": "184\b\n8  Disaster Management\n(2) Emergency Procedures, (3) Response Plan, (4) Recovery Procedures, (5) \nOther Emergencies, (6) Local Supplies.\n2.\t Form a committee, whose size depends on the size of your college. Empower the \ncommittee to develop a disaster recovery plan for the college.\n3.\t Consider the following company. HHR is a company involved with retail adver­\ntising. Major national chains use it to host their online catalogs. Every day, HHR \ngets about 5000 hits. It has four departments (Human Resources, Accounting, \nAdvertising, IT), and employs about 2000 people nationally. The company is just \ngetting started with disaster recovery and they have hired you to do it for them. \nWrite a 2-page document to the CEO of HHR selling your services.\n4.\t Draw a plan of how you will go about setting up a Disaster Recovery Commit­\ntee, indicating who will be in it and why. Also send a memo to the committee \nmembers telling them about the first organizing meeting and list out the items to \nbe discussed by the committee.\n5.\t Develop a Disaster Recovery Plan for HHR.\nReferences\n1.\t John Gage Allee (Ed.). Webster’s Dictinary. 1998. Literary Press, 1958.\n2.\t DataSafe, Inc., What is Disaster Planning?, http://www.amarillodatasafe.com/abstracts.htm.\n3.\t The Disaster Recovery Guide, 2002. http://www.disaster-recovery-guide.com/risk.htm.\n4.\t Intra Computer, Inc., Elements of an Effective Disaster Prevention System, http://www.\nintracomp.com/page5.html.\n5.\t Walter Guerry Green. 
Command and Control of Disaster Operations, Universal Publishers, \nInc/uPUBLISH.com, 2001.\n6.\t Erbschloe, Michael. Guide to Disaster Recovery. Course Technology, Boston, 2003.\n7.\t USAID. Disaster Recovery Planning Procedures and Guidelines. http://www.usaid.gov/\npolicy/ads/500/545mal.pdf.\n" }, { "page_number": 201, "text": "J.M. Kizza, A Guide to Computer Network Security, Computer Communications and \n­Networks, DOI 10.1007/978-1-84800-917-2_9, © Springer-Verlag London Limited 2009\n\b\n185\nChapter 9\nAccess Control and Authorization\n9.1  Definitions\nAccess control is a process to determine “Who does what to what,” based on a \npolicy.\nOne of the system administrator’s biggest problems, which can soon turn into a \nnightmare if it is not well handled, is controlling access of who gets in and out of the \nsystem and who uses what resources, when, and in what amounts. Access control \nis restricting this access to a system or system resources based on something other \nthan the identity of the user. For example, we can allow or deny access to a system’s \nresources based on the name or address of the machine requesting a document.\nAccess control is one of the major cornerstones of system security. It is essential \nto determine how access control protection can be provided to each of the system \nresources. To do this, a good access control and access protection policy is needed. \nAccording to Raymond Panko, such a policy has benefits including the following [1]:\nIt focuses the organization’s attention on security issues, and probably, this \n• \nattention results in resource allocation toward system security.\nIt helps in configuring appropriate security for each system resource based on \n• \nrole and importance in the system.\nIt allows system auditing and testing.\n• \nAs cyberspace expands and the forces of globalization push e-commerce to the \nforefront of business and commercial transactions, the need for secure transactions \nhas propelled access control to a position among the top security requirements, which \nalso include authorization and authentication. In this chapter, we are going to discuss \naccess control and authorization; authentication will be discussed in the next chapter.\n9.2  Access Rights\nTo provide authorization, and later as we will see authentication, system \nadministrators must manage a large number of system user accounts and permis­\nsions associated with those accounts. The permissions control user access to each \n" }, { "page_number": 202, "text": "186\b\n9  Access Control and Authorization\nsystem resource. So, user A who wants to access resource R must have permission \nto access that resource based on any one of the following modes: read, write, \nmodify, update, append, and delete. Access control regimes and programs, through \nvalidation of passwords and access mode permissions, let system users get access \nto the needed system resources in a specified access mode.\nAccess control consists of four elements: subjects, objects, operations, and a ref­\nerence monitor. In the normal operation, seen in Fig. 9.1, the subject, for example, \na user, initiates an access request for a specified system resource, usually a passive \nobject in the system such as a Web resource. The request goes to the reference \nmonitor. The job of the reference monitor is to check on the hierarchy of rules that \nspecify certain restrictions. A set of such rules is called an access control list (ACL). 
The access control hierarchy is based on the URL path for Web access or on the file path for file access, such as in a directory. When a request for access is made, the monitor or server goes through each ACL rule in turn, continuing until it either encounters a rule that prevents it from continuing, which results in the request being rejected, or reaches the last rule for that resource, which results in the access right being granted.
Subjects are system users and groups of users, while objects are files and resources such as memory, printers, and scanners, including computers in a network. An access operation comes in many forms, including Web access, server access, memory access, and method calls. Whenever a subject requests access to an object, an access mode must be specified. There are two access modes: observe and alter. In the observe mode, the subject may only look at the content of the object; in the alter mode, the subject may change the content of the object. The observe mode is the typical read, in which a client process may request a server to read from a file.
Access rights refer to a user's ability to access a system resource. There are four access rights: execute, read, append, and write. The reader is cautioned not to confuse access rights with access modes. The difference lies in the fact that an access right can be exercised within either access mode. Figure 9.2 shows how this can be done. Note that, according to the last column in Fig. 9.2, there are X marks in both rows because in order to write, one must observe before altering. This prevents the operating system from opening the file twice, once for the read and again for the write.
Access rights can be set individually on each system resource for each individual user and group. It is possible for a user to belong to several groups and enjoy those groups' rights. However, user access rights always take precedence over group access rights, regardless of where the group rights are applied. If there are inherited group access rights, they take precedence over user default access rights. A user has default rights when the user has no assigned individual or group rights from the root down to the folder in question. In the cascading application of access rights, user access rights that are closest to the resource being checked by the monitor take precedence over access rights assignments that are farther away.
Fig. 9.1  Access control administration: a subject's access request passes through the reference monitor, which grants or denies access to the object
We have so far discussed access rights to resources. The question that still remains to be answered is: who sets these rights? The owner of a resource sets the access rights to that resource. In a global system, the operating system owns all system resources and therefore sets the access rights to those resources. However, the operating system allows folder and file owners to set and revoke access rights.
9.2.1  Access Control Techniques and Technologies
Because a system, especially a network system, may have thousands of users and resources, the management of access rights for every user over every object can become complex.
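Before turning to techniques that scale better, the following is a minimal sketch of the rule-by-rule ACL walk described at the start of this section. It is not any particular product's API; the rule format, the wildcard convention, and the grant-at-end-of-list behavior are illustrative choices mirroring the description above.

from dataclasses import dataclass

@dataclass
class Rule:
    subject: str     # a user or group name; "*" matches any subject
    operation: str   # e.g. "read", "write"; "*" matches any operation
    effect: str      # "allow" or "deny"

# Illustrative ACL attached to one object (a file path or URL path).
acl_for_report = [
    Rule(subject="guests", operation="*", effect="deny"),
    Rule(subject="staff", operation="read", effect="allow"),
    Rule(subject="admin", operation="*", effect="allow"),
]

def check_access(acl, subject, operation):
    # Walk the rules in order. A matching deny rule stops the walk and the
    # request is rejected; a matching allow rule grants it; if the last rule
    # is reached without a block, the request is granted, as described above.
    for rule in acl:
        if rule.subject in (subject, "*") and rule.operation in (operation, "*"):
            if rule.effect == "deny":
                return False
            return True
    return True

print(check_access(acl_for_report, "guests", "read"))   # False
print(check_access(acl_for_report, "staff", "read"))    # True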
Several control techniques and technologies have been devel­\noped to deal with this problem; they include access control matrix, capability tables, \naccess control lists, role-based access control, rule-based access control, restricted \ninterfaces, and content-dependent access control.\nMany of the techniques and technologies we are going to discuss below are new \nin response to the growth of cyberspace and the widespread use of networking. These \nnew techniques and technologies have necessitated new approaches to system access \ncontrol. For a long time, access control was used with user- or group-based access \ncontrol lists, normally based in operating systems. However, with Web-based network \napplications, this approach is no longer flexible enough because it does not scale in \nthe new environment. Thus, most Web-based systems employ newer techniques and \ntechnologies such as role-based and rule-based access control, where access rights \nare based on specific user attributes such as their role, rank, or organization unit.\n9.2.1.1  Access Control Matrix\nAll the information needed for access control administration can be put into a matrix \nwith rows representing the subjects or groups of subjects and columns represent­\ning the objects. The access that the subject or a group of subjects is permitted to \nthe object is shown in the body of the matrix. For example, in the matrix shown in \nFig. 9.2, user A has permission to write in file R4. One feature of the access con­\ntrol matrix is its sparseness. Because the matrix is so sparse, storage consideration \nbecomes an issue, and it is better to store the matrix as a list.\nFig. 9.2  Access Modes and Access Rights.1\nGollman, Dieter. Computer Security. New York, John Wiley & Sons, 2000.\nexecute\nappend\nread\nwrite\nobserve\nX\nX\nalter\nX\nX\n9.2  Access Rights\b\n187\n" }, { "page_number": 204, "text": "188\b\n9  Access Control and Authorization\n9.2.1.2  Access Control Lists\nIn the access control lists (ACLs), groups with access rights to an object are stored \nin association to the object. If you look at the access matrix shown in Fig. 9.2, \neach object has a list of access rights associated with it. In this case, each object is \nassociated with all the access rights in the column. For example, the ACL for the \nmatrix shown in Fig. 9.3 is shown in Fig. 9.4.\nACLs are very fitting for operating systems as they manage access to objects [2].\n9.2.1.3  Access Control Capability\nA capability specifies that “the subject may do operation O on object X.”\nUnlike the ACLs, where the storage of access rights between objects and subjects \nis based on columns in the access control matrix, capabilities access control storage \nis based on the rows. This means that every subject is given a capability, a forgery-\nproof token that specifies the subject’s access rights [2].\nFrom the access matrix shown in Fig. 9.3, we can construct a capability as shown \nin Fig. 9.5.\nObjects → \nSubjects/groups\n \nV\nR1\nR2\nR3\nR4\nA\nW\nR\nR\nW\nB\nR\nGroup G1\nW\nGroup G2\nW\nC\nR\nFig. 9.3  Access Matrix\nFig. 9.4  Access Control List (ACL)\nObject\nAccess rights\nSubjects\nR1\nW\nA\nR\nB\nW\nGroup G1\nR2\nR\nA\nW\nGroup G2\nR3\nR\nA\nR4\nR\nA\nR\nC\n" }, { "page_number": 205, "text": "9.2.1.4  Role-Based Access Control\nThe changing size and technology of computer and communication networks are \ncreating complex and challenging problems in the security management of these \nlarge networked systems. 
Such administration is not only becoming complex \nas technology changes and more people join the networks, it is also becoming \nextremely costly and prone to error when it is solely based on access control lists \nfor each user on the system individually.\nSystem security in role-based access control (RBAC) is based on roles assigned \nto each user in an organization. For example, one can take on a role as a chief \nexecutive officer, a chief information officer, or chief security officer. A user may \nbe assigned one or more roles, and each role is assigned one or more privileges \nthat are permitted to users in that role. Access decisions are then based on the roles \nthat individual users have as part of an organization. The process of defining roles \nshould be based on a thorough analysis of how an organization operates and should \ninclude input from a wide spectrum of users in an organization.\nAccess rights are grouped by role name, and the use of resources is restricted to \nindividuals authorized to assume the associated role. A good example to illustrate \nthe role names and system users who may assume more than one role and play \nthose roles while observing an organization’s security policy is the following given \nin the NIST/ITL Bulletin, of December 1995. “Within a hospital system the role of \ndoctor can include operations to perform diagnosis, prescribe medication, and order \nlaboratory tests, and the role of researcher can be limited to gathering anonymous \nclinical information for studies” [3].\nAccordingly, users are granted membership into roles based on their competen­\ncies and responsibilities in the organization. The types of operations that a user is \npermitted to perform in the role he or she assumes are based on that user’s role. \nUser roles are constantly changing as the user changes responsibilities and func­\ntions in the organizations, and these roles can be revoked. Role associations can be \nestablished when new operations are instituted, and old operations can be deleted as \norganizational functions change and evolve. This simplifies the administration and \nmanagement of privileges; roles can be updated without updating the privileges for \nevery user on an individual basis.\nSubject\nObject 1/Access\nObject 2/Access\nObject 3 /Access\nObject 4/Access\nA\nR1/W\nR2/R\nR3/R\nR4/R\nB\nR1/R\nGroup G1\nR1/W\nGroup G2\nR2/W\nC\nR4/R\nFig. 9.5  Access control capability lists\n9.2  Access Rights\b\n189\n" }, { "page_number": 206, "text": "190\b\n9  Access Control and Authorization\nLike other types of access control, RBAC is also based on the concept of least \nprivilege that requires identifying the user’s job functions, determining the mini­\nmum set of privileges required to perform that function, and restricting the user to a \ndomain with those privileges and nothing more. When a user is assigned a role, that \nuser becomes associated with that role, which means that user can perform a certain \nand specific number of privileges in that role. 
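As a concrete illustration of the role-to-privilege mapping just described, the sketch below shows one possible, deliberately simplified encoding of role-based access control using the hospital roles from the NIST/ITL example quoted above: roles map to sets of privileges, users are assigned roles, and an access decision checks only the privileges of the user's current roles. The role, privilege, and user names are made up for illustration.

# Illustrative role -> privilege sets (hypothetical hospital example).
ROLE_PRIVILEGES = {
    "doctor": {"perform_diagnosis", "prescribe_medication", "order_lab_tests"},
    "researcher": {"read_anonymous_clinical_data"},
}

# Users are granted membership into one or more roles.
USER_ROLES = {
    "alice": {"doctor"},
    "bob": {"doctor", "researcher"},
    "carol": {"researcher"},
}

def is_permitted(user, privilege):
    # The access decision looks only at the roles the user currently holds.
    return any(privilege in ROLE_PRIVILEGES.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_permitted("carol", "prescribe_medication"))        # False
print(is_permitted("bob", "read_anonymous_clinical_data"))  # True

# Revoking a role (or redefining its privileges) updates every user holding
# that role in one step, which is what simplifies administration.
USER_ROLES["bob"].discard("researcher")
print(is_permitted("bob", "read_anonymous_clinical_data"))  # False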
Although the role may be associated \nwith many privileges, individual users associated with that role may not be given \nmore privileges than are necessary to perform their jobs.\nAlthough this is a new technology, it is becoming very popular and attracting \nincreasing attention, particularly for commercial applications, because of its poten­\ntial for reducing the complexity and cost of security administration in large net­\nworked applications.\n9.2.1.5  Rule-Based Access Control\nLike other access control regimes, rule-based access control (RBAC), also known \nas policy-based access control (PBAC), is based on the least privileged concept. It \nis also based on policies that can be algorithmically expressed. RBAC is a multi-\npart process, where one process assigns roles to users just like in the role-based \naccess control techniques discussed above. The second process assigns privileges to \nthe assigned roles based on a predefined policy. Another process is used to identify \nand authenticate the users allowed to access the resources.\nIt is based on a set of rules that determine users’ access rights to resources within \nan organization’s system. For example, organizations routinely set policies on the \naccess to the organizations Web sites on the organizations’ Intranet or Internet. \nMany organizations, for example, limit the scope and amount, sometimes the times, \nemployees, based on their ranks and roles, can retrieve from the site. Such limits \nmay be specified based on the number of documents that can be downloaded by an \nemployee during a certain time period and on the limit of which part of the Web site \nsuch an employee can access.\nThe role of ACLs has been diminishing because ACLs are ineffective in \nenforcing policy. When using ACLs to enforce a policy, there is usually no dis­\ntinction between the policy description and the enforcement mechanism (the \npolicy is essentially defined by the set of ACLs associated with all the resources \non the network). Having a policy being implicitly defined by a set of ACLs \nmakes the management of the policy inefficient, error prone, and hardly scalable \nup to large enterprises with large numbers of employees and resources. In par­\nticular, every time an employee leaves a company, or even just changes his/her \nrole within the company, an exhaustive search of all ACLs must be performed on \nall servers, so that user privileges are modified accordingly.\nIn contrast with ACLs, policy-based access control makes a strict distinction \nbetween the formal statement of the policy and its enforcement. It makes rules \nexplicit and instead of concealing them in ACLs, it makes the policy easier to man­\nage and modify. Its advantage is based on the fact that it administers the concept of \n" }, { "page_number": 207, "text": "least privilege justly because each user can be tied to a role which in turn can be tied \nto a well defined list of privileges required for specific tasks in the role. In addition, \nthe roles can be moved around easily and delegated without explicitly de-allocating \na user’s access privileges [4].\n9.2.1.6  Restricted Interfaces\nAs the commercial Internet grows in popularity, more and more organizations and \nindividuals are putting their data into organization and individual databases and \nrestricting access to it. It is estimated that 88% of all cyberspace data is restricted \ndata or what is called hidden data [5].\nFor the user to get access to restricted data, the user has to go via an interface. 
\nAny outside party access to restricted data requires a special access request, which \nmany times requires filling in an online form. The interfaces restrict the amount and \nquality of data that can be retrieved based on filter and retrieval rules. In many cases, \nthe restrictions and filters are instituted by content owners to protect the integrity \nand proprietary aspects of their data. The Web site itself and the browser must work \nin cooperation to overcome the over-restriction of some interfaces. Where this is \nimpossible, hidden data is never retrievable.\n9.2.1.7  Content-Dependent Access Control\nIn content-dependent access control, the decision is based on the value of the \nattribute of the object under consideration. Content-dependent access control is \nvery expensive to administer because it involves a great deal of overhead result­\ning from the need to scan the resource when access is to be determined. The \nhigher the level of granularity, the more expensive it gets. It is also extremely \nlabor intensive.\n9.2.1.8  Other Access Control Techniques and Technologies\nOther access control techniques and technologies include those by the U.S. Depart­\nment of Defense (DoD) that include discretionary access control (DAC), mandatory \naccess control (MAC), context-based access control (CBAC), view-based access \ncontrol (VBAC), and user-based access control (UBAC).\nDAC permits the granting and revoking of access control privileges to be left \nto the discretion of the individual users. A DAC mechanism departs a little bit \nfrom many traditional access control mechanisms where the users do not own the \ninformation to which they are allowed access. In DAC, users own the informa­\ntion and are allowed to grant or revoke access to any of the objects under their \ncontrol.\n9.2  Access Rights\b\n191\n" }, { "page_number": 208, "text": "192\b\n9  Access Control and Authorization\nMandatory access control (MAC), according to DoD, is “a means of restricting \naccess to objects based on the sensitivity (as represented by a label) of the informa­\ntion contained in the objects and the formal authorization (i.e., clearance) of sub­\njects to access information of such sensitivity.” [3].\nContext-based access control (CBAC) makes a decision to allow access to a system \nresource based not only on who the user is, which resource it is, and its content, but also \non its history, which involves the sequence of events that preceded the access attempt.\nView-Based Access Control (VBAC), unlike other notions of access control \nwhich usually relate to tangible objects such as files, directories and printers; \nVBAC takes the system resource itself as a collection of sub-resources, which are \nthe views. This allows all users to access the same resource based on the view they \nhave of the resource. It makes an assumption that the authentication of the source \nhas been done by the authentication module.\nUser-based access control (UBAC), also known as identity-based access control \n(IBAC), is a technique that requires a system administrator to define permissions for \neach user based on the individual’s needs. For a system with many users, this tech­\nnique may become labor intensive because the administrator is supposed to know pre­\ncisely what access each and every user needs and configure and update permissions.\n9.3  Access Control Systems\nIn Section 2.3.1, we briefly discussed system access control as part of the survey of sys­\ntem security services. 
The discussion then was centered on both hardware and software \naccess control regimes. Let us now look at these services in a more detailed form.\n9.3.1  Physical Access Control\nAlthough most accesses to an organization systems are expected to originate from \nremote sites and therefore access the system via the network access points, in a \nlimited number of cases, system access can come from intruders physically gain­\ning access to the system itself, where they can install password cracking programs. \nStudies have shown that a great majority of system break-ins originate from inside \nthe organization. Access to this group of users who have access to the physical \npremises of the system must be appropriate.\n9.3.2  Access Cards\nCards as access control devices have been in use for sometime now. Access cards \nare perhaps the most widely used form of access control system worldwide. ­Initially, \n" }, { "page_number": 209, "text": "cards were used exclusively for visual identification of the bearer. However, with \nadvanced digital technology, cards with magnetic strips and later with embedded \nmicrochips are now very common identification devices. Many companies require \ntheir employees to carry identity cards or identity badges with a photograph of the \ncard holder or a magnetic strip for quick identification. Most hotels now have done \naway with metal keys in favor of magnet stripe keys. Access cards are used in most \ne-commerce transactions, payment systems, and in services such as health and \neducation. These types of identification are also known as electronic keys.\nAccess control systems based on an embedded microprocessor, known as \nsmart cards, have a number of advantages including the ability to do more \nadvanced and sophisticated authentication because of the added processing \npower, storing large quantities of data, usually personal data, and smaller sizes. \nSmart cards also have exceptional reliability and extended life cycle because \nthe chip is usually encased in tamper-resistant materials like stainless steel. The \ncards, in addition, may have built-in unique security identifier numbers called \npersonal identification numbers (PINs) to prevent information falsification and \nimitations.\nA cousin of the smart card is the proximity card. Proximity cards are modern, \nprestigious, and easy-to-use personal identifiers. Like magnetic and smart cards, \nproximity cards also have a great deal of embedded personal information. How­\never, proximity cards have advantages the other cards do not have. They can be \nused in extreme conditions and still last long periods of time. They can also be read \nfrom a distance such as in parking lots where drivers can flash the card toward the \nreader while in a car and the reader still reads the card through the car window \nglass.\n9.3.3  Electronic Surveillance\nElectronic surveillance consists of a number of captures such as video recordings, \nsystem logs, keystroke and application monitors, screen-capture software com­\nmonly known as activity monitors, and network packet sniffers.\nVideo recordings capture the activities at selected access points. Increasingly \nthese video cameras are now connected to computers and actually a Web, a process \ncommonly now referred to as webcam surveillance. Webcam surveillance consists \nof a mounted video camera, sometimes very small and embedded into some object, \ncamera software, and an Internet connection to form a closed-circuit monitoring \nsystem. 
Many of these cameras are now motion-activated, and they record video footage from vantage points at the selected points. For access control, the selected points are system access points. The video footage can be viewed live or stored for later viewing. These captures can also be broadcast over the Internet, transmitted to a dedicated location, or sent by e-mail.
Keystroke monitors are software or hardware products that record every character typed on a keyboard. Software-based keystroke monitors capture the signals that move between keyboard and computer as they are generated by all human–computer interaction activities, including the applications run, chats, and e-mails sent and received. The captures are then sent live to a closed-circuit recording system that stores them in a file for future review or sends them by e-mail to a remote location or user. Trojan horse spyware such as Back Orifice and Netbus are good examples of software-based monitoring tools [6].
Packet sniffers work at the network level to capture network packets as they move between nodes. Depending on the motives for setting them up, they can monitor all packets, selected packets, or node-originating and node-bound traffic. Based on this analysis, they can monitor e-mail messages, Web browser usage, node usage, traffic into a node, the nature of the traffic, and how often a user accesses a particular server, application, or network [6].
9.3.4  Biometrics
Biometric technology, based on human attributes, something you are, aims to confirm a person's identity by scanning a physical characteristic such as a fingerprint, voice, eye pattern, or facial features. Biometrics came into use because we tend to forget or lose the things we know or carry: passwords, keys, and cards. Biometrics has been, and continues to be, a catch-all buzzword for all security control techniques that involve human attributes, and it is probably one of the oldest access control techniques. During the past several years, however, with heightened security concerns, biometric technology has become increasingly popular. The technology, which can be used to permit access to a network or a building, has become an increasingly reliable, convenient, and cost-effective means of security.
Current technology has made biometric access control much more practical than it has ever been in the past. A new generation of low-cost yet accurate fingerprint readers is now available for most mobile applications, so that screening stations can be put up in a few minutes. Although biometrics is one of the security control techniques that have been in use the longest, it does not yet have standards. There is an array of products and services on the market for biometric devices to fit every form of security access control.
Technological advances have resulted in smaller, higher-quality, more accurate, and more reliable devices. Improvements in biometrics are essential because bad biometric security can lull system and network administrators into a false sense of safety. In addition, it can also lock out a legitimate user and admit an intruder.
So, care must be taken when procuring biometric devices.

Before a biometric technique can be used as an access control technique for the system, each user of the system first has his or her biometric data scanned by a biometric reader, processed to extract a few critical features, and then those few features stored in a database as the user's template. When a user requests access to a system resource and that user must be authenticated, the biometric readers verify the user's identity by scanning the physical attribute, such as a fingerprint, again. A match is then sought by checking the fresh scan against the template of the same attribute previously registered and stored in the database.

One of the advantages that has made biometrics increasingly popular is that other methods of access control, such as authentication and encryption, while crucial to network security and providing a secure way to exchange information, are still expensive and difficult to design into a comprehensive security system. Other access control techniques such as passwords, while inexpensive to implement, are easy to forget, easy for unauthorized people to guess if they are simple, and of little practical use if they are too complex.

9.3.4.1  Fingerprint Readers

Fingerprint recognition technology is perhaps one of the oldest biometric technologies. Fingerprint readers have been around for probably hundreds of years. These readers fall into two categories: mice with embedded sensors and standalone units. Mice, the latest 3D imaging development, are threatening the standalone units because they can play a dual role: they can be used on a desktop and also as network authentication stations. This is leading to the bundling of fingerprint recognition devices with smart cards or some other security token.

Although fingerprint technology is improving, with current technology making it possible to make a positive identification in a few seconds, fingerprint identification is susceptible to precision problems. Many fingerprints can result in false positives due to oil and skin problems on the subject's finger. Also, many of the latest fingerprint readers can be defeated by photos of fingerprints and by 3D fingers molded from latent prints, such as prints left on glass and other objects [1].

9.3.4.2  Voice Recognition

Although voice recognition is classed as a biometric that authenticates the user based on who the user is, a voiceprint is really based on something the user does, namely speaking, which is in turn determined by who the user is. Voice recognition has been around for years; however, its real-life application has been slow because of difficulties in deployment. In order for voiceprint technology to work successfully, it must be deployed by first developing a front end to capture the input voice and connecting it to the back-end systems that process the input and do the recognition.

The front end of the voiceprint authentication technology works much the same as other biometric technologies, by creating a digital representation of a person's voice using a set of sophisticated algorithms.
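The enrollment-and-verification cycle described at the start of Section 9.3.4 is common to all of these biometric techniques. The following is a minimal sketch of that cycle, assuming, purely for illustration, that the reader reduces each scan to a fixed-length numeric feature vector and that a simple distance threshold decides a match; production matchers are far more sophisticated.

```python
import math

# Hypothetical template store: user name -> enrolled feature vector.
templates = {}

def enroll(user, feature_vector):
    """Store the few critical features extracted at enrollment as the user's template."""
    templates[user] = feature_vector

def euclidean_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(user, scanned_vector, threshold=0.25):
    """Re-scan at access time and compare against the stored template.
    A distance below the threshold counts as a match; the threshold trades
    false accepts against false rejects."""
    template = templates.get(user)
    if template is None:
        return False                  # user never enrolled
    return euclidean_distance(template, scanned_vector) < threshold

# Example: enroll a user, then verify a slightly noisy re-scan.
enroll("alice", [0.12, 0.87, 0.44, 0.61])
print(verify("alice", [0.13, 0.85, 0.45, 0.60]))   # True  (close enough to the template)
print(verify("alice", [0.90, 0.10, 0.70, 0.20]))   # False (a different person)
```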
The extracted voice attributes are stored in a database, part of the back end, which is prompted to make a match against the user's voice when the online system is accessed.

To set it up initially, each user is required to record and leave his or her voiceprint, which is stored in the system's database and activated whenever the user requests access to the protected facility through a physical system input. The user is then prompted to speak into a computer's microphone to verify his or her identity.

Current systems use two servers to perform these functions. The first server runs the front-end system; the second server stores the database and does the recognition processing on input passed from the front-end server.

Voice recognition is not a safe authentication technique on its own because it can be fooled by recordings of the user's voice.

9.3.4.3  Hand Geometry

Hand geometry is an authentication technology that uses the geometric shape of the hand to identify a user. The technique works by measuring and then analyzing the shape and physical features of a user's hand, such as finger length and width and palm width. Like fingerprint recognition, this technique also uses a reader. To initialize the device, all users' hands are read and measured and the statistics are stored in a database for future recognition. To activate the system, the user places the palm of his or her hand on the surface of the reader. Readers usually have features that guide the user's hand on the surface. Once on the surface, the hand, guided by these features, is properly aligned for the reader to read off the hand's attributes. The reader is hooked to a computer, usually a server, with an application that provides live visual feedback of the top view and the side view of the hand. The hand features are then taken as the defining feature vector of the user's hand and compared with the user features stored in the database.

Although hand geometry is simple, human hands are not unique; therefore, individual hand features are not descriptive enough for proper identification. The technique must be used in conjunction with other methods of authentication.

9.3.4.4  Iris Scan

The human iris is the colored part of the human eye and is far more complex and probably more precise than a human fingerprint; thus, it is a good candidate for authentication. According to Panko, iris authentication is the gold standard of all biometric authentications [1]. Iris scan technology, unlike the retinal scan, does not have a long history. In fact, the idea of using iris patterns for personal identification was first mooted in 1936 by ophthalmologist Frank Burch. By the mid-1980s, the idea was still science fiction, appearing only in James Bond films. The technology came into full use in the 1990s [7].

Iris technology is an authentication technology that shines either regular or infrared light into the eye of the user to scan and analyze the features that exist in the colored tissue surrounding the pupil of the user's eye. Like the previous biometric technologies, iris technology also starts off by taking samples of the user's eye features using a conventional charge-coupled device (CCD) camera or video camera that can work through glasses and contact lenses. The camera scans the tissue around the pupil for features to analyze.
Close to 200 features can be extracted from this tissue sur­\nrounding the pupil and used in the analysis. The tissue gives the appearance of \ndividing the iris in a radial fashion. The most important of these characteristics in \nthe tissue is the trabecular meshwork visible characteristic. Other extracted visible \ncharacteristics include rings, furrows, freckles, and the corona.\nThe first readings are stored in a database. Whenever a user wants access to a \nsecure system, he or she looks in an iris reader. Modern iris readers can read a user’s \neye up to 2 feet away. Verification time is short and it is getting shorter. Currently \nit stands at about 5 s, although the user will need to look into the device only for \na couple moments. Like in other eye scans, precautions must be taken to prevent \na wrong person’s eyes from fooling the system. This is done by varying the light \nshone into the eye and then pupil dilations are recorded.\nThe use of iris scans for authentication is becoming popular, although it is a \nyoung technology. Its potential application areas include law enforcement agencies \nand probably border patrol and airports. There is also potential use in the financial \nsector, especially in banking.\n9.3.5  Event Monitoring\nEvent monitoring is a cousin of electronic monitoring in which the focus is on \nspecific events of interest. Activities of interest can be monitored by video cam­\nera, webcam, digital or serial sensors, or a human eye. All products we discussed \nin Sections 9.3.3 and 9.3.4.2 can be used to capture screenshots, monitor Internet \nactivity, and report a computer’s use, keystroke by keystroke, and human voice, \nincluding human movement. The activities recorded based on selected events \ncan be stored, broadcast on the Internet, or sent by e-mail to a selected remote \nlocation or user.\n9.4  Authorization\nThis is the determination of whether a user has permission to access, read, modify, \ninsert, or delete certain data, or to execute certain programs. In particular, it is a set \nof access rights and access privileges granted to a user to benefit from a particular \nsystem resource. Authorization is also commonly referred to as access permissions, \nand it determines the privileges a user has on a system and what the user should be \nallowed to do to the resource. Access permissions are normally specified by a list of \npossibilities. For example, UNIX allows the list {read, write, execute} as the list of \npossibilities for a user or group of users on a UNIX file.\n" }, { "page_number": 214, "text": "198\b\n9  Access Control and Authorization\nWe have seen above that access control consists of defining an access policy for \neach system resource. The enforcement of each one of these access policies is what \nis called authorization. It is one thing to have a policy in place, but however good a \npolicy is, without good enforcement, the policy serves no purpose. The implementa­\ntion of mechanisms to control access to system resources is, therefore, a must for an \neffective access control regime.\nThe process of authorization itself has traditionally been composed of two sepa­\nrate processes: authentication, which we are going to discuss in the next chapter, and \naccess control. To get a good picture, let us put them together. In brief authentica­\ntion deals with ascertaining that the user is who he or she claims he or she is. 
Access control then deals with the more refined problem of determining "what a specific user can do to a certain resource." So authorization techniques such as traditional centralized access control use the ACL as the dominant mechanism to create user lists and user access rights to the requested resource. However, in more modern and distributed system environments, authorization takes a different approach from this. In fact, the traditional separation of the authorization process into authentication and access control also does not apply [8].

As with access control, authorization has three components: a set of objects we will designate as O, a set of subjects designated as S, and a set of access permissions designated as A. The authorization rule is a function f that takes a triple (s, o, a), where s ∈ S, o ∈ O, and a ∈ A, and maps it to a binary value in T = {true, false}; that is, f: S × O × A → {true, false}. When the value of the function f is true, this signals that the request by subject s to gain access to object o has been granted at authorization level a.

The modern authentication process is decentralized to allow more system independence and to give network service providers more control over system resource access. This is also the case in yet more distributed systems, since in such systems it is hard and sometimes impossible to manage all users and resources in one central location. In addition, many servers actually do not need to know who the user is in order to provide services.

The capability mechanism so central in the traditional process, however, still plays a central role here, providing for decentralization of authorization by issuing credentials to users or applications whenever requests for resource access are received. Each user or application keeps a collection of capabilities, one for each resource they have access to, which they must present in order to use the requested resource. Since every resource maintains its own access control policy and a complete proof of compliance between the policy and the credentials collected from the user or application, the server receiving the request need not consult a centralized ACL for authorization [8].

9.4.1  Authorization Mechanisms

Authorization mechanisms, especially those in database management systems (DBMSs), can be classified into two main categories: discretionary and mandatory.

9.4.1.1  Discretionary Authorization

This is a mechanism that grants access privileges to users based on control policies that govern the access of subjects to objects, using the subjects' identity and the authorization rules discussed in Section 9.4 above. These mechanisms are discretionary in that they allow subjects to grant other users authorization to access the data. They are highly flexible, making them suitable for a large variety of application domains.

However, the same characteristics that make them flexible also make them vulnerable to malicious attacks, such as Trojan horses embedded in application programs. The reason is that discretionary authorization models impose no control on how information is propagated once it has been accessed by users authorized to do so.

But in many practical situations, discretionary policies are preferred since they offer a better trade-off between security and applicability.
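To make the rule concrete, the following is a minimal sketch of a discretionary authorization check built around the function f: S × O × A → {true, false} defined in Section 9.4; the subjects, objects, and permissions are purely hypothetical, and a real DBMS would add ownership records, grant options, and revocation.

```python
# Explicitly granted (subject, object, access) triples; any triple not listed maps to False.
granted = {
    ("alice", "payroll.db", "own"),
    ("alice", "payroll.db", "read"),
    ("alice", "payroll.db", "write"),
    ("bob",   "payroll.db", "read"),
}

def f(subject, obj, access):
    """Authorization rule f: S x O x A -> {True, False}."""
    return (subject, obj, access) in granted

def grant(grantor, subject, obj, access):
    """Discretionary flavor: a subject owning an object may authorize another subject."""
    if f(grantor, obj, "own"):
        granted.add((subject, obj, access))

print(f("bob", "payroll.db", "read"))          # True  - request granted at level 'read'
print(f("bob", "payroll.db", "write"))         # False - no matching triple
grant("alice", "bob", "payroll.db", "write")   # alice, the owner, grants bob write access
print(f("bob", "payroll.db", "write"))         # True
```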
Because discretionary policies offer this better trade-off, this chapter focuses on discretionary access control mechanisms. We refer the reader to [4] for details on mandatory access control mechanisms.

9.4.1.2  Mandatory Access Control

Mandatory policies, unlike the discretionary ones seen above, ensure a high degree of protection in that they prevent any illegal flow of information through the enforcement of multilevel security, classifying the data and users into various security classes. They are, therefore, suitable for contexts that require structured but graded levels of security, such as the military. However, mandatory policies have the drawback of being too rigid in that they require a strict classification of subjects and objects into security levels and are therefore applicable to only very few environments [4].

9.5  Types of Authorization Systems

Before the creation of decentralized authorization systems, authorization was controlled from one central location. Operating system authorization, for example, was centrally controlled before the advent of network operating systems (NOSs). The birth of computer networks, and therefore of NOSs, created decentralized authorization systems.

9.5.1  Centralized

Traditionally, every resource did its own local authorization and maintained its own authorization database to associate authorizations with users. But this led to several implementation problems. For example, different resources and different software applied different rules to determine authorization for the same subject on an object. This led to the centralized authorization policy. In centralized authorization, only one central authorization unit grants and delegates access to system resources. This means that any process or program that needs access to any system resource has to request it from the one omniscient central authority. Centralized authorization services allow you to set up generalized policies that control who gets access to resources across multiple platforms. For example, it is possible to set authorization to a company's Web portal in such a way that authorization is based on either functions or titles. Those with designated functions could control their organization's specially designated component of the portal, while those without such functions access only the general portal. This system is very easy and inexpensive to operate. A single database available to all applications gives a better and more consistent view of security. It also simplifies the process of adding, modifying, and deleting authorizations. All early standalone operating systems used this authorization approach.

9.5.2  Decentralized

This differs from the centralized system in that the subjects own the objects they have created and are therefore responsible for their security, which is locally maintained. This means that each system resource maintains its own authorization process and its own database of authorizations associated with all subjects authorized to access the resource. Each subject also possesses all possible rights to access every resource associated with it. Each subject may, however, delegate access rights to its objects to another subject. Because of these characteristics, decentralized authorization is found to be very flexible and easily adaptable to the particular requirements of individual subjects.
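A minimal sketch of this decentralized arrangement follows; it assumes, purely for illustration, that each resource object keeps its own rights table and that a subject holding a delegation right may pass one of its access rights on to another subject.

```python
class Resource:
    """In the decentralized model, each resource maintains its own authorization database."""

    def __init__(self, name, owner):
        self.name = name
        # Rights are held locally: subject -> set of permissions.
        self.rights = {owner: {"read", "write", "delegate"}}

    def authorize(self, subject, access):
        return access in self.rights.get(subject, set())

    def delegate(self, grantor, grantee, access):
        """A subject may delegate one of its own rights if it also holds the delegate right."""
        if self.authorize(grantor, "delegate") and self.authorize(grantor, access):
            self.rights.setdefault(grantee, set()).add(access)

report = Resource("quarterly_report", owner="alice")
report.delegate("alice", "bob", "read")     # alice delegates read access on her object to bob
print(report.authorize("bob", "read"))      # True
print(report.authorize("bob", "write"))     # False - never delegated
```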
However, this access rights delegation may lead to the problem \nof cascading, and cyclic authorization may arise.\n9.5.3  Implicit\nIn implicit authorization, the subject is authorized to use a requested system \nresource indirectly because the objects in the system are referenced in terms of \nother objects. That means that in order for a subject to access a requested object, the \naccess must go through an access of a primary object. Using the mathematical set \ntheoretical representation we presented earlier, in a given set of sets (s,o,a), a user s \nis implicitly given a type a authorization on all the objects of o. Take, for example, \na request to use a Web page; the page may have links connected to other documents. \nThe user who requests for authorization to use the Web has also indirect authoriza­\ntion to access all the pages linked to the authorized original page. This is, therefore, \na level of authorization called granularity. We are going to discuss this later. Notice \nthat a single authorization here enables a number of privileges.\n" }, { "page_number": 217, "text": "9.6  Authorization Principles\b\n201\n9.5.4  Explicit\nExplicit authorization is the opposite of the implicit. It explicitly stores all authoriza­\ntions for all system objects whose access has been requested. Again in a mathematical \nrepresentation seen earlier, for every request for access to object o from subject s \nthat is grantable, the triple set (s,o,a) is stored. All others are not stored. Recall \nfrom the last chapter that one of the problems of access control was to store a large \nbut sparse matrix of access rights. This technique of storing only authorized triples \ngreatly reduces the storage requirements. However, although simple, the technique \nstill stores authorizations whether needed or not, which wastes storage.\n9.6  Authorization Principles\nThe prime object of authorization is system security achieved through the controlled \naccess to the system resources. The authorization process, together with access con­\ntrol discussed earlier, through the use of authorization data structures, clearly define \nwho uses what system resources and what resources can and cannot be used. The \nauthorization process, therefore, offers undeniable security to the system through the \nprotection of its resources. System resources are protected through principles such \nas least privileges and separation of duties, which eventually results in increased \naccountability that leads to increased system security.\n9.6.1  Least Privileges\nThe principle of least privileges requires that the subject be granted authorizations \nbased on its needs. Least privileges principle is itself based on two other principles: \nless rights and less risk. The basic idea behind these principles is that security is \nimproved if subjects using system resources are given no more privileges than the \nminimum they require to perform the tasks that they are intended to perform, and \nin the minimum amount of time required to perform the tasks. The least privileges \nprinciple has the ability, if followed, to reduce the risks of unauthorized accesses to \nthe system.\n9.6.2  Separation of Duties\nThe principle of separation of duties breaks down the process of authorization into \nbasic steps and requires that for every request for authorization from a subject to \na system resource, each step be given different privileges. 
It requires that each \ndifferent key step in a process requires different privileges for different individual \nsubjects. This division of labor, not only in the authorization process of one \n" }, { "page_number": 218, "text": "202\b\n9  Access Control and Authorization\nindividual request but also between individual subjects, stipulates not only that one \nsubject should never be given a blanket authorization to do all the requested func­\ntions but also that no one individual request to an object should be granted blanket \naccess rights to an object. This hierarchical or granular authorization distributes \nresponsibilities and creates accountability because no one subject is responsible for \nlarge processes where responsibility and accountability may slack. For example, \nauthorization to administer a Web server or a e-mail server can be granted to one \nperson without granting him or her administrative rights to other parts of the orga­\nnization system.\n9.7  Authorization Granularity\nWe have used the concept of granularity in the last section without officially defin­\ning it. Let us do so here. Granularity in access authorization means the level of \ndetails an authorizing process requires to limit and separate privileges. Because a \nsingle authorization may enable a number of privileges or a privilege may require \nmultiple authorizations, when requests come into the authorizing process from \nsubjects requiring access to system resources, the authorizing authority must pay \nattention and separate these two authorization privileges. These two issues may \ncomplicate the authorization process. Granularity, therefore, should be defined on \nfunctions [9].\n9.7.1  Fine Grain Authorization\nAs we discussed above, granularity of authorizations should not be based on either \nauthorization requests or on granted privileges but on functions performed. Fine \ngrain granularity defines very specific functions that individually define specific \ntasks. This means that each authorization request is broken up into small but spe­\ncific tasks and each one of these tasks is assigned a function.\n9.7.2  Coarse Grain Authorization\nCoarse grain granularity is different from fine grain granularity in that here only \nthe basic ability to interact with resources is focused on. Then all lower detail tasks \nwithin the large functions are ignored. These abilities can be enforced by the oper­\nating system without concern for the applications. In fact, it is this type of autho­\nrization that is enforced by most operating systems. For example, most operating \nsystems have the following abilities or functions: delete, modify, read, write, and \ncreate. Subject requests for access authorization must then be put into one of these \nmajor functions or abilities.\n" }, { "page_number": 219, "text": "Exercises\b\n203\n9.8  Web Access and Authorization\nThe growth of the Internet and e-commerce has made Web application the fastest \ngrowing client–server application model and the mainstay of the Internet. Accord­\ningly, Web servers have also become the main targets for intruder break-ins. So, \ncontrolling access to Web-based resources has naturally become an administrative \nnightmare.\nThe Web infrastructure supports a distributed authorizing structure based on \nnode-naming structures, where each node is known by an URL and information \nto be retrieved from it is accessible through protocols such as HTTP. Under this \nstructure, authorization is based on an Access Control List (ACL). 
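As an illustration, the sketch below shows the kind of ACL check a single Web server node might apply to an incoming request, keyed on the URL path; the paths, users, and deny-by-default rule are hypothetical, and real servers normally drive such checks from configuration files rather than application code.

```python
# Hypothetical ACL: URL path prefix -> set of users allowed to retrieve documents under it.
acl = {
    "/public/":  {"*"},              # anyone may read public documents
    "/staff/":   {"alice", "bob"},
    "/payroll/": {"alice"},
}

def authorize_request(user, url_path):
    """Return True if the user may retrieve the requested document."""
    # The longest matching prefix governs, so /payroll/2009.html falls under /payroll/.
    for prefix in sorted(acl, key=len, reverse=True):
        if url_path.startswith(prefix):
            allowed = acl[prefix]
            return "*" in allowed or user in allowed
    return False                     # no ACL entry: deny by default

print(authorize_request("bob", "/public/index.html"))   # True
print(authorize_request("bob", "/payroll/2009.html"))   # False
```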
In a distributed environment such as this, each server node needs either to know its potential clients or to rely on an authorizing server that other servers must consult before granting authorization. However, both of these approaches present problems for Web authorization: the former presents a client administration problem when the client population changes at a fast rate, while the latter presents a potential performance bottleneck, since the processing of a node request depends on the performance and availability of the authorization server [10].

In a fully functioning distributed Web authorization process, a coordinated authorization approach is required that grants access not only to the requested document but also to all other documents linked to it. As of this writing, however, this is not the case.

Whether using the present authorization model or those to come, every effort must be made to control access to Web servers and minimize unauthorized access to them. In addition to access control and authorization, here are other tips for securing servers [11]:

•	Web servers should not run any other services, with the exception of a carefully configured anonymous FTP.
•	Periodic security scans by a trusted third party should be scheduled to identify system security weaknesses.
•	Minimize system risk by never running the Web server as "root" or "administrator." Server processes should be run from a new account with no other privileges on the machine.
•	For shared file systems such as AFS or NFS, give the Web server only "read only" access, or separately mount a "read only" data disk.

Exercises

1.	Differentiate between access and authorization.
2.	What are the benefits of authorization?
3.	Why is it difficult to implement distributed authorization?
4.	Discuss the merits and demerits of centralized and decentralized authorization.
5.	Compare the authorization model used by network operating systems (NOSs) to that used by the old standalone operating systems.
6.	List and discuss the most common access privileges in a computing system.
7.	Discuss the three components of a global access model.
8.	Physical access to resources is essential and must be the most restricted. Why?
9.	Discuss four access methods, giving the weaknesses of each.
10.	Discuss the many ways in which access can be abused.

Advanced Exercises

1.	Is it possible to implement full distributed authorization? What would be involved?
2.	Web authorization is central to the security of all Web applications. What is the best way to safeguard all Web applications and at the same time make Web access reliable and fast?
3.	Consider an environment where each server does its own authorization. If an access request is made to a document that has extended links and one of the link requests is denied, should the whole document request be denied? Why or why not?
4.	Discuss the benefits and problems resulting from the "least privileged" principle often used in access control.
5.	Discuss the concept of global privilege. Does it work well in distributed authorization or in centralized authorization?
6.	With the principle of "least privileged," is it possible to have too much authorization? What happens when there is too much authorization?

References

1.	Panko, Raymond R.
Corporate Computer and Network Security. Upper Saddle River, NJ: \nPrentice-Hall, 2004.\n2.\t Gollman, Dieter. Computer Security. New York: John Wiley & Sons, 2000.\n3.\t An Introduction to Role-based Access Control. NIST/ITL Bulletin, December, 1995. http://\ncsrc.nist.gov/rbac/NIST-ITL-RBAC-bulletin.html\n4.\t Differentiating Between Access Control Terms. http://secinf.net/uplarticle/2/Access_Control_\nWP.pdf.\n5.\t Byers, Simon, Juliana Freire, and Cláudio Silva. Efficient Acquisition of Web Data through \nRestricted Query Interfaces. AT&T Labs-Research, http://www10.org/cdrom/posters/p1051/.\n6.\t Bannan, Karen. Watching You, Watching Me PCs are turning informant. Whose side are they \non? PC Magazine: July 1, 2002, http://www.pcmag.com/article2/0,4149,342208,00.asp)\n7.\t Iris scan. http://ctl.ncsc.dni.us/biomet%20web/BMIris.html.\n" }, { "page_number": 221, "text": "References\b\n205\n  8.\t NASA World Wide Web Best Practices 2000–2001 Draft Version 2.0. http://nasa-wbp.larc.\nnasa.gov/devel/4.0/4_4.html.\n  9.\t Pipkin, Donald. Information Security: Protecting the Global Enterprise. Upper Saddle River, \nNJ: Prentice-Hall, 2000.\n10.\t Kahan, Jose. A Distributed Authorization Model for WWW. May, 1995. http://www.isoc.org/\nHMP/PAPER/107/html/paper.html., 5/6/2003.\n11.\t NASA World Wide Web Best Practices 2000–2001 Draft Version 2.0. 8/20/2000. http://nasa-\nwbp.larc.nasa.gov/devel/4.0/4_4.html, 5/6/2003.\n" }, { "page_number": 222, "text": "J.M. Kizza, A Guide to Computer Network Security, Computer Communications and \n­Networks, DOI 10.1007/978-1-84800-917-2_10, © Springer-Verlag London Limited 2009\n\b\n207\nChapter 10\nAuthentication\n10.1  Definition\nAuthentication is the process of validating the identity of someone or something. \nIt uses information provided to the authenticator to determine whether someone or \nsomething is in fact who or what it is declared to be. In private and public comput­\ning systems, for example, in computer networks, the process of authentication com­\nmonly involves someone, usually the user, using a password provided by the system \nadministrator to logon. The user’s possession of a password is meant to guarantee \nthat the user is authentic. It means that at some previous time, the user requested, \nfrom the system administrator, and the administrator assigned and or registered a \nself-selected password.\nThe user presents this password to the logon to prove that he or she knows some­\nthing no one else could know.\nGenerally, authentication requires the presentation of credentials or items of \nvalue to really prove the claim of who you are. The items of value or credential \nare based on several unique factors that show something you know, something you \nhave, or something you are [1]:\nSomething you know\n• \n: This may be something you mentally possess. This could \nbe a password, a secret word known by the user and the authenticator. Although \nthis is inexpensive administratively, it is prone to people’s memory lapses and \nother weaknesses including secure storage of the password files by the system \nadministrators. The user may use the same password on all system logons or \nmay change it periodically, which is recommended. 
Examples using this factor \ninclude passwords, passphrases, and personal identification numbers (PINs).\nSomething you have:\n• \n This may be any form of issued or acquired self identification \nsuch as\nSecurID\n• \nCryptoCard\n• \nActivCard\n• \nSafeWord and\n• \nmany other forms of cards and tags.\n• \n" }, { "page_number": 223, "text": "208\b\n10  Authentication\nThis form is slightly safer than something you know because it is hard to abuse \nindividual physical identifications. For example, it is harder to lose a smart card \nthan to remember the card number.\nSomething you are:\n• \n This being a naturally acquired physical characteristic such \nas voice, fingerprint, iris pattern, and other biometrics discussed in Chapter 7. \nAlthough biometrics are very easy to use, this ease of use can be offset by the \nexpenses of purchasing biometric readers. Examples of items used in this factor \ninclude fingerprints, retinal patterns, DNA patterns, and hand geometry.\nIn addition to the top three factors, another factor, though indirect, also plays a \npart in authentication.\nSomewhere you are:\n• \n This usually is based on either physical or logical location \nof the user. The use, for example, may be on a terminal that can be used to access \ncertain resources.\nIn general, authentication takes one of the following three forms [2]:\nBasic authentication\n• \n involving a server. The server maintains a user file of \neither passwords and user names or some other useful piece of authenticating \ninformation. This information is always examined before authorization is \ngranted. This is the most common way computer network systems authenticate \nusers. It has several weaknesses though, including forgetting and misplacing \nauthenticating information such as passwords.\nChallenge-response,\n• \n in which the server or any other authenticating system \ngenerates a challenge to the host requesting for authentication and expects a \nresponse. We will discuss challenge-response in Section 10.5.1.3.\nCentralized authentication,\n• \n in which a central server authenticates users on the \nnetwork and in addition also authorizes and audits them. These three processes \nare done based on server action. If the authentication process is successful, the \nclient seeking authentication is then authorized to use the requested system \nresources. However, if the authentication process fails, the authorization is \ndenied. The process of auditing is done by the server to record all information \nfrom these activities and store it for future use.\n10.2  Multiple Factors and Effectiveness of Authentication\nFor an authentication mechanism to be considered effective, it must uniquely and \nin a forgery-proof manner identify an individual. The factors above do so in vary­\ning degrees depending on how they are combined. Each factor, if used alone to \nauthenticate users is effective enough to authenticate a user; however, these sys­\ntems’ authentication may be more vulnerable to compromise of the authenticator. \nFor example, both factors “authentication by knowledge” and “authentication by \nownership” in factors 1 and 2 above require a person to be associated with some­\nthing by knowledge or acquisition.\n" }, { "page_number": 224, "text": "10.2  Multiple Factors and Effectiveness of Authentication\b\n209\nNotice that the user is not required to be physically attached to the authenti­\ncation information. 
Possession of something that is not physically attached to the user can result in that authentication information getting lost, stolen, or otherwise compromised. For example, information held by knowledge can be duplicated through user negligence or by somebody else learning it without the user knowing. It can also be acquired through guessing, repeated attempts, or brute force using automated exhaustive search techniques.

Similarly, "authentication by ownership" suffers from a set of problems that make it less effective. For example, although items in this category have their major strength in the difficulty of duplication, such objects also require more effort to guard from theft, and they can still be duplicated by someone with special equipment or procedures [3].

Although the third factor, "authentication by characteristic," is much stronger than the first two, it suffers from the high costs incurred to acquire and build effective peripherals that can obtain a complete enough sample of a characteristic to entirely distinguish one individual from another. It requires readers with advanced features and functions to read, analyze, and fully discriminate one person's physical features from another's. Readers with these functions are often very expensive and require highly trained personnel and other operating expenses.

As the Internet becomes widely used in everyday transactions, including e-commerce, a stronger form of authentication that differs from the traditional username/password authentication is needed to safeguard system resources from the potentially hostile environment of the "bad" Internet. The "bad" Internet consists of a wide array of "untrusted" public and private clients, including civic networks and public kiosks and cafes. In addition, it also includes commonly available software that allows an intruder to easily sniff, snoop, and steal network logon passwords as they are exchanged in the traditional authentication schemes.

To address this, an effective authentication scheme with multiple methods is preferred. Systems using two or more methods can result in greater system security. For better assurance, combinations may be made of any single factor (1, 2, or 3), any two factors (12, 13, or 23), or all three factors together (123), as illustrated in Fig. 10.1.

Figure 10.1  Authentication factor combinations: 1, 2, 3, 12, 13, 23, 123

This process of piggybacking authentication factors is one of the popular strategies now used widely for overcoming the limitations of a specific authentication factor by supplementing it with another factor. This technique of improving authentication assurance is referred to as multi-factor authentication.

Although it is common to combine two or more authentication items from two or more factors, as shown in Fig. 10.1, it is also possible to combine two or more items from the same authentication factor class. For example, one can combine an iris pattern and a fingerprint. There are generally two motives for taking this action [4]:

•	The need to improve usability and accuracy. Combining items from different authenticating factors improves the accuracy of the authentication process. It may also lead to a reduction in the false rejection rate of legitimate users.
•	To improve the authentication process's integrity by reducing the effect of certain items in some factors that are prone to vulnerabilities that weaken it.
The \ncombining technique, therefore, reduces the risk of false negatives where, for \nexample, an impersonating user can succeed in accessing the system.\nThe discussion above provides one very important element of authentication: \nthat different mechanisms provide different levels of authentication effectiveness. \nChoosing the most effective authentication, therefore, depends on the technology \nused and also on the degree of trust placed on that technology. Generally, trust is a \nfirm belief or confidence one has in someone or something. Trust is manifested in \nattributes such as honesty, reliability, integrity, justice, and others. Since authoriza­\ntion comes after approval of identity, that is, after authentication, a organizational \nframework spelling out an authorization policy based on authentication is a trust \nmodel. Organizations use trust model to create authentication groups. For example, \na group of company executives may be put in a different authentication process than \na group consisting of parking attendants. These authentication and authorization \ngroupings are based on the company’s trust model.\n10.3  Authentication Elements\nAn authentication process as described above is based on five different elements: \nthe person or group of people seeking authentication, distinguishing characteris­\ntics from that person or group presented for authentication, the authenticator, the \nauthenticating mechanism to verify the presence of the authenticating characteris­\ntics, and the access control mechanism to accept or deny authentication.\n10.3.1  Person or Group Seeking Authentication\nThese are usually users who seek access to a system either individually or as a group. \nIf individually, they must be prepared to present to the authenticator evidence to sup­\nport the claim that they are actually authorized to use the requested system resource. \nThey may present any one of the basic factors discussed in Section 10.1. Similarly \nas a group, the group again must present to the authenticator evidence that any one \nmember of the group is authorized to use the system based on a trust model.\n10.3.2  Distinguishing Characteristics for Authentication\nThe second authentication element is the distinguishing characteristics from the \nuser to the authenticator. In Section 10.1, we already discussed these characteristics \n" }, { "page_number": 226, "text": "10.3  Authentication Elements\b\n211\nand grouped them into four factors that include something you know, something \nyou have, something you are, and a weaker one somewhere you are. In each of these \nfactors, there are items that a user can present to the authenticator for authorization \nto use the system. Some of these items may not completely authenticate the user, \nand we have pointed out in Section 10.2 that a combination of items from differ­\nent factors and trust may be used to strengthen the authentication and create better \nassurances.\n10.3.3  The Authenticator\nThe job of the authenticator is to positively and sometimes automatically identify \nthe user and indicate whether that user is authorized to access the requested system \nresource. The authenticator achieves application for authentication by prompting \nfor user credentials when an authentication request is issued. 
The authenticator then \ncollects the information and passes it over to the authentication mechanism.\nThe authenticator can be a user-designated server, a virtual private network \n(VPN), firewall, a local area network (LAN) server, an enterprise-wide dedicated \nserver, independent authentication service, or some other form of global identity \nservice. Whatever is being used as an authenticator must perform an authentication \nprocess that must result in some outcome value such as a token that is used in the \nauthentication process to determine information about the authenticated user at a \nlater time. A note of caution to the reader is that some authors call this token the \nauthenticator. Because there is no standard on these tokens adhered to by all authen­\nticating schemes, the format of the token varies from vendor to vendor.\n10.3.4  The Authentication Mechanism\nThe authentication mechanism consists of three parts that work together to verify \nthe presence of the authenticating characteristics provided by the user. The three \nparts are the input, the transportation system, and the verifier. They are linked with \nthe appropriate technologies. An input component acts as the interface between the \nuser and the authentication system. In a distributed environment, this could be a \ncomputer keyboard, card reader, video camera, telephone, or similar device. The \ncaptured user-identifying items need to be taken to a place where they are scruti­\nnized, analyzed, and accepted or rejected. But in order for these items to reach this \npoint, they have to be transported. The transport portion of the system is, therefore, \nresponsible for passing data between the input component and the element that can \nconfirm a person’s identity. In modern day authenticating systems, this information \nis transported over a network, where it can be protected by protocols like Kerberos \nor sent in plaintext [4].\nThe last component of the authentication system is the verification component, \nwhich is actually the access control mechanism in the next section.\n" }, { "page_number": 227, "text": "212\b\n10  Authentication\n10.3.5  Access Control Mechanism\nWe discussed access control and the working of the access control mechanism in \nChapter 8. Let us briefly review the role of the access control mechanism in the \nauthentication process. User-identifying and authenticating information is passed \nto access control from the transport component. Here, this information must be \nvalidated against the information in its database. The database may reside on a \ndedicated authentication server, if the system operates in a network, or stored in a \nfile on a local medium. The access control mechanism then cross-checks the two \npieces of information for a match. If a match is detected, the access control system \nthen issues temporary credentials authorizing the user to access the desired system \nresource.\n10.4  Types of Authentication\nIn Section 10.1, we identified three factors that are used in the positive authentica­\ntion of a user. We also pointed out in the previous section that while these factors \nare in themselves good, there are items in some that suffer from vulnerabilities. \nTable 10.1 illustrates the shortcomings of user identity characteristics from the fac­\ntors that suffer from these vulnerabilities.\nFrom Table 10.1, one can put the factors into two categories: nonrepudiable and \nrepudiable authentication. 
Other types of authentication include user, client, and \nsession authentication.\n10.4.1  Nonrepudiable Authentication\nNonrepudiable authentication involves all items in factor 3. Recall that factor \nthree consists of items that involve some type of characteristics and whose proof \nof origin cannot be denied. The biometrics used in factor 3, which include iris pat­\nterns, retinal images, and hand geometry, have these characteristics. Biometrics can \npositively verify the identity of the individual. In our discussion of biometrics in \nChapter 8, we pointed out that biometric characteristics cannot be forgotten, lost, \nTable 10.1  Authentication factors and their vulnerabilities1\nNumber\t Factor\t\nExamples\t\nVulnerabilities\n1\t\nWhat you know\t\nPassword, PIN\t\nCan be forgotten, guessed, duplicated\n2\t\nWhat you have\t\nToken, ID Card, Keys\t\nCan be lost, stolen, duplicated\n3\t\nWhat you are\t\nIris, voiceprint, fingerprint\t\nNonrepudiable\n1\u0003Ratha, Nalini K., Jonathan H. Connell and Ruud M. Bolle. “Secure Fingerprint-based Authentica­\ntion for Lotus Notes.” http://www.research.ibm.com/ecvg/pubs/ratha-notes.pdf.\n" }, { "page_number": 228, "text": "10.5  Authentication Methods\b\n213\nstolen, guessed, or modified by an intruder. They, therefore, present a very reliable \nform of access control and authorization. It is also important to note that contempo­\nrary applications of biometric authorization are automated, which further eliminates \nhuman errors in verification. As technology improves and our understanding of the \nhuman anatomy increases, newer and more sensitive and accurate biometrics will \nbe developed.\nNext to biometrics as nonrepudiable authentication items are undeniable and \nconfirmer digital signatures. These signatures, developed by Chaum and van \nAntwerpen, are signatures that cannot be verified without the help of a signer and \ncannot with non-negligible probability be denied by the signer. Signer legitimacy is \nestablished through a confirmation or denial protocol [5]. Many undeniable digital \nsignatures are based on Rivest, Shamir and Adleman (RSA) structure and technol­\nogy, which gives them provable security that makes the forgery of undeniable sig­\nnatures as hard as forging standard RSA signatures.\nConfirmer signatures [6, 7] are a type of undeniable signatures, where signatures \nmay also be further verified by an entity called the confirmer designated by the \nsigner.\nLastly, there are chameleon signatures, a type of undeniable signatures in which \nthe validity of the content is based on the trust of the signer’s commitment to the \ncontents of the signed document. But in addition, they do not allow the recipient of \nthe signature to disclose the contents of the signed information to any third party \nwithout the signer’s consent [5].\n10.4.2  Repudiable Authentication\nIn our discussion of authentication factors in Section 10.2 we pointed out that the \nfirst two factors, “what you know” and “what you have,” are factors that can present \nproblems to the authenticator because the information presented can be unreliable. \nIt can be unreliable because such factors suffer from several well-known problems \nincluding the fact that possessions can be lost, forged, or easily duplicated. Also \nknowledge can be forgotten and taken together, knowledge and possessions can be \nshared or stolen. Repudiation is, therefore, easy. 
Before the development of items \nin factor 3, in particular the biometrics, authorization, and authentication methods \nrelied only on possessions and knowledge.\n10.5  Authentication Methods\nDifferent authentication methods are used based on different authentication algo­\nrithms. These authentication methods can be combined or used separately, depend­\ning on the level of functionality and security needed. Among such methods are \npassword authentication, public-key authentication, Anonymous authentication, \nremote and certificate-based authentication.\n" }, { "page_number": 229, "text": "214\b\n10  Authentication\n10.5.1  Password Authentication\nThe password authentication methods are the oldest and the easiest to implement. \nThey are usually set up by default in many systems. Sometimes, these methods \ncan be interactive using the newer keyboard-interactive authentication. Pass­\nword authentication includes reusable passwords, one-time passwords, challenge \nresponse passwords, and combined approach passwords.\n10.5.1.1  Reusable Passwords\nThere are two types of authentication in reusable password authentication: user and \nclient authentication.\nUser authentication\n• \n. This is the most commonly used type of authentication, \nand it is probably the most familiar to most users. It is always initiated by the \nuser, who sends a request to the server for authentication and authorization for \nuse of a specified system resource. On receipt of the request, the server prompts \nthe user for a user name and password. On submission of these, the server checks \nfor a match against copies in its database. Based on the match, authorization is \ngranted.\nClient authentication.\n• \n Normally, the user requests for authentication and then \nauthorization by the server to use a system or a specified number of system \nresources. Authenticating users does not mean the user is free to use any system \nresource the user wants. Authentication must establish user authorization to \nuse the requested resources in the amount requested and no more. This type of \nauthentication is called client authentication. It establishes users’ identities and \ncontrolled access to system resources.\nBecause these types of authentication are the most widely used authentication meth­\nods, they are the most abused. They are also very unreliable because users forget \nthem, they write them down, they let others use them, and most importantly, they \nare easy to guess because users choose simple passwords. They are also susceptible \nto cracking and snooping. In addition, they fall prey to today’s powerful computers, \nwhich can crack them with brute force through exhaustive search.\n10.5.1.2  One-Time Passwords\nOne-time password authentication is also known as session authentication. Unlike \nreusable passwords that can be used over extended periods of time, one-time pass­\nwords are used once and disposed of. They are randomly generated using powerful \nrandom number generators. This reduces the chances of their being guessed. In \nmany cases they are encrypted, then issued to reduce their being intercepted if they \nare sent in the clear. There are several schemes of one-time passwords. The most \ncommon of these schemes are S/Key and token.\n" }, { "page_number": 230, "text": "10.5  Authentication Methods\b\n215\nS/Key password\n• \n is a one-time password generation scheme defined in RFC \n1760 and is based on MD4 and MD5 encryption algorithms. 
It was designed to \nfight against replay attacks where, for example, in a login session, an intruder \neavesdrops on the network login session and gets the password and user-ID for \nthe legitimate user. Its protocol is based on a client-server model in which the \nclient initiates the S/Key exchange by sending the first packet to which the server \nresponds with an ACK and a sequence number. Refer to Chapter 1 for this. The \nclient then responds to the server by generating a one-time password and passes \nit to the server for verification. The server verifies the password by passing it \nthrough a hash function and compares the hash digest to the stored value for a \nmatch.\nToken password\n• \n is a password generation scheme that requires the use of a \nspecial card such as a smart card. According to Kaeo, the scheme is based on two \nschemes: challenge-response and time-synchronous [8]. We are going to discuss \nchallenge-response in Section 10.5.1.3. In a time-synchronous scheme, an \nalgorithm executes both in the token and on the server and outputs are compared \nfor a match. These numbers, however, change with time.\nAlthough they are generally safer, one-time passwords have several difficulties \nincluding synchronization problems that may be caused by lapsed time between the \ntimestamp in the password and the system time. Once these two times are out of \nphase, the password cannot be used. Also synchronization problems may arise when \nthe one-time password is issued based on either a system or user. If it is based on the \nuser, the user must be contacted before use to activate the password.\n10.5.1.3  Challenge-Response Passwords\nIn Section 10.1, we briefly talked about challenge-response authentication as \nanother form of relatively common form of authentication. Challenge-response, as a \npassword authentication process, is a handshake authentication process in which the \nauthenticator issues a challenge to the user seeking authentication. The user must \nprovide a correct response in order to be authenticated. The challenge may take \nmany forms depending on the system. In some systems, it is in the form of a mes­\nsage indicating “unauthorized access” and requesting a password. In other systems, \nit may be a simple request for a password, a number, a digest, or a nonce (a server-\nspecified data string that may be uniquely generated each time a server generates \na 401 server error). The person seeking authentication must respond to the system \nchallenge. Nowadays, responses are by a one-way function using a password token, \ncommonly referred to as asynchronous tokens. When the server receives the user \nresponse, it checks to be sure the password is correct. If so, the user is authenticated. \nIf not or if for any other reason the network does not want to accept the password, \nthe request is denied.\nChallenge-response authentication is used mostly in distributed systems. Though \nbecoming popular, challenge-response authentication is facing challenges as a result \nof weaknesses that include user interaction and trial-and-error attacks. The problem \n" }, { "page_number": 231, "text": "216\b\n10  Authentication\nwith user interaction involves the ability of the user to locate the challenge over usu­\nally clattered screens. The user then must quickly type in a response. If a longer than \nanticipated time elapses, the request may be denied. 
Based on the degree of security needed, sometimes the user has to remember the long response, or is forced to write it down, and finally the user must transcribe the response and type it in. This is potentially error prone. Some vendors have tried to cushion the user from remembering and typing long strings by automating most of the process, either by cut-and-paste of the challenge and response or through a low-level automated process where the user response is limited to minimal yes/no responses.

In trial-and-error attacks, intruders may respond to the challenge with a spirited barrage of trial responses, hoping to hit the correct response. With powerful computers set to automatically generate responses in a given time frame, it is potentially possible for the intruder to hit on a correct response within that time frame.

It is also worth remembering that, in its simplest form, challenge-response using passwords can be abused because passwords are comparatively easy to steal, and if transmitted in the clear, passwords can also be intercepted. The situation is slightly better in nonce or digest authentication, the more sophisticated of the two forms of the scheme, because the password is not sent in the clear over the network. It is encrypted, which enhances security, although it is not fully hack-proof.

10.5.1.4  Combined Approach Authentication

Although basic authentication, which uses either names or names and passwords, is the most widely used authentication scheme, it is prudent not to rely on basic authentication alone. Passwords are often transmitted in the clear from the user to the authentication agent, which leaves them open to interception by hackers. To enhance the security of authentication, it is sometimes better to combine several schemes. One of the most secure authentication methods is to use a random challenge-response exchange using digital signatures. When the user attempts to make a connection, the authentication system, a server or a firewall, sends a random string back as a challenge. The random string is signed using the user's private key and sent back as a response. The authenticating server or firewall can then use the user's public key to verify that the user is indeed the holder of the associated private key [9].

10.5.2  Public-Key Authentication

As we discussed in Section 2.3.2 and will see again in the next chapter, the process of public-key authentication requires each user of the scheme to first generate a pair of keys and store each in a file. Each key is usually between 1024 and 2048 bits in length. Public-private key pairs are typically created using a key-generation utility. As we will discuss in the next chapter, the pair consists of the user's public key and private key. The server knows the user's public key because it is published widely. However, only the user has the private key.

Public key systems are used by authentication systems to enhance system security. The centralized authentication server, commonly known as the access control server (ACS), is in charge of authentication that uses public key systems. When a user tries to access an ACS, the ACS looks up the user's public key and uses it to send a challenge to the user. The server expects a response to the challenge in which the user must use his or her private key.
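The following is a minimal sketch of this exchange, assuming RSA signatures and the third-party Python `cryptography` package (any comparable signature library would do): the server issues a random challenge, the user signs it with the private key, and the server verifies the signature against the stored public key.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# Enrollment: the user generates a key pair; the server stores only the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
stored_public_key = private_key.public_key()

# 1. The server issues a random challenge.
challenge = os.urandom(32)

# 2. The user signs the challenge with the private key, which never leaves the user's machine.
signature = private_key.sign(
    challenge,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# 3. The server verifies the response against the stored public key.
try:
    stored_public_key.verify(
        signature,
        challenge,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("user authenticated")
except InvalidSignature:
    print("authentication failed")
```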
If the user then signs the response using his or her \nprivate key, he or she is authenticated as legitimate.\nTo enhance public key security, the private key never leaves the user’s machine, \nand therefore, cannot be stolen or guessed like a password can. In addition, the \nprivate key has a passphrase associated with it; so even if the private key is sto­\nlen, the attacker must still guess the passphrase in order to gain access. The ACS \nis used in several authentication schemes including SSL, Kerberos, and MD5 \nauthentication.\n10.5.2.1  Secure Sockets Layer (SSL) Authentication\nSecure Sockets Layer (SSL) is an industry standard protocol designed by Netscape \nCommunications Corporation for securing network connections. SSL provides \nauthentication, encryption, and data integrity using public key infrastructure (PKI). \nSSL authentication, being cryptographic-based, uses a public/private key pair that \nmust be generated before the process can begin. Communicating elements acquire \nverification certificates from a certificate authority (CA).\nA certificate authority is a trusted third party, between any two communicating elements \nsuch as network servers, that certifies that the other two or more entities involved in the \nintercommunication, including individual users, databases, administrators, clients, serv­\ners, are who they say they are. The certificate authority certifies each user by verifying each \nuser’s identity and grants a certificate, signing it with the certificate authority’s private key. \nUpon verification, the certificate authority then publishes its own certificate which includes \nits public key. Each network entity, server, database, and others gets a list of certificates \nfrom all the trusted CAs and it consults this list every time there is a communicating user \nentity that needs authentication. With the CA’s issued certificate, the CA guarantees that \nanything digitally signed using that certificate is legal. As we will see in the next chapter, \nsometimes it is possible to also get a private key along with the certificate, if the user does \nnot want to generate the corresponding private key from the certificate. As e-commerce \npicks up momentum, there is an increasing need for a number of creditable companies to \nsign up as CAs. And indeed many are signing up. If the trend continues, it is likely that the \nuse of digital certificates issued and verified by a CA as part of a public key infrastructure \n(PKI) is likely to become a standard for future e-commerce.\nThese certificates are signed by calculating a checksum over the certificate and \nencrypting the checksum and other information using the private key of a signing \ncertificate. User certificates can be created and signed by a signing certificate which \n" }, { "page_number": 233, "text": "218\b\n10  Authentication\ncan be used in the SSL protocol for authentication purposes. The following steps are \nneeded for an SSL authentication [10]:\nThe user initiates a connection to the server by using SSL.\n• \nSSL performs the handshake between client and server.\n• \nIf the handshake is successful, the server verifies that the user has the appropriate \n• \nauthorization to access the resource.\nThe SSL handshake consists of the following steps [10]:\nThe client and server establish which authenticating algorithm to use.\n• \nThe server sends its certificate to the client. The client verifies that the server’s \n• \ncertificate was signed by a trusted CA. 
Similarly, if client authentication is \nrequired, the client sends its own certificate to the server. The server verifies that \nthe client’s certificate was signed by a trusted CA.\nThe client and server exchange key material using public key cryptography (see \n• \nmore of this in the next chapter), and from this material, they each generate a \nsession key. All subsequent communication between client and server is encrypted \nand decrypted by using this set of session keys and the negotiated cipher suite.\nIt is also possible to authenticate using a two-way SSL authentication, a form of \nmutual authentication. In two-way SSL authentication, both the client and server \nmust present a certificate before the connection is established between them.\n10.5.2.2  Kerberos Authentication\nKerberos is a network authentication protocol developed at the Massachusetts Insti­\ntute of Technology (MIT) and designed to provide strong authentication for client/\nserver applications by using PKI technology. See RFC 1510 for more details on \nKerberos. It was designed to authenticate users’ requests to the server.\nIn his paper “The Moron’s Guide to Kerberos,” Brian Tung, using satire, com­\npares the authentication by Kerberos to that of an individual using a driver’s license \nissued by the Department of Motor Vehicles (DMV). He observes that in each case, \npersonal identity consists of a name and an address and some other information, \nsuch as a birth date. In addition, there may be some restrictions on what the named \nperson can do; for instance, he or she may be required to wear corrective lenses \nwhile driving. Finally, the identification has a limited lifetime, represented by the \nexpiration date on the card.\nHe compares this real-life case to the working of Kerberos. Kerberos typically \nis used when a user on a network is attempting to make use of a network service \nand the service wants assurance that the user is who he says he is. To that end, just \nlike a merchant would want you to present your driver’s license issued by the DMV \nbefore he or she issues you with a ticket for the needed service, the Kerberos user \ngets a ticket that is issued by the Kerberos authentication server (AS). The service \nthen examines the ticket to verify the identity of the user. If all checks out, then the \nuser is issued an access ticket [11].\n" }, { "page_number": 234, "text": "10.5  Authentication Methods\b\n219\nAccording to Barkley [12], there are five players involved in the Kerberos \nauthentication process: the user, the client who acts on behalf of the user, the \nkey-distribution-center, the ticket-granting-service, and the server providing the \nrequested service. The role of the key-distribution center, as we will see in the com­\ning chapter and also Chapter 16, is to play a trusted third party between the two \ncommunicating elements, the client and the server. The server, commonly known \nas the “Kerberos server” is actually the Key Distribution Center, or the KDC for \nshort. The KDC implements the Authentication Service (AS) and the Ticket Grant­\ning Service (TGS).\nWhen a user wants a service, the user provides the client with a password. The \nclient then talks to the Authentication Service to get a Ticket Granting Ticket. This \nticket is encrypted with the user’s password or with a session key provided by the \nAS. The client then uses this ticket to talk to the Ticket Granting Service to verify \nthe user’s identity using the Ticket Granting Ticket. 
The TGS then issues a ticket \nfor the desired service.\nThe ticket consists of the\nrequested servername,\n• \nclientname,\n• \naddress of the client,\n• \ntime the ticket was issued,\n• \nlifetime of the ticket,\n• \nsession key to be used between the client and the server, and\n• \nsome other fields.\n• \nThe ticket is encrypted using the server’s secret key, and thus cannot be correctly \ndecrypted by the user.\nIn addition to the ticket, the user must also present to the server an authenticator \nwhich consists of the\nclientname,\n• \naddress,\n• \ncurrent time, and\n• \nsome other fields.\n• \nThe authenticator is encrypted by the client using the session key shared with the \nserver. The authenticator provides a time-validation for the credentials.\nA user seeking server authentication must then present to the server both the \nticket and the authenticator. If the server can properly decrypt both the ticket, when \nit is presented by the client, and the client’s authenticator encrypted using the ses­\nsion key contained in the ticket, the server can have confidence that the user is who \nhe claims to be [12].\nThe KDC has a copy of every password and/or secret key associated with every \nuser and server and it issues Ticket Granting Tickets so users do not have to enter \nin their passwords every time they wish to connect to a Kerberized service or keep \na copy of their password around. If the Ticket Granting Ticket is compromised, an \nattacker can only masquerade as a user until the ticket expires [13].\n" }, { "page_number": 235, "text": "220\b\n10  Authentication\nSince the KDC stores all user and server secret keys and passwords, it must be \nwell secured and must have stringent access control mechanism. If the secret key \ndatabase is penetrated, a great deal of damage can occur.\n10.5.2.3  MD5 for Authentication\nIn the previous chapter, we discussed MD5 as one of the standard encryption algo­\nrithms in use today. Beyond encryption, MD5 can be used in authentication. In \nfact, the authentication process using MD5 is very simple. Each user has a file \ncontaining a set of keys that are used as input into an MD5 hash. The information \nbeing supplied to the authenticating server, such as passwords, has its MD5 check­\nsum calculated using these keys and is then transferred to the authenticating server \nalong with the MD5 hash result. The authenticating server then gets user identity \ninformation such as password, obtains the user’s set of keys from a key file, and \nthen calculates the MD5 hash value. If the two are in agreement, authentication is \nsuccessful [11].\n10.5.3  Remote Authentication\nRemote authentication is used to authenticate users who dial in to the ACS from \na remote host. This can be done in several ways, including using Secure Remote \nProcedure Call (RPC), Dail-up, and Remote Authentication Dail-In User Services \n(RADIUS) authentication.\n10.5.3.1  Secure RPC Authentication\nThere are many services, especially Internet services, in which the client may not \nwant to identify itself to the server, and the server may not require any identifica­\ntion from the client. Services falling in this category, like the Network File System \n(NFS), require stronger security than the other services. Remote Procedure Call \n(RPC) authentication provides that degree of security. 
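Before looking at the individual remote authentication mechanisms, the keyed-hash check described in Section 10.5.2.3 can be sketched in a few lines. The key-file layout and the key-then-message concatenation order are assumptions made for illustration, and MD5 appears only because it is the algorithm named above; a modern deployment would use HMAC with a stronger hash.

```python
# Minimal sketch of the keyed-MD5 check described in Section 10.5.2.3.
# Concatenating key and message before hashing is an illustrative assumption;
# real deployments should use HMAC and a stronger hash, since MD5 is no
# longer considered secure.
import hashlib

def md5_checksum(key: bytes, message: bytes) -> str:
    return hashlib.md5(key + message).hexdigest()

# Client side: compute the checksum over the credentials with the user's key.
user_key = b"key-from-users-key-file"        # illustrative key material
credentials = b"alice:correct-horse-battery"
sent_digest = md5_checksum(user_key, credentials)

# Server side: look up the same key for the claimed user and recompute.
server_key = user_key                         # obtained from the server's key file
authenticated = (md5_checksum(server_key, credentials) == sent_digest)
print("authenticated:", authenticated)
```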
Since the RPC authentication subsystem package is open-ended, different forms and multiple types of authentication can be used by RPC, including

• NULL Authentication
• UNIX Authentication
• Data Encryption Standard (DES) Authentication
• DES Authentication Protocol
• Diffie-Hellman Encryption

Servers providing these call services require that users be authenticated for every RPC call, with keys distributed to servers and clients using any of these encryption standards.

10.5.3.2  Dial-in Authentication

As in remote calls, passwords are required in dial-in connections. The Point-to-Point Protocol (PPP) is the most common of all dial-in connections, usually over serial lines or ISDN. An authentication process must precede any successful login. Dial-in authentication services authenticate the peer device, not the user of the device. There are several dial-in authentication mechanisms. PPP authentication mechanisms, for example, include the Password Authentication Protocol (PAP), the Challenge Handshake Authentication Protocol (CHAP), and the Extensible Authentication Protocol (EAP) [8].

• The PAP authentication protocol allows the peer to establish its identity to the authenticator in a two-way handshake when the link is established. The link is used to send the authenticator an initial packet containing the peer name and password. The authenticator responds with an authenticate-ACK if everything checks out, and the authentication process is complete. PAP is a simple authentication process, but it sends the peer's credentials to the authenticator in the clear, where they can be intercepted by an eavesdropper.

• The CHAP authentication protocol periodically verifies the identity of the peer using a three-way handshake. Like PAP, it uses the handshake to initialize the link. After establishing the link, CHAP requires that the peer seeking authentication and the authenticator share a secret that is never actually sent over the link. The secret is verified through a challenge-response exchange: the authenticator first sends a challenge consisting of an identifier, a random number, and the host name of the peer or user. The peer responds to the challenge by using a one-way hash to calculate a value, with the shared secret as input to the hash. The peer then sends the authenticator its identification, the output of the hash, the random number, and the peer name or user name. The authenticator verifies these by performing the same hash computation and authenticates the peer if everything checks out. A replay attack on CHAP authentication is still possible, so steps must be taken to safeguard the handling of the shared secrets.

• The Extensible Authentication Protocol (EAP) supports multiple authentication mechanisms. Like all other PPP authentication mechanisms, a link is first established. The authenticator then sends one or more requests, each with a type field to indicate what is being requested, to the peer seeking authentication. The peer responds with a packet carrying the matching type field, as requested. The authenticator then verifies the content of the packet and grants or denies authentication. EAP is the most flexible of the three because it provides a capability for new authentication technologies to be added.

10.5.3.3  RADIUS

Remote Authentication Dial-In User Service (RADIUS) is a commonly used protocol that carries a dial-up user's credentials to the ACS, which performs the user authentication.
Because \nall information from the remote host travels in the clear, RADIUS is considered to \nbe vulnerable to attacks and therefore not secure. We will discuss RADIUS in detail \nin Chapter 17.\n" }, { "page_number": 237, "text": "222\b\n10  Authentication\n10.5.4  Anonymous Authentication\nNot all users who seek authentication to use system resources always want to use \noperations that modify entries or access protected attributes or entries that generally \nrequire client authentication. Clients who do not intend to perform any of these oper­\nations typically use anonymous authentication. Mostly these users are not indigenous \nusers in a sense that they do not have membership to the system they want access to. \nIn order to give them access to some system resources, for example, to a company \nWeb site, these users, usually customers, are given access to the resources via a spe­\ncial “anonymous” account. System services that are used by many users who are not \nindigenous, such as the World Wide Web service or the FTP service, must include an \nanonymous account to process anonymous requests. For example, Windows Internet \nInformation Services (IIS) creates the anonymous account for Web services, IUSR_\nmachinename, during its setup. By default, all Web client requests use this account, \nand clients are given access to Web content when they use it. You can enable both \nanonymous logon access and authenticated access at the same time [14].\n10.5.5  Digital Signature-Based Authentication\nDigital signature-based authentication is yet another authentication technique that \ndoes not require passwords and user names. A digital signature is a cryptographic \nscheme used by the message recipient and any third party to verify the sender’s \nidentity and/or message on authenticity. It consists of an electronic signature that \nuses public key infrastructure (PKI) to verify the identity of the sender of a message \nor of the signer of a document. The scheme may include a number of algorithms and \nfunctions including the Digital Signature Algorithm (DSA), Elliptic Curve Digital \nSignature and Algorithm (ECDSA), account authority digital signature, authentica­\ntion function, and signing function [6, 7].\nThe idea of a digital signature is basically the same as that of a handwritten \nsignature, to authenticate the signer. It is used to authenticate the fact that what has \nbeen promised by a signature can’t be taken back later. Like a paper signature, the \ndigital signature creates a legal and psychological link between the signer of the \nmessage and the message.\nAs we will discuss in detail in the next chapter, since digital signatures use PKI, \nboth a public key and a private key must be acquired in order to use the scheme. \nThe private key is kept and used by the signer to sign documents. The person who \nverifies the document then uses the signer’s corresponding public key to make sure \nthe signer is who he or she claims to be. With keys, the user sends the authentica­\ntion request to the ACS. Upon receipt of the request, the server uses its private key \nto decrypt the request. Again, as we will discuss in Chapter 10, both these keys are \nonly mathematically related, so knowing the public key to verify the signer’s signa­\nture does not require knowledge of the signer’s private key. 
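A minimal sketch of this sign-and-verify flow follows. It assumes the third-party Python cryptography package and uses ECDSA over the P-256 curve purely as an example; key distribution and the certificate machinery discussed in the next chapter are omitted.

```python
# Minimal sketch of digital signature-based authentication: the signer uses
# the private key, the verifier uses only the corresponding public key.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Signer: generate a key pair once; keep the private key secret.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()          # published to verifiers

message = b"authentication request from alice"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Verifier: needs only the message, the signature, and the public key.
try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature valid: sender holds the matching private key")
except InvalidSignature:
    print("signature invalid: reject the request")
```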
In practice, it is computationally infeasible to derive the private key from knowledge of the public key.

10.5.6  Wireless Authentication

Because of the growing use of wireless technology and its currently weak security, there is a growing need for wireless network authentication for mobile devices as they connect to fixed as well as mobile networks. The IEEE 802.1X standard, through the Extensible Authentication Protocol (EAP), has built-in authentication for mobile unit users. This authentication requires Wi-Fi mobile units to authenticate with network operating systems such as Windows XP.

10.6  Developing an Authentication Policy

In many organizations, the type of authentication used is not part of the security policy, which means that the rank and file of the users in the organization do not have a say in which authentication policy is used. It is nevertheless becoming increasingly popular to involve as wide a spectrum of users as possible, and in as much detail as possible, in security decisions.

This means involving most users early in the development of the authentication policy. Sometimes it even requires input from the business and IT communities that do business with the organization; this is often key to ensuring acceptance of, and compliance with, the policy by those communities. Paul Brooke lists the following steps as necessary for a good authentication policy [15]:

• List and categorize the resources that need to be accessed, whether these resources are data or systems. Categorize them by their business sensitivity and criticality.

• Define the requirements for access to each of the above categories, taking into account both the value of the resource in the category and the method of access (such as LAN, Internet, or dial-up). For example, as Brooke notes, common internal resources, such as e-mail or file and print systems, might require only the single-factor authentication included in the operating system, as long as the access is via the internal LAN.

• Set requirements for passwords and IDs. Every authentication policy should clearly state requirements for the following:
• ID format: authentication policies should strive to employ as universal an ID format as possible, to make the management of IDs and passwords much easier.
• Complexity: whether or not to require nonalphabetic characters in passwords.
• Length: the minimum and maximum password lengths.
• Aging: how frequently passwords must be changed.
• Reuse: how soon a password may be reused.
• Administrative access: whether there are special requirements for superuser passwords.
• Defaults: whether to allow default passwords for vendors and other special-interest users.
• Guest and shared accounts: whether guest accounts will be used and, if so, whether there are any special administration, password, or authentication requirements.
• Storage: the required storage for passwords.
This is important for the storage of \nencrypted or hashed passwords.\nTransmission:\n• \n to decide on the requirements for transmission of passwords; \nis clear-text transmission of passwords during authentication or is encryption \nrequired?\nReplication:\n• \n to decide on the requirements for replication of password \ndatabases; how often must it occur, and are there any special requirements \nfor transmission?\nCreate and implement processes for the management of authentication systems.\n• \nCommunicate policies and procedures to all concerned in the organizations \n• \nand outside it. The creation of policies and procedures has no value unless the \ncommunity regulated by them is made aware. Compliance cannot be expected if \npeople are not conscious of the requirements.\nExercises\n  1.\t Authentication is based on three factors. List the factors and discuss why each \none determines which type of authentication to use.\n  2.\t Making an authentication policy must be a well kept secret to ensure the secu­\nrity of the intended system. Why then is it so important that a security policy \ninclude an authentication policy that involves as many as possible? What kind \nof people must be left out?\n  3.\t In RPC authentication, why it is necessary that each client request that server \nservices be authenticated by the authentication server?\n  4.\t The Kerberos authentication process actually involves two tickets. Explain the \nneed for each ticket and why only one ticket cannot be used.\n  5.\t Discuss in detail the role played by each one of the five players in a Kerberos \nauthentication process.\n  6.\t There are many compelling reasons why a system that must implement secu­\nrity to the maximum must give anonymous authentication to a class of users. \nDetail five of these reasons.\n  7.\t Does anonymous authentication compromise the security of systems for the \nadvantages of a few services?\n  8.\t Discuss the role of certificate authentication in e-commerce.\n  9.\t Many predict that the future of e-commerce is pegged on the successful imple­\nmentation of authentication. Discuss.\n10.\t Discuss the role of public key authentication in the growth of e-commerce.\n" }, { "page_number": 240, "text": "References\b\n225\nAdvanced Exercises\n  1.\t Research and discuss the much talked about role of public key authentication \nin the future of e-commerce. Is the role of PKI in authentication exagger­\nated?\n  2.\t Study the dial-in authentication mechanisms. What mechanisms (discuss five) \ncan be used in EAP?\n  3.\t Discuss the benefits of enhancement of basic authentication with a crypto­\ngraphic scheme such as Kerberos, SSL, and others. Give specific ­examples.\n  4.\t Authentication using certificates, although considered safe, suffers from weak­\nnesses. Discuss these weaknesses using specific examples.\n  5.\t Kerberos and SSL are additional layers to enhance authentication. Detail how \nthese enhancements are achieved in both cases.\nReferences\n  1.\t Pipkin, Donald, L. Information Security: Protecting the Global Enterprise. Upper Saddle \nRiver, NJ: Prentice Hall, 2000.\n  2.\t Holden, Greg. Guide to Firewalls and Network Security: Intrusion Detection and VPNs. \n­Boston, MA: Thomason Learning, 2004.\n  3.\t The Rainbow Books, National Computer Security Center, http://www.fas.org/irp/nsa/\nrainbow/tg017.htm)\n  4.\t Marshall, Bruce. Consider Your Options for Authentication. 
http://www.ins.com/downloads/\npublications/bMarshall_issa_password_article_062002.pdf\n  5.\t Cryptography Research Group – Projects. http://www.research.ibm.com/security/projects.\nhtml\n  6.\t Galbraith, Steven, and Wenbo Mao. Invisibility and Anonymity of Undeniable and Consumer \nSignatures. http://www-uk.hpl.hp.com/people/wm/papers/InAnRSA.pdf\n  7.\t Glossary of terms. http://www.asuretee.com/developers/authentication-terms.shtm\n  8.\t Kaeo, Merike. Designing Network Security: A Practical Guide to Creating a Secure Network \nInfrastructure. Indianapolis: Cisco Press, 1999.\n  9.\t Digital Signature Authentication. http://www.cequrux.com/support/firewall/node29.htm\n10.\t Configuring SSL Authentication. Oracle Advance Security Administrator’s Guide Release \n8.1.5. A677–01 http://www.csee.umbc.edu/help/oracle8/network.815/a67766/09_ssl.htm\n11.\t Brian Tung. The Moron’s Guide to Kerberos. http://www.isi.edu/˜brian/security/kerberos.\nhtml.\n12.\t Barkley, John. Robust Authentication Procedures. http://csrc.nist.gov/publications/nistpubs/\n800–7/node166.html\n13.\t General Information on Kerberos. http://www.cmf.nrl.navy.mil/CCS/people/kenh/kerberos-faq.\nhtml#tgttgs\n14.\t Certificate Authentication. http://www.ssh.com/support/documentation/online/ssh/adminguide\n/32/Certificate_Authentication-2.html\n15.\t Paul Brooke. Setting The Stage For Authentication Network Computing. http://www.\nnetworkcomputing.com/1211/1211ws22.html.\n" }, { "page_number": 241, "text": "J.M. Kizza, A Guide to Computer Network Security, Computer Communications and \n­Networks, DOI 10.1007/978-1-84800-917-2_11, © Springer-Verlag London Limited 2009\n\b\n227\nChapter 11\nCryptography\n11.1  Definition\nSo much has been said and so much has been gained; thousands of lives have been \nlost, and empires have fallen because a secret was not kept. Efforts to keep secrets \nhave been made by humans probably since the beginning of humanity itself. Long \nago, humans discovered the essence of secrecy. The art of keeping secrets resulted \nin victories in wars and in growth of mighty empires. Powerful rulers learned to \nkeep secrets and pass information without interception; that was the beginning of \ncryptography. Although the basic concepts of cryptography predate the Greeks, the \npresent word cryptography, used to describe the art of secret communication, comes \nfrom the Greek meaning “secret writing.” From its rather simple beginnings, cryp­\ntography has grown in tandem with technology and its importance has also similarly \ngrown. Just as in its early days, good cryptographic prowess still wins wars.\nAs we get dragged more and more into the new information society, the kind \nof face-to-face and paper-traceable communication that characterized the nondigi­\ntal communication before the information revolution, the kind of communication \nthat guaranteed personal privacy and security, is increasingly becoming redefined \ninto the new information society where faceless digital communication regimes are \nguaranteeing neither information and personal security nor personal privacy. Cen­\nturies old and trusted global transactions and commercial systems that guaranteed \nbusiness exchange and payment systems are being eroded and replaced with dif­\nficult to trust and easily counterfeitable electronic systems. 
The technological and \ncommunication revolution has further resulted in massive global surveillance of \nmillions of individuals and many times innocent ones by either their governments \nor private companies; the fight for personal privacy has never been any more fierce, \nand the integrity and confidentiality of data have become more urgent than ever \nbefore. The security and trust of digital transaction systems have become of critical \nimportance as more and more organizations and businesses join the e-commerce \ntrain. The very future of global commerce is at stake in this new information society \nunless and until the security of e-commerce can be guaranteed.\nCryptography is being increasingly used to fight off this massive invasion of \nindividual privacy and security, to guarantee data integrity and confidentiality, and \n" }, { "page_number": 242, "text": "228\b\n11  Cryptography\nto bring trust in global e-commerce. Cryptography has become the main tool for \nproviding the needed digital security in the modern digital communication medium \nthat far exceeds the kind of security that was offered by any medium before it. \nIt guarantees authorization, authentication, integrity, confidentiality, and nonrepu­\ndiation in all communications and data exchanges in the new information society. \nTable 11.1shows how cryptography guarantees these security services through five \nbasic mechanisms that include symmetric and public key encryption, hashing, digi­\ntal signatures, and certificates.\nA cryptographic system consists of four essential components [1]:\nPlaintext – the original message to be sent.\n• \nCryptographic system (cryptosystem) or a cipher – consisting of mathematical \n• \nencryption and decryption algorithms.\nCiphertext – the result of applying an encryption algorithm to the original \n• \nmessage before it is sent to the recipient.\nKey – a string of bits used by the two mathematical algorithms in encrypting and \n• \ndecrypting processes.\nA cipher or a cryptosystem is a pair of invertible functions, one for encrypting \nor enciphering and the other for decryption or deciphering. The word cipher has its \norigin in an Arabic word sifr, meaning empty or zero. The encryption process uses \nthe cryptographic algorithm, known as the encryption algorithm, and a selected key \nto transform the plaintext data into an encrypted form called ciphertext, usually \nunintelligible form. The ciphertext can then be transmitted across the communica­\ntion channels to the intended destination.\nA cipher can either be a stream cipher or a block cipher. Stream ciphers rely on a \nkey derivation function to generate a key stream. The key and an algorithm are then \napplied to each bit, one at a time. Even though stream ciphers are faster and smaller \nto implement, they have an important security gap. If the same key stream is used, \ncertain types of attacks may cause the information to be revealed. Block ciphers, on \nthe other hand, break a message up into chunks and combine a key with each chunk, \nfor example, 64 or 128 bits of text. Since most modern ciphers are block ciphers, let \nus look at those in more details.\nTable 11.1  Modern cryptographic security services\nSecurity Services\nCryptographic Mechanism to Achieve the Service\nConfidentiality\nSymmetric encryption\nAuthentication\nDigital signatures and digital certificates\nIntegrity\nDecryption of digital signature with a public key to obtain the \nmessage digest. The message is hashed to create a second digest. 
If \nthe digests are identical, the message is authentic and the signer’s \nidentity is proven.\nNonrepudiation\nDigital signatures of a hashed message then encrypting the result \nwith the private key of the sender, thus binding the digital signature \nto the message being sent.\nNonreplay\nEncryption, hashing, and digital signature\n" }, { "page_number": 243, "text": "11.1  Definition\b\n229\n11.1.1  Block Ciphers\nBlock ciphers operate on combinations of blocks of plaintext and ciphertext. \nThe block size is usually 64 bits, but operating on blocks of 64 bits (8 bytes) is not \nalways useful and may be vulnerable to simple cryptanalysis attacks. This is so \nbecause the same plaintext always produces the same ciphertext. Such block encryp­\ntion is especially vulnerable to replay attacks. To solve this problem, it is common \nto apply the ciphertext from the previous encrypted block to the next block in a \nsequence into a combination resulting into a final ciphertext stream. Also to prevent \nidentical messages encrypted on the same day from producing identical ciphertext, \nan initialization vector derived from a random number generator is combined with \nthe text in the first block and the key. This ensures that all subsequent blocks result \nin ciphertext that doesn’t match that of the first encrypting.\nSeveral block cipher combination modes of operation are in use today. The most \ncommon ones are described below [2]:\nElectronic Codebook (ECB) mode – this is the simplest block cipher mode of \n• \noperation in which one block of plaintext always produces the same block of \nciphertext. This weakness makes it easy for the crypt-analysts to break the code \nand easily decrypt that ciphertext block whenever it appears in a message. This \nvulnerability is greatest at the beginning and end of messages, where well-defined \nheaders and footers contain common information about the sender, receiver, and \ndate.\nBlock Chaining (CBC) mode is a mode of operation for a block cipher that uses \n• \nwhat is known as an initialization vector (IV) of a certain length. One of its key \ncharacteristics is that it uses a chaining mechanism that causes the decryption \nof a block of ciphertext to depend on all the preceding ciphertext blocks. As a \nresult, the entire validity of all preceding blocks is contained in the immediately \nprevious ciphertext block. A single bit error in a ciphertext block affects the \ndecryption of all subsequent blocks. Rearrangement of the order of the ciphertext \nblocks causes decryption to become corrupted. Basically, in cipher block \nchaining, each plaintext block is XORed (exclusive ORed) with the immediately \nprevious ciphertext block and then encrypted.\nCipher Feedback (CFB) is similar to the previous CBC in that the following data \n• \nis combined with previous data so that identical patterns in the plaintext result in \ndifferent patterns in the ciphertext. However, the difference between CBC and \nCFB is that in CFB data is encrypted a byte at a time and each byte is encrypted \nalong with the previous 7 bytes of ciphertext.\nOutput Feedback (OFB) is a mode similar to the CFB in that it permits encryption \n• \nof differing block sizes, but has the key difference that the output of the \nencryption block function is the feedback, not the ciphertext. The XOR value of \neach plaintext block is created independently of both the plaintext and ciphertext. 
\nAlso like CFB, OFB uses an initialization vector (IV) and changing the IV in the \nsame plaintext block results in different ciphertext streams. It has no chaining \ndependencies. One problem with it is that the plaintext can be easily altered.\n" }, { "page_number": 244, "text": "230\b\n11  Cryptography\nWhile cryptography is the art of keeping messages secret, cryptanalysis is the \nart of breaking cipher codes and retrieving the plaintext from the ciphertext with­\nout knowing the proper key. The process of cryptanalysis involves a cryptanalyst \nstudying the ciphertext for patterns that can lead to the recovery of either the key or \nthe plaintext. Ciphertexts can also be cracked by an intruder through the process of \nguessing the key.\nThis is an exhaustive trial and error technique which with patience or luck, \nwhichever works first, may lead to the key. Although this seems to be difficult, with \ntoday’s fast computers, this approach is becoming widely used by hackers than ever \nbefore.\nThe power of cryptography lies in the degree of difficulty in cracking the cipher­\ntext back into plaintext after it has been transmitted through either protected or \nunprotected channels. The beauty of a strong encryption algorithm is that the \nciphertext can be transmitted across naked channels without fear of interception \nand recovery of the original plaintext. The decryption process also uses a key and a \ndecryption algorithm to recover the plaintext from the ciphertext. The hallmark of a \ngood cryptographic system is that the security of the whole system does not depend \non either the encryption or decryption algorithms but rather on the secrecy of the \nkey. This means that the encryption algorithm may be known and used several times \nand by many people as long as the key is kept a secret. This further means that the \nbest way to crack an encryption is to get hold of the key.\nKey-based encryption algorithm can either be symmetric, also commonly known \nas conventional encryption, or asymmetric, also known as public key encryption. \nSymmetric algorithms are actually secret-key-based, where both the encryption and \ndecryption algorithms use this same key for encryption and decryption. Asymmetric \nor public key algorithms, unlike symmetric ones, use a different key for encryption \nand decryption, and the decryption key cannot be derived from the encryption key.\n11.2  Symmetric Encryption\nSymmetric encryption or secret key encryption, as it is usually called, uses a com­\nmon key and the same cryptographic algorithm to scramble and unscramble the \nmessage as shown in Figs. 11.1 and 11.2. The transmitted final ciphertext stream \nis usually a chained combination of blocks of the plaintext, the secret key, and the \nciphertext.\nThe security of the transmitted data depends on the assumption that eavesdrop­\npers and cryptanalysts with no knowledge of the key are unable to read the mes­\nsage. However, for a symmetric encryption scheme to work, the key must be shared \nbetween the sender and the receiver. The sharing is usually done through passing the \nkey from the sender to the receiver. This presents a problem in many different ways, \nas we will see in Section 11.2.2. 
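Before turning to that problem, here is a minimal sketch of the symmetric encrypt/decrypt operation of Figs. 11.1 and 11.2, using AES in the CBC chaining mode described in Section 11.1.1. It assumes the third-party Python cryptography package, and it deliberately ignores how the shared key reaches the receiver.

```python
# Minimal sketch of symmetric (secret-key) encryption using AES in the CBC
# chaining mode described in Section 11.1.1. Requires the third-party
# "cryptography" package; key distribution is deliberately ignored here.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

key = os.urandom(32)          # 256-bit secret key shared by sender and receiver
iv = os.urandom(16)           # fresh initialization vector for every message

plaintext = b"This is the original message"
padder = padding.PKCS7(128).padder()
padded = padder.update(plaintext) + padder.finalize()

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(padded) + encryptor.finalize()

# The receiver, holding the same key and the transmitted IV, reverses the steps.
decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
unpadder = padding.PKCS7(128).unpadder()
recovered = unpadder.update(decryptor.update(ciphertext) + decryptor.finalize())
recovered += unpadder.finalize()
assert recovered == plaintext
```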
The question that arises is how to keep the key secure while it is transported from the sender to the receiver. Symmetric algorithms are, however, faster than their counterparts, the public key algorithms.

11.2.1  Symmetric Encryption Algorithms

The most widely used symmetric encryption method in the United States is the block cipher Triple Data Encryption Standard (3DES). Triple DES developed from the original, and now broken, DES, whose 64-bit key consists of 56 effective key bits and 8 parity bits. DES encrypts data in 8-byte chunks, passing each 64-bit block through 16 iterations consisting of complex shifting, exclusive ORing, substitution, and expansion of the key. Figure 11.3 shows how the underlying DES algorithm works; Triple DES applies it several times with multiple keys.

Although 3DES is more complex and therefore more secure than DES, it inherits several drawbacks, including the short key, fixed at 56 bits plus 8 bits of parity, of each underlying DES operation. That limited key length, combined with the ever-increasing speed of newer computers, has rendered single DES useless, since it is now feasible to try every possible key in the range 0 to 2^56 − 1.

Because of this, the National Institute of Standards and Technology (NIST) has presented the Advanced Encryption Standard (AES), which is expected to replace DES. The AES algorithm is Rijndael, developed by two Belgian researchers, Joan Daemen and Vincent Rijmen.

Fig. 11.1  Symmetric Encryption (the sender's encryption algorithm turns plaintext into ciphertext, which crosses the Internet and is decrypted by the receiver with the same algorithm and key)

Fig. 11.2  Encryption and Decryption with Symmetric Cryptography

Several other symmetric encryption algorithms in use today include the International Data Encryption Algorithm (IDEA), Blowfish, Rivest Cipher 4 (RC4), RC5, and CAST-128. See Table 11.2 for symmetric key algorithms.

Table 11.2  Symmetric key algorithms
Algorithm    Strength    Key length (bits)
3DES         Strong      64, 112, 168
AES          Strong      128, 192, 256
IDEA         Strong      64, 128
Blowfish     Weak        32–448
RC4          Weak        –
RC5          Strong      32, 64, 128
BEST         Strong      –
CAST-128     Strong      32, 128

Fig. 11.3  The DES Algorithm (a 64-bit plaintext block and a 56-bit key pass through an initial permutation, 16 key-dependent rounds driven by left circular key shifts, and an inverse initial permutation to produce the 64-bit ciphertext)

11.2.2  Problems with Symmetric Encryption

As we pointed out earlier, symmetric encryption, although fast, suffers from several problems in the modern digital communication environment. These are a direct result of the nature of symmetric encryption. Perhaps the biggest problem is that a single key must be shared by each pair of sender and receiver. In a distributed environment with large numbers of communicating pairs in a many-to-one topology, it is difficult for a single recipient to keep track of so many keys in order to support all of its communication.

In addition to the key distribution problem above, the size of the communication space presents problems, as the short sketch below illustrates.
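The arithmetic is simple but sobering; the population sizes in the sketch below are arbitrary and chosen only for illustration.

```python
# How many secret keys does pairwise symmetric encryption need?
# For n communicating parties, every pair needs its own key: n*(n-1)/2.
# With public key encryption each party needs just one key pair: 2*n keys.
# The values of n below are arbitrary, chosen only for illustration.
for n in (100, 10_000, 1_000_000):
    pairwise = n * (n - 1) // 2
    public_key = 2 * n
    print(f"n={n:>9,}  pairwise secret keys={pairwise:>17,}  public/private keys={public_key:>11,}")
```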
Because of the massive potential number of individuals \nwho can carry on communication in a many-to-one, one-to-many, and many-to-\nmany topologies supported by the Internet, for example, the secret-key cryptog­\nraphy, if strictly used, requires billions of secret keys pairs to be created, shared, \nand stored. This can be a nightmare! Large numbers of potential correspondents in \nthe many-to-one, one-to-many, and many-to-many communication topologies may \ncause symmetric encryption to fail because of its requirement of prior relationships \nwith the parties to establish the communication protocols like the setting up of and \nacquisition of the secret key.\nBesides the problems discussed above and as a result of them, the following \nadditional problems are also observable:\nThe integrity of data can be compromised because the receiver cannot verify that \n• \nthe message has not been altered before receipt.\nIt is possible for the sender to repudiate the message because there are no \n• \nmechanisms for the receiver to make sure that the message has been sent by the \nclaimed sender.\nThe method does not give a way to ensure secrecy even if the encryption process \n• \nis compromised.\nThe secret key may not be changed frequently enough to ensure confidentiality.\n• \n11.3  Public Key Encryption\nSince the symmetric encryption scheme suffered from all those problems we have \njust discussed above, there was a need for a more modern cryptographic scheme \nto address these flaws. The answers came from two people: Martin Hellman and \nWhitfield Diffie, who developed a method that seemed to solve at least the first two \nproblems and probably all four by guaranteeing secure communication without the \nneed for a secret key. Their scheme, consisting of mathematical algorithms, led to \nwhat is known as a public key encryption (PKE).\nPublic key encryption, commonly known asymmetric encryption, uses two dif­\nferent keys, a public key known to all and a private key known only to the sender \nand the receiver. Both the sender and the receiver own a pair of keys, one public \n" }, { "page_number": 248, "text": "234\b\n11  Cryptography\nand the other a closely guarded private one. To encrypt a message from sender A to \nreceiver B, as shown in Fig. 11.4, both A and B must create their own pairs of keys. \nThen A and B publicize their public keys – anybody can acquire them. When A has \nto send a message M to B, A uses B’s public key to encrypt M. On receipt of M, B \nthen uses his or her private key to decrypt the message M. As long as only B, the \nrecipient, has access to the private key, then A, the sender, is assured that only B, \nthe recipient, can decrypt the message. This ensures data confidentiality. Data integ­\nrity is also ensured because for data to be modified by an attacker, it requires the \nattacker to have B’s, the recipient’s, private key. Data confidentiality and integrity \nin public key encryption is also guaranteed in Fig. 11.4.\nAs can be seen, ensuring data confidentiality and integrity does not prevent a \nthird party, unknown to both communicating parties, from pretending to be A, the \nsender. This is possible because anyone can get A’s, the sender’s public key. This \nweakness must, therefore, be addressed, and the way to do so is through guarantee­\ning of sender nonrepudiation and user authentication. 
This is done as follows: after both A and B have created their own pairs of keys and exchanged the public keys, A, the sender, encrypts the message to be sent to B, the recipient, using the sender's private key. Upon receipt of the encrypted message, B, the recipient, then uses A's, the sender's, public key to decrypt the message. The return route is similar. This is illustrated in Fig. 11.5. Authentication of users is ensured because only the sender and recipient have access to their respective private keys; unless their keys have been compromised, neither can deny or repudiate sending the messages.

To ensure all four aspects of security, that is, data confidentiality and integrity as well as authentication and nonrepudiation of users, a double encryption is required, as illustrated in Fig. 11.6.

Fig. 11.4  Public Key Encryption with Data Integrity and Confidentiality (each sender encrypts M with the recipient's public key; the recipient decrypts it with its own private key)

The core of public key encryption is that no secret key is passed between the two communicating parties. This means that this approach can support all communication topologies, including one-to-one, one-to-many, many-to-many, and many-to-one, and along with it, several to thousands of people can communicate with one party without any exchange of keys. This makes it suitable for Internet communication and electronic commerce applications. Its other advantage is that it solves the chronic repudiation problem experienced by symmetric encryption. This problem is solved, especially in large groups, by the use of digital signatures and certificates.

The various cryptographic algorithms used in this scheme rely on the degree of computational difficulty encountered as an attempt is made to recover the keys.

Fig. 11.5  Authentication and Non-repudiation (each sender encrypts M with its own private key; the recipient verifies the sender by decrypting with the sender's public key)

Fig. 11.6  Ensuring Data Confidentiality and Integrity and User Authentication and Non-repudiation (M is encrypted with both the sender's private key and the recipient's public key)

These algorithms, as we will see in Section 11.4, should be labor intensive, and the amount of work and the difficulty involved should, and in practice always do, increase with the key length. The longer the key, the more difficult it is, and the longer it should take, to guess the key, usually the private key.

11.3.1  Public Key Encryption Algorithms

Various algorithms exist for public key encryption, including RSA, DSA, PGP, and ElGamal. Table 11.3 shows the features of such algorithms.

11.3.2  Problems with Public Key Encryption

Although public key encryption seems to have solved the major chronic encryption problems of key exchange and message repudiation, it still has its own problems. The biggest problem for the public key cryptographic scheme is speed, as the short timing sketch below suggests.
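The following rough timing sketch makes the point. It assumes the third-party Python cryptography package; the absolute numbers are machine dependent, and only the large ratio between the two schemes matters.

```python
# Rough timing sketch: public key (RSA-OAEP) versus symmetric (AES-GCM)
# encryption of the same short message. Absolute times are machine dependent;
# the point is the large ratio. Requires the "cryptography" package.
# RSA-OAEP with a 2048-bit key can only encrypt short messages, one more
# reason hybrid schemes (Section 11.4) are used in practice.
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

message = b"thirty-two bytes of sample text."
rounds = 1000

rsa_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
rsa_pub = rsa_priv.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

aes_key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(aes_key)

start = time.perf_counter()
for _ in range(rounds):
    rsa_priv.decrypt(rsa_pub.encrypt(message, oaep), oaep)
rsa_time = time.perf_counter() - start

start = time.perf_counter()
for _ in range(rounds):
    nonce = b"\x00" * 12   # a fixed nonce is acceptable only in a throwaway benchmark
    aesgcm.decrypt(nonce, aesgcm.encrypt(nonce, message, None), None)
aes_time = time.perf_counter() - start

print(f"RSA: {rsa_time:.2f}s  AES: {aes_time:.2f}s  ratio: {rsa_time / aes_time:.0f}x")
```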
Public key algorithms are extremely slow compared to symmetric algorithms. This is because public key calculations take longer than symmetric key calculations: they involve exponentiation of very large numbers, which takes longer to compute. Even the fastest public key cryptographic algorithms, such as RSA, are still far slower than any typical symmetric algorithm. This makes these algorithms, and the public key scheme in general, less desirable for long messages.

In addition to speed, public key encryption algorithms are potentially vulnerable to the man-in-the-middle attack. The man-in-the-middle attack is well known, especially in the network community, where an attacker sniffs packets off a communication channel, modifies them, and inserts them back onto the channel. In the case of an attack on an encrypted channel, the intruder convinces one of the correspondents that the intruder is the legitimate communication partner.

Table 11.3  Public key algorithms
Algorithm         Strength    Key length (bits)
RSA               Strong      768, 1024
ElGamal           Strong      768, 1024
DSA               Strong      512 to 1024
Diffie-Hellman    Strong      768, 1024

11.3.3  Public Key Encryption Services

As it strives to solve the flaws that have plagued other encryption schemes, the public key encryption scheme offers the following services:

• Secrecy, which makes it extremely difficult for an intruder who is able to intercept the ciphertext to determine its corresponding plaintext. See Fig. 11.4.
• Authenticity, which makes it possible for the recipient to validate the source of a message. See Fig. 11.4.
• Integrity, which makes it possible to ensure that the message sent cannot be modified in any way during transmission. See Fig. 11.5.
• Nonrepudiation, which makes it possible to ensure that the sender of the message cannot later turn around and disown the transmitted message. See Fig. 11.5.

11.4  Enhancing Security: Combining Symmetric and Public Key Encryptions

As we noted in Section 11.2.2, symmetric algorithms, although faster than public key algorithms, are beset with a number of problems. Similarly, public key encryption suffers from slowness and the potential of the man-in-the-middle attacker. To address these concerns, to preserve both the efficiency and the privacy of the communication channel, and to increase the performance of the system, a hybrid cryptosystem that uses the best of both, while mitigating the worst of each, is widely used.

11.5  Key Management: Generation, Transportation, and Distribution

One would have thought that the development of advanced technologies would already have solved the chronic problem of exchanging a secret key between two communicating entities. However, technology is created by humans, humans remain part of any technology, and humans naturally form the weakest link in any technology: they are unpredictable in what they are likely to do and why they do it. Key exchange in cryptographic technologies would not otherwise have been a problem, but because of humans, it is.

In a small communication network based on a one-to-one communication topology, the key exchange probably would not be such a problem. A short sketch of how two parties can agree on a shared secret without ever transmitting it appears below.
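The sketch uses the modern X25519 variant of Diffie-Hellman key agreement, which Section 11.5.2 mentions as the common way of creating such secret keys. It assumes the third-party Python cryptography package, and it omits the authentication of the exchanged public values that is needed to defeat a man-in-the-middle.

```python
# Minimal sketch of Diffie-Hellman-style key agreement (X25519 variant):
# each side publishes a public value and combines it with its own private
# value; both arrive at the same shared secret without ever sending it.
# Requires the "cryptography" package. Authenticating the public values,
# which is what defeats a man-in-the-middle, is omitted here.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Only the public halves cross the network.
alice_public = alice_private.public_key()
bob_public = bob_private.public_key()

alice_shared = alice_private.exchange(bob_public)
bob_shared = bob_private.exchange(alice_public)
assert alice_shared == bob_shared

# Derive a usable session key from the raw shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"illustrative session key").derive(alice_shared)
print("derived", len(session_key), "byte session key")
```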
However, in modern \nlarge networks that support many-to-one, many-to-many, and one-to-many com­\nmunication topologies, the creation, distribution, and security of millions of keys \nboils down to a nightmare.\n11.5.1  The Key Exchange Problem\nIn Section 11.2.2 we saw that although symmetric encryption is commonly used \ndue to its historical position in the cryptography and its speed, it suffers from a seri­\nous problem of how to safely and secretly deliver a secret key from the sender to \n" }, { "page_number": 252, "text": "238\b\n11  Cryptography\nthe recipient. This problem forms the basis for the key exchange problem. The key \nexchange problem involves [2] the following:\nensuring that keys are exchanged so that the sender and receiver can perform \n• \nencryption and decryption,\nensuring that an eavesdropper or outside party cannot break the code, and\n• \nensuring the receiver that a message was encrypted by the sender.\n• \nThe strength of an encryption algorithm lies in its key distribution techniques. \nPoor key distribution techniques create an ideal environment for a man-in-the-middle \nattack. The key exchange problem, therefore, highlights the need for strong key dis­\ntribution techniques. Even though the key exchange problem is more prominent in \nthe symmetric encryption cryptographic methods, and it is basically solved by the \npublic key cryptographic methods, some key exchange problems still remain in pub­\nlic key cryptographic methods. For example, symmetric key encryption requires the \ntwo communicating parties to have agreed upon their secret key ahead of time before \ncommunicating, and public key encryption suffers from the difficulty of securely \nobtaining the public key of the recipient. However, both of these problems can be \nsolved using a trusted third party or an intermediary. For symmetric key cryptography, \nthe trusted intermediary is called a Key Distribution Center (KDC). For public key \ncryptography, the trusted and scalable intermediary is called a Certificate Authority \n(CA). See the side bar in Section 9.5.2.2 for a definition of a certificate authority.\nAnother method relies on users to distribute and track each other’s keys and trust \nin an informal, distributed fashion. This has been popularized as a viable alternative \nby the PGP software which calls the model the web of trust [2].\n11.5.2  Key Distribution Centers (KDCs)\nA Key Distribution Center (KDC) is a single, trusted network entity with which all \nnetwork communicating elements must establish a shared secret key. It requires all \ncommunicating elements to have a shared secret key with which they can commu­\nnicate with the KDC confidentially. However, this requirement still presents a prob­\nlem of distributing this shared key. The KDC does not create or generate keys for the \ncommunicating elements; it only stores and distributes keys. The creation of keys \nmust be done somewhere else. Diffie-Halmann is the commonly used algorithm to \ncreate secret keys and it provides the way to distribute these keys between the two \ncommunicating parties. But since the Diffie-Halmann exchange suffers from the \nman-in-the middle attacks, it is best used with a public key encryption algorithm \nto ensure authentication and integrity. Since all network communicating elements \nconfidentially share their secret keys with the KDC, it distributes these keys secretly \nto the corresponding partners in the communication upon request. 
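The exchange just summarized (and elaborated in the scenario that follows) can be condensed into a toy sketch. The Fernet construction from the third-party Python cryptography package stands in for whatever symmetric cipher the KDC and its clients share, and the message layout is invented purely for illustration.

```python
# Toy sketch of a Key Distribution Center (KDC). Each principal already
# shares a long-term key with the KDC; the KDC invents a session key and
# hands A two things: the session key sealed for A, and a "ticket" (the
# session key plus A's identity) sealed for B, which A simply forwards.
# Fernet stands in for any symmetric cipher; the message layout is invented
# for illustration. Requires the "cryptography" package.
from cryptography.fernet import Fernet

# Long-term keys shared, out of band, between the KDC and each principal.
key_A = Fernet.generate_key()
key_B = Fernet.generate_key()

def kdc_issue(requester: bytes, peer_key: bytes, requester_key: bytes):
    session_key = Fernet.generate_key()
    for_requester = Fernet(requester_key).encrypt(session_key)
    ticket_for_peer = Fernet(peer_key).encrypt(requester + b"|" + session_key)
    return for_requester, ticket_for_peer

# A asks the KDC for a key to talk to B, then forwards B's ticket to B.
for_A, ticket = kdc_issue(b"A", key_B, key_A)
session_at_A = Fernet(key_A).decrypt(for_A)
identity, session_at_B = Fernet(key_B).decrypt(ticket).split(b"|", 1)
assert session_at_A == session_at_B and identity == b"A"
```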
Any network \nelement that wants to communicate with any other element in the network using \nsymmetric encryption schemes uses the KDC to obtain the shared keys needed for \nthat communication. Figure 11.7 shows the working of the KDC.\n" }, { "page_number": 253, "text": "11.5  \u0007Key Management: Generation, Transportation, and ­Distribution\b\n239\nStallings [3] has a very good scenario which describes the working of the KDC, \nand he describes this working as follows. First both the message sender A and the \nmessage receiver B each must have a secret key they each share with the KDC. A \ninitiates the communication process by sending a request to the KDC for a session \nkey and B’s secret key. The KDC responds to this request by sending a two-part \npacket to A. The first part to be sent to A consists of A’s request to the KDC, B’s \nsecret key, and a session key. The second part, to be sent to B, consists of A’s iden­\ntity and a copy of the session key given to A. Since the packet is to be sent to A, it \nis encrypted by the secret key the KDC shares with A. When A receives the packet, \nA then gets out B’s secret key and encrypts the message together with B’s part of \nthe packet with B’s secret key and sends it to B. On receipt, B uses the secret key B \nshares with the KDC to decrypt the package from A to recover the session key. Now \nthe session key has been distributed to both A and B. After a few housekeeping and \nauthentication handshake, communication can begin.\nThe KDC has several disadvantages including the following:\nThe two network communicating elements must belong to the same KDC.\n• \nSecurity becomes a problem because a central authority having access to keys is \n• \nvulnerable to penetration. Because of the concentration of trust, a single security \nbreach on the KDC would compromise the entire system.\nIn large networks that handle all communication topologies, the KDC then \n• \nbecomes a bottleneck since each pair of users needing a key must access a central \nnode at least once. Also the failure of the central authority could disrupt the key \ndistribution system [4].\nIn large networks with varying communication topologies where network com­\nmunicating elements cannot belong to the same KDC, key distribution may become \nkey distribution center\n1. request from A\nfor session key.\n2. response from\nKDC with session\n+ B's keys\n3. A sends\nencrypted\nmessage M to B\nencrypted with\nB's secret key\n4. B gets A's\nsecreyt and\nsession key from\nKDC and\nencrypts\nmessage.\n5. Both A and B\nhave a session\nkey.\nCommunication\nbegins.\nFig. 11.7  The Working of a KDC\n" }, { "page_number": 254, "text": "240\b\n11  Cryptography\na real problem. Such problems are solved by the Public Key Infrastructure (PKI). \nWe will discuss PKI in Section 11.6.\n11.5.3  Public Key Management\nBecause there was a problem with both authenticity and integrity in the distribution \nof public keys, there was a need to find a solution to this problem. In fact, according \nto Stallings [3], there were two problems: the distribution of the public keys, and \nthe use of public key encryption to distribute the secret key. For the distribution of \npublic keys, there were several solutions including the following:\nPublic announcements where any user can broadcast their public keys or send \n• \nthem to selected individuals\nPublic directory which is maintained by a trusted authority. 
The directory is \n• \nusually dynamic to accommodate additions and deletions\nCertificate Authority (CA) to distribute certificates to each communicating \n• \nelement. Each communicating element in a network or system communicates \nsecurely with the CA to register its public key with the CA. Since public keys are \nalready in public arena, the registration may be done using a variety of techniques \nincluding the postal service.\n11.5.3.1  Certificate Authority (CA)\nThe CA then certifies that a public key belongs to a particular entity. The entity \nmay be a person or a server in a network. The certified public key, if one can safely \ntrust the CA that certified the key, can then be used with confidence. Certifying a \nkey by the CA actually binds that key to a particular network communicating ele­\nment which validates that element. In a wide area network such as the Internet, \nCAs are equivalent to the digital world’s passport offices because they issue digital \ncertificates and validate the holder’s identity and authority. Just as the passport in \nthe real world has embedded information about you, the certificate issued by the \nCAs has an individual’s or an organization’s public key along with other identifying \ninformation embedded in it and then cryptographically time-stamped, signed, and \ntamper-proof sealed. It can then be used to verify the integrity of the data within it \nand to validate this data whenever it is presented. A CA has the following roles [5]:\nIt authenticates a communicating element to the other communicating parties that \n• \nthat element is what it says it is. However, one can trust the identity associated \nwith a public key only to the extent that one can trust a CA and its identity \nverification techniques.\nOnce the CA verifies the identity of the entity, the CA creates a \n• \ndigital certificate \nthat binds the public key of the element to the identity. The certificate contains \nthe public key and other identifying information about the owner of the public \n" }, { "page_number": 255, "text": "11.5  \u0007Key Management: Generation, Transportation, and ­Distribution\b\n241\nkey (for example, a human name or an IP address). The certificate is digitally \nsigned by the CA.\nSince CA verifies the validity of the communicating elements’ certificates, it is in \ncharge of enrolling, distributing, and revoking certificates. Because certificates are \nissued by many different CAs, much of the format of certificates has been defined \nto ensure validity, manageability, and consistence in the scheme.\nTo lessen the activities of the CA and therefore improve on the performance of the \nCA, users who acquire certificates become responsible for managing their own certifi­\ncates. In doing so, any user who initiates a communication must provide his or her cer­\ntificate and other identifying information such as a date and random number and send \nit to the recipient together with a request for the recipient’s certificate. Upon receipt of \nthese documents, the recipient sends his or her certificate. Each party then validates \neach other’s certificate and upon approval by either party, communication begins.\nDuring the validation process, each user may periodically check the CA’s lists \nof certificates which have become invalid before their expiration dates due to key \ncompromise or administrative reasons. 
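Such a list is published by the CA as a certificate revocation list (CRL), which we return to in Section 11.6.1. As a rough illustration of the check itself, the sketch below uses the third-party Python cryptography package (an assumption, not part of the text) to test whether a peer's certificate appears on a downloaded CRL; the file names are hypothetical.

# Sketch: checking a certificate's serial number against a CA's published CRL.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography import x509

with open("peer_cert.pem", "rb") as f:          # hypothetical certificate file
    cert = x509.load_pem_x509_certificate(f.read())

with open("ca_crl.pem", "rb") as f:             # hypothetical CRL downloaded from the CA
    crl = x509.load_pem_x509_crl(f.read())

revoked = crl.get_revoked_certificate_by_serial_number(cert.serial_number)
if revoked is not None:
    print("certificate was revoked on", revoked.revocation_date)
else:
    print("certificate does not appear on this CRL")

Every such lookup means consulting a list that the CA publishes and keeps current.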
Since this may require online access to the \nCA’s central facility, this may sometimes create a bottleneck.\n11.5.3.2  Digital Certificates\nA digital certificate is a digitally signed message used to attest to the validity of the \npublic key of a communicating element. As we pointed out, digital certificates must \nadhere to a format. Most digital certificates follow the International Telecommuni­\ncation Union (ITU-T) X.509 standard. According to RFC 1422, the X.509 digital \ncertificate has the following fields as shown in Table 11.4.\nIn modern communication, the use of certificates has become common and vital to \nthe security of such communications. For example, in a network environment, in order \nto encrypt transmissions to your server, the client requires the server’s public key. The \nintegrity of that key is vital to the security of the subsequent sessions. If a third party, \nTable 11.4  The ITU-T X.509 digital certificate format [6]\nField\nPurpose\nVersion number\nMost certificates use X.509 version 3.\nSerial number\nUnique number set by a CA\nIssuer\nName of the CA\nSubject issued certificate\nName of a receiver of the certificate\nValidity period\nPeriod in which certificate will valid\nPublic-key algorithm infor­\nmation of the subject of the \ncertificate\nAlgorithm used to sign the certificate with digital signature\nDigital signature of the issuing \nauthority\nDigital signature of the certificate signed by CA\nPublic key\nPublic key of the subject\n" }, { "page_number": 256, "text": "242\b\n11  Cryptography\nfor example, were to intercept the communication and replace the legitimate key with \nhis or her own public key, that man-in-the-middle could view all traffic or even mod­\nify the data in transit. Neither the client nor the server would detect the intrusion.\nSo to prevent this, the client demands from the server, and the server sends the \npublic key in a certificate signed by a certificate authority. The client checks that \ndigital signature. If the signature is valid, the client knows that the CA has certified \nthat this is the server’s authentic certificate, not a certificate forged by a man-in-\nthe-middle. It is important that the CA be a trusted third party in order to provide \nmeaningful authentication.\nAs we close the discussion on digital certificates, let us look at how it compares \nwith a digital signature in authentication. In Section 11.6, we discussed the role of \ndigital signatures in authenticating messages and identifying users in public key \nencryption. But digital signatures alone cannot authenticate any message and iden­\ntify a user without a mechanism to authenticate the public key, a role played by the \ndigital certificate. Similarly a digital certificate alone cannot authenticate a message \nor identify a user without a digital signature. So in order to get a full authentication \nof a message and identify the user one needs both the digital signature and digital \ncertificate, both of them working together.\nSeveral companies now offer digital certificates – that means they are function­\ning as CAs. Among those are VeriSign, American Express, Netscape, US Postal \nService, and Cybertrust.\n11.5.3.3  Using a Private Certificate Authority\nIf a business is running its own Intranet, it is a security imperative that the security \nadministrator chooses either a public CA or a private CA. It is also possible for the \nsecurity administrator to create his or her own CA. 
If one decides to do this, then \ncare must be taken in doing so. One should consider the following steps [7]:\nConsultation with a security expert before building is essential.\n• \nDo all the CA work offline.\n• \nBecause it plays a crucial role in the security of the network, it is important \n• \nthat access, both physical and electronic, to the in-house CA must be highly \nrestricted.\nProtect the CA from all types of surveillance.\n• \nRequire users to generate key pairs of adequate sizes, preferably 1024-bit.\n• \nIf the decision is not to use an in-house CA, then it is important to be careful in \nchoosing a good trusted CA.\n11.5.4  Key Escrow\nKey escrow is a scheme in which a copy of the secret key is entrusted to a third \nparty. This is similar to entrusting a copy of the key to your house or car to a trusted \n" }, { "page_number": 257, "text": "11.6  Public Key Infrastructure (PKI)\b\n243\nfriend. In itself, it is not a bad idea because you can genuinely lose the key or lock \nit inside the house or car. So in case of the loss of the main key, a copy can always \nbe retrieved from the friend. For private arrangements such as this, the idea of a \nkey escrow is great. However, in a public communication network like the Internet, \nthe idea is not so good. Key escrow began because, as the Internet become more \naccessible, wrong characters and criminals joined in with vices such as money laun­\ndering, gambling pornography, and drugs. The U.S. government, at least in public, \nfound it necessary to rein in on organized crime on the Internet. The way to do it, \nas it was seen at that time, was through a key escrow program, and it was hence \nborn.\nSince it was first proposed by government, the key escrow program raised a \nheated debate between those who feel that the program of key escrow is putting \nindividual privacy at risk and those who argue that law enforcement officials must \nbe given the technological ability and sometimes advantage to fight organized crime \non the Internet.\nThe key escrow debate was crystallized by the Clipper chip. The Clipper chip, \nfunded by the U.S. government, was intended to protect private online and telecom­\nmunication communications, while at the same time permitting government agents \nto obtain the keys upon presentation of legal warrant. The government appointed \ntwo government agencies to act as the escrow bodies. These agencies were the \nNIST and the Treasury Department.\nThe opposition to the Clipper chip was so strong that government was forced to \nopt for its use to be voluntary.\n11.6  Public Key Infrastructure (PKI)\nWe saw in Section 11.5.2 that in large networks with varying communication topol­\nogies where network communicating elements cannot belong to the same KDC, key \ndistribution becomes a real problem. These problems are solved when a Public Key \nInfrastructure (PKI) is used instead of KDCs to provide trusted and efficient key \nand certificate management. What then is this PKI? Merike Kaeo, quoting the Inter­\nnet X.509 Public Key Infrastructure PKIX defines public key infrastructure (PKI) \nas the set of hardware, software, people, policies, and procedures needed to create, \nmanage, store, distribute, and revoke certificates based on public key cryptography \n[2]. PKI automate all these activities. PKI works best when there is a large mass \nof users. Under such circumstances, it creates and distributes digital certificates \nwidely to many users in a trusted manner. 
It is made up of four major pieces: the \ncertificates that represent the authentication token; the CA that holds the ultimate \ndecision on subject authentication; the registration authority (RA) that accepts and \nprocesses certificate signing requests on behalf of end users; and the Lightweight \nDirectory Access Protocol (LDAP) directories that hold publicly available certifi­\ncate information [8].\n" }, { "page_number": 258, "text": "244\b\n11  Cryptography\n11.6.1  Certificates\nWe defined certificates in Section 11.5.3.1 as the cryptographic proof that the \npublic key they contain is indeed the one that corresponds to the identity stamped \non the same certificate. The validation of the identity of the public key on the \ncertificate is made by the CA that signs the certificate before it is issued to the \nuser. Let us note here for emphasis that public keys are distributed through digital \ncertificates. The X.509 v3 certificate format, as we noted in Section 11.5.3.1, has \nnine fields. The first seven make up the body of the certificate. Any change in \nthese fields may cause the certificate to become invalid. If a certificate becomes \ninvalid, the CA must revoke it. The CA then keeps and periodically updates the \ncertificate revocation list (CRL). End-users are, therefore, required to frequently \ncheck on the CRL.\n11.6.2  Certificate Authority\nCAs are vital in PKI technology to authoritatively associate a public key signature \nwith an alleged identity by signing certificates that support the PKI. Although the \nCAs play an important role in the PKI technology, they must be kept offline and \nused only to issue certificates to a select number of smaller certification entities. \nThese entities perform most of the day-to-day certificate creation and signature \nverification.\nSince the CAs are offline and given their role in the PKI technology, there must \nbe adequate security for the system on which they are stored so that their integrity is \nmaintained. In addition, the medium containing the CA’s secret key itself should be \nkept separate from the CA host in a highly secure location. Finally, all procedures \nthat involve the handling of the CA private key should be performed by two or more \noperators to ensure accountability in the event of a discrepancy.\n11.6.3  Registration Authority (RA)\nThe RAs accept and process certificate signing requests from users. Thus, they cre­\nate the binding among public keys, certificate holders, and other attributes.\n11.6.4  Lightweight Directory Access Protocols (LDAP)\nThese are repositories that store and make available certificates and Certificate \nRevocation Lists (CRLs). Developed at the University of Michigan, the LDAP was \nmeant to make the access to X.509 directories easier. 
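To connect this back to the certificate format of Table 11.4, the short sketch below, again using the third-party Python cryptography package and a hypothetical file name, loads a PEM-encoded certificate and prints the main X.509 fields.

# Sketch: inspecting the X.509 fields of a certificate (cf. Table 11.4).
from cryptography import x509

with open("server.pem", "rb") as f:              # hypothetical certificate file
    cert = x509.load_pem_x509_certificate(f.read())

print("version:       ", cert.version)           # most certificates use X.509 v3
print("serial number: ", cert.serial_number)     # unique number set by the CA
print("issuer:        ", cert.issuer.rfc4514_string())    # name of the CA
print("subject:       ", cert.subject.rfc4514_string())   # holder of the certificate
print("valid from:    ", cert.not_valid_before)
print("valid until:   ", cert.not_valid_after)
print("signature alg: ", cert.signature_algorithm_oid)    # algorithm used by the CA to sign
print("public key:    ", cert.public_key())               # the subject's public key

However a certificate reaches the relying party, these are the fields that get inspected.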
Other ways of distributing digital certificates are by FTP and HTTP.

11.6.5  Role of Cryptography in Communication

From our discussion so far, you should by now have come to the conclusion that cryptography is a vital component of modern communication and that public key technology, in particular, is widely used and increasingly acknowledged as one of the best ways to secure many applications in e-commerce, e-mail, and VPNs.

11.7  Hash Function

In the previous sections, we have seen how both symmetric and public key encryption are used to ensure data confidentiality and integrity as well as user authentication and non-repudiation, especially when the two methods are combined. Another way to provide data integrity and authenticity is to use hash functions.

A hash function is a mathematical function that takes an input message M of arbitrary length and creates a fixed-length output code. The code, usually a 128-bit or 160-bit stream, is commonly referred to as a hash or a message digest. A one-way hash function, a variant of the hash function, is used to create a signature or fingerprint of the message – just like a human fingerprint. On input of a message, the hash function compresses the bits of the message to a fixed-size hash value in a way that distributes the possible messages evenly among the possible hash values. Using the same hash function on the same message always results in the same message digest; different messages almost always hash to different message digests, and finding exceptions is designed to be computationally infeasible.

A cryptographic hash function does this in a way that makes it extremely difficult to come up with two or more messages that would hash to a particular hash value. It is conjectured that finding two messages that hash to the same message digest requires work on the order of 2^64 operations, and that finding a message that hashes to a given message digest requires work on the order of 2^128 operations [9].

To ensure data integrity and authenticity, both the sender and the recipient perform the same hash computation, using the same hash function, on the message before it is sent and after it has been received. If the two computations produce the same value, then the message has not been tampered with during transmission.

There are several standard hash functions, usually grouped by digest length, including the 160-bit digests (SHA-1 and RIPEMD-160) and the 128-bit digests (MD2, MD4, and MD5). The Message Digest hash algorithms MD2, MD4, and MD5 are credited to Ron Rivest, while the Secure Hash Algorithm (SHA) was developed by NIST. The most popular of these hash algorithms are SHA and MD5. Table 11.5 shows some more details of these algorithms.

Table 11.5  Standard hash algorithms

Algorithm      Digest length (bits)   Block size / notes
SHA-1          160                    512-bit blocks
MD5            128                    512-bit blocks
HMAC-MD5       128                    Keyed (HMAC) version of MD5
HMAC-SHA-1     160                    Keyed (HMAC) version of SHA-1
RIPEMD-160     160                    512-bit blocks

11.8  Digital Signatures

While we use hash functions to ensure the integrity and authenticity of the message, we need a technique to establish the authenticity and integrity of each message and each user so that we ensure the nonrepudiation of the users.
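Before turning to digital signatures, the integrity check just described is easy to see in code. Below is a minimal sketch using Python's standard hashlib module; SHA-256 is used here because SHA-1 and MD5, although discussed above, are no longer recommended for new designs.

import hashlib

message = b"Transfer $100 to account 1234"                  # hypothetical message

digest_at_sender = hashlib.sha256(message).hexdigest()      # computed before sending

# ... the message travels to the recipient; the digest is sent or published separately ...

received = b"Transfer $100 to account 1234"
digest_at_receiver = hashlib.sha256(received).hexdigest()   # recomputed on receipt

if digest_at_receiver == digest_at_sender:
    print("Digests match: the message was not altered in transit.")
else:
    print("Digests differ: the message was tampered with.")

A matching digest shows that the message arrived unmodified, but by itself it says nothing about who produced the message; for that, and for nonrepudiation, an additional mechanism is needed.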
This is achieved by \nthe use of a digital signature.\nA digital signature is defined as an encrypted message digest, by the private \nkey of the sender, appended to a document to analogously authenticate it, just like \nthe handwritten signature appended on a written document authenticates it. Just \nlike in the handwritten form, a digital signature is used to confirm the identity of \nthe sender and the integrity of the document. It establishes the nonrepudiation of \nthe sender.\nDigital signatures are formed using a combination of public key encryption and \none-way secure hash function according to the following steps [10]:\nThe sender of the message uses the message digest function to produce a message \n• \nauthentication code (MAC).\nThis MAC is then encrypted using the private key and the public key encryption \n• \nalgorithm. This encrypted MAC is attached to the message as the digital \nsignature.\nThe message is then sent to the receiver. Upon receipt of the message, the recipi­\nent then uses his or her public key to decrypt the digital signature. First, the recipient \nmust verify that the message indeed came from the expected sender. This step veri­\nfies the sender’s signature. It is done via the following steps [2]:\nThe recipient separates the received message into two: the original document and \n• \nthe digital signature.\nUsing the sender’s public key, the recipient then decrypts the digital signature \n• \nwhich results in the original MAC.\nThe recipient then uses the original document and inputs it to the hash function \n• \nto produce a new MAC.\nThe new MAC is compared with the MAC from the sender for a match.\n• \nIf these numbers compare, then the message was received unaltered, the data \nintegrity is assured, and the authenticity of the sender is proven. See Fig. 11.8 for \nthe working of a digital signature verification.\nBecause digital signatures are derived from the message as a digest which is then \nencrypted, they cannot be separated from the messages they are derived from and \nremain valid.\nSince digital signatures are used to authenticate the messages and identify the \nsenders of those messages, they can be used in a variety of areas where such double \nconfirmation is needed. Anything that can be digitized can be digitally signed. This \nmeans that digital signatures can be used with any kind of message, whether it is \nencrypted or not, to establish the authenticity of the sender and that the message \narrived intact. However, digital signatures cannot be used to provide the confiden­\ntiality of the message content.\n" }, { "page_number": 261, "text": "Exercises\b\n247\nAmong the most common digital signature algorithms in use today are the Digi­\ntal Signature Standard (DSS) proposed by NIST and based on the El Gamal public \nkey algorithm and RSA. DSS is faster than RSA.\nAlthough digital signatures are popular, they are not the only method of authen­\nticating the validity of the sender and the integrity of the message. Because they \nare very complex, other less complex methods are also in use, especially in the \nnetwork community. Such methods include the cyclic redundancy checking (CRC). \nIn CRC, a digital message is repeatedly divided until a remainder is derived. The \nremainder, the divisor, along with the message is then transmitted to the recipient. \nUpon receipt, the recipient would execute the same division process looking for the \nsame remainder. 
Where the remainder is the same, the recipient is assured that the \nmessage has not been tampered with during transmission.\nExercises\n  1.\t Discuss the basic components of cryptography.\n  2.\t Discuss the weaknesses of symmetric encryption.\n  3.\t Discuss the weaknesses of public key encryption.\n  4\t Why is a hybrid cryptosystem preferred over symmetric and public key encryp­\ntion systems?\n  5.\t Why is PKI so vital in modern communications?\n  6.\t Discuss the role of digital signatures in modern communication.\n  7.\t Some say that with the development of systems such as IPSec, the role the CAs \nplay in modern communication will diminish and eventually cease. Comment \non this statement.\nText Message\nHash\nHash\nPublic Key encrptioon\nDigital signature\nPublic Key decrptioon\nCompare\nMAC\nMAC\nFig. 11.8  Verifying a Digital Signature in Message Authentication\n" }, { "page_number": 262, "text": "248\b\n11  Cryptography\n  8.\t \u0007In a modern communication network, what are the limitations of a tree-struc­\ntured CA system? Why is it necessary?\n  9.\t Discuss the limitations of a KDC system in modern communication.\n10.\t Discuss the future of PKI.\nAdvanced Exercises\n1.\t Discuss the differences between digital certificates and digital signatures in \nauthentication.\n2.\t Discuss the role and function of a PKI.\n3.\t Describe the sequence of steps a sender of a message takes when sending the \nmessage with a digital signature. What steps does the receiver of such a message \ntake to recover the message?\n4.\t Compare and contrast the problems and benefits of KDC and PKI.\n5.\t Describe the message authentication process using\n(a)\t Symmetric encryption\n(b)\t Public key encryption\n(c)\t Hash function\nReferences\n\t 1.\t Stein, Lincoln, D. Web Security: A Step-by-Step Reference Guide. Boston, MA: ­Addison-Wesley, \n1998.\n\t 2.\t Kaeo, Marike. Designing Network Security. Indianapolis: Cisco Press, 1999.\n\t 3.\t Stallings, William. Cryptography and Network Security: Principles and Practice. Second \nEdition. Upper Saddle River NJ: Prentice Hall, 1999.\n\t 4.\t Frame Technology. http://www.cs.nps.navy.mil/curricula/tracks/security/notes/chap05_33.html\n\t 5.\t Key Distribution and Certification. http://cosmos.kaist.ac.kr/cs441/text/keydist.htm\n\t 6.\t Panko, Raymond, R. Corporate Computer Security. Upper Saddle River NJ: Prentice Hall, \n2004.\n\t 7.\t Certificates and Certificate Authorities. http://www-no.ucsd.edu/oldsecurity/Ca.html.\n\t 8.\t Ram and J. Honta. Keeping PKI Under Lock and Key. NetworkMagazine.com. http://www.\nnetworkmagazine.com/article/NMG20001004S0015.\n\t 9.\t Documentation on Cryptography: Message digests and digital signatures. http://pgp.rasip.fer.\nhr/pgpdoc2/pgpd2_50.html\n\t10.\t Public Key Digital Signatures. http://www.sei.cmu.edu/str/descriptions/pkds_body.html.\n" }, { "page_number": 263, "text": "J.M. Kizza, A Guide to Computer Network Security, Computer Communications and \n­Networks, DOI 10.1007/978-1-84800-917-2_12, © Springer-Verlag London Limited 2009\n\b\n249\nChapter 12\nFirewalls\n12.1  Definition\nThe rapid growth of the Internet has led to a corresponding growth of both users \nand activities in cyberspace. 
Unfortunately, not all these users and their activities \nare reputable; thus, the Internet has been increasingly, at least to many individuals \nand businesses, turning into a “bad Internet.” Bad people are plowing the Internet \nwith evil activities that include, among other things, intrusion into company and \nindividual systems looking for company data and individual information that erodes \nprivacy and security. There has, therefore, been a need to protect company systems, \nand now individual PCs, keeping them out of access from those “bad users” out \non the “bad Internet.” As companies build private networks and decide to connect \nthem onto the Internet, network security becomes one of the most important con­\ncerns network system administrators face. In fact, these network administrators are \nfacing threats from two fronts: the external Internet and the internal users within \nthe company network. So network system administrators must be able to find ways \nto restrict access to the company network or sections of the network from both the \n“bad Internet” outside and from unscrupulous inside users.\nSuch security mechanisms are based on a firewall. A firewall is a hardware, soft­\nware, or a combination of both that monitors and filters traffic packets that attempt \nto either enter or leave the protected private network. It is a tool that separates a \nprotected network or part of a network, and now increasingly a user PC, from an \nunprotected network – the “bad network” like the Internet. In many cases the “bad \nnetwork” may even be part of the company network. By definition, a “firewall,” is \na tool that provides a filter of both incoming and outgoing packets. Most firewalls \nperform two basic security functions:\nPacket filtering based on \n• \naccept or deny policy that is itself based on rules of the \nsecurity policy.\nApplication proxy gateways that provide services to the inside users and at the \n• \nsame time protect each individual host from the “bad” outside users.\nBy denying a packet, the firewall actually drops the packet. In modern firewalls, \nthe firewall logs are stored into log files and the most urgent or dangerous ones are \n" }, { "page_number": 264, "text": "250\b\n12  Firewalls\nreported to the system administrator. This reporting is slowly becoming real time. \nWe will discuss this shortly.\nIn its simplest form, a firewall can be implemented by any device or tool that \nconnects a network or an individual PC to the Internet. For example, an Ethernet \nbridge or a modem that connects to the “bad network” can be set as a firewall. Most \nfirewalls products actually offer much more as they actively filter packets from and \ninto the organization network according to certain established criteria based on the \ncompany security policy. Most organization firewalls are bastion host, although \nthere are variations in the way this is set up. A bastion host is one computer on the \norganization network with bare essential services, designated and strongly fortified \nto withstand attacks. This computer is then placed in a location where it acts as a \ngateway or a choke point for all communication into or out of the organization net­\nwork to the “bad network.” This means that every computer behind the bastion host \nmust access the “bad network” or networks through this bastion host. 
Figure 12.1 \nshows the position of a bastion host in an organization network.\nFor most organizations, a firewall is a network perimeter security, a first line of \ndefense of the organization’s network that is expected to police both network traf­\nfic inflow and outflow. This perimeter security defense varies with the perimeter of \nthe network. For example, if the organization has an extranet, an extended network \nconsisting of two or more LAN clusters, or the organization has a Virtual Private \nNetwork (VPN) (see Chapter 16), then the perimeter of the organization’s network \nRouter\nFirewall/Bastion Hostl\nLaptop\nLaptop\nLaptop\nServer\nInternet\nPrivate Network\nFig. 12.1  Bastion host between a private network and the “bad network”\n" }, { "page_number": 265, "text": "is difficult to defne. In this case, then each component of the network should have \nits own firewall. See Fig. 12.2.\nAs we pointed out earlier, the accept/deny policy used in firewalls is based on an \norganization’s security policy. The security policies most commonly used by orga­\nnizations vary ranging from completely disallowing some traffic to allowing some \nof the traffic or all the traffic. These policies are consolidated into two commonly \nused firewall security policies [1]:\nDeny-everything-not-specifically-allowed which sets the firewall in such a \n• \nway that it denies all traffic and services except a few that are added as the \norganization needs develop.\nAllow-everything-not-specifically-denied which lets in all the traffic and services \n• \nexcept those on the “forbidden” list which is developed as the organization’s \ndislikes grow.\nBased on these policies, the following design goals are derived:\nAll traffic into and out of the protected network must pass through the \n• \nfirewall.\nOnly authorized traffic, as defined by the organizational security policy, in and \n• \nout of the protected network, will be allowed to pass.\nLaptop\nLaptop\nLaptop\nLaptop\nWorkstation\nLaptop\nServer\nLaptop\nLaptop\nFirewall\nFirewall\nRouter\nVPN Tunnel and firewall\nInternet\nSite B Network\nSite A Network\nFirewall\nServer\nServer Tower\nFig. 12.2  Firewalls in a changing parameter security\n12.1  Definition\b\n251\n" }, { "page_number": 266, "text": "252\b\n12  Firewalls\nThe firewall must be immune to penetration by use of a trusted system with \n• \nsecure operating system.\nWhen these policies and goals are implemented in a firewall, then the firewall is \nsupposed to [1]\nPrevent intruders from entering and interfering with the operations of the \n• \norganization’s network. This is done through restricting which packets can enter \nthe network based on IP addresses or port numbers.\nPrevent intruders from deleting or modifying information either stored or in \nmotion within the organization’s network.\nPrevent intruders from acquiring proprietary organization information.\n• \nPrevent insiders from misusing the organization resources by restricting \n• \nunauthorized access to system resources.\nProvide authentication, although care must be taken because additional services \n• \nto the firewall may make it less efficient.\nProvide end-points to the VPN.\n• \n12.2  Types of Firewalls\nFirewalls are used very widely to offer network security services. This has resulted \nin a large repertoire of firewalls. 
To understand the many different types of fire­\nwalls, we need only look at the kind of security services firewalls offer at different \nlayers of the TCP/IP protocol stack.\nAs Table 12.1 shows, firewalls can be set up to offer security services to many \nTCP/IP layers. The many types of firewalls are classified based on the network \nlayer it offers services in and the types of services offered\nThe first type is the packet inspection or filtering router. This type of firewall uses \na set of rules to determine whether to forward or block individual packets. A packet \ninspection router could be a simple machine with multiple network interfaces or a \nsophisticated one with multiple functionalities. The second type is the application \ninspection or proxy server. The proxy server is based on specific application ­daemons \nto provide authentication and to forward packets. The third type is the authentication \nand virtual private networks (VPN). A VPN is an encrypted link in a private network \nrunning on a public network. The fourth firewall type is the small office or home \n(SOHO) firewall, and the fifth is the network address translation (NAT).\nTable 12.1  Firewall services based on network protocol layers\nLayer\t\nFirewall services\nApplication\t\nApplication-level gateways, encryption, SOCKS Proxy Server\nTransport\t\nPacket filtering (TCP, UDP, ICMP)\nNetwork\t\nNAT, IP-filtering\nData link\t\nMAC address filtering\nPhysical\t\nMay not be available\n" }, { "page_number": 267, "text": "12.2.1  Packet Inspection Firewalls\nPacket filter firewalls, the first type of firewalls, are routers that inspect the contents \nof the source or destination addresses and ports of incoming or outgoing TCP, UDP, \nand ICMP packets being sent between networks and accept or reject the packet \nbased on the specific packet policies set in the organization’s security policy. Recall \nthat a router is a machine that forwards packets between two or more networks. A \npacket inspection router, therefore, working at the network level, is programmed to \ncompare each packet to a list of rules set from the organization’s security policy, \nbefore deciding if it should be forwarded or not. Data is allowed to leave the system \nonly if the firewall rules allow it.\nTo decide whether a packet should be passed on, delayed for further inspec­\ntion, or dropped, the firewall looks through its set of rules for a rule that matches \nthe contents of the packet’s headers. If the rule matches, then the action to deny or \nallow is taken; otherwise, an alternate action of sending an ICMP message back to \nthe originator is taken.\nTwo types of packet filtering are used during packet inspection: static or state­\nless filtering in which a packet is filtered in isolation of the context it is in, and state­\nful filtering in which a packet is filtered actually based on the context the packet is \nin. The trend now for most inspection firewalls is to use stateful filtering.\nThe static or stateless filtering is a full duplex communication bastion server \nallowing two-way communication based on strict filtering rules. Each datagram \nentering the server either from the “bad” network outside the company network or \nfrom within the network is examined based on the preset filtering rules. 
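A minimal sketch of such stateless filtering follows; the rule format is hypothetical, the addresses are taken from Table 12.2, and a production filter would of course run in the router or kernel rather than in Python.

# Stateless packet filter sketch: each packet is judged in isolation against a
# fixed rule list, with a default-deny fallback (the
# "deny-everything-not-specifically-allowed" policy).
RULES = [
    {"proto": "tcp", "dst_ip": "198.124.1.0", "dst_port": 80, "action": "allow"},  # HTTP
    {"proto": "tcp", "dst_ip": "198.142.0.2", "dst_port": 21, "action": "allow"},  # FTP
    {"proto": "tcp", "dst_ip": "198.213.1.1", "dst_port": 23, "action": "deny"},   # Telnet
]
DEFAULT_ACTION = "deny"

def filter_packet(packet):
    """Return 'allow' or 'deny' for a packet described as a dict of header fields."""
    for rule in RULES:
        if (packet["proto"] == rule["proto"]
                and packet["dst_ip"] == rule["dst_ip"]
                and packet["dst_port"] == rule["dst_port"]):
            return rule["action"]
    return DEFAULT_ACTION          # nothing matched: fall back to the default policy

print(filter_packet({"proto": "tcp", "dst_ip": "198.124.1.0", "dst_port": 80}))   # allow
print(filter_packet({"proto": "udp", "dst_ip": "198.124.1.0", "dst_port": 514}))  # deny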
The rules \napply only to the information contained in the packet and anything else like the state \nof the connection between the client and the server are ignored.\nThe stateful filtering is also a full duplex communication bastion server. However, \nunlike the straight packet filtering firewall, this filters every datagram entering the \nserver both from within and outside the network based on the context which requires \na more complex set of criteria and restrictions. For each packet, the firewall examines \nthe date and state of connection between the client and the server. Because this type \nof filtering pays attention to the data payload of each packet, it is, therefore, more \nuseful and of course more complex. Examination of the data part of the packet makes \nit useful in detecting questionable data such as attachments and data from hosts not \ndirectly connected to the server. Requests from or to third party hosts and server to \nserver are strictly inspected against the rule base and logged by the firewall.\nWhether static or stateful, the rules a filtering server follows are defined based \non the organization’s network security policy, and they are based on the following \ninformation in the packet [2, 3]:\nSource address. All outgoing packets must have a source address internal to the \n• \nnetwork. Inbound packets must never have source addresses that are internal.\nDestination address. Similarly, all outgoing packets must not have a destination \n• \naddress internal to the network. Any inbound packet must have a destination \naddress that is internal to the network.\n12.2  Types of Firewalls\b\n253\n" }, { "page_number": 268, "text": "254\b\n12  Firewalls\nTCP or UDP source and destination port number\n• \nICMP message type\n• \nPayload data type\n• \nConnection initialization and datagram using TCP ACK bit.\n• \nAccording to Niels Provos [4] and as Table 12.1 shows, packet inspection based \non IP addresses, port numbers, ACK and sequence numbers, on TCP, UDP, and \nICMP headers, and on applications may occur at any one of the following TCP/IP \nand ISO stack layers:\nThe \n• \nlink layer provides physical addressing of devices on the same network. \nFirewalls operating on the link layer usually drop packets based on the media \naccess control (MAC) addresses of communicating hosts.\nThe \n• \nnetwork layer contains the Internet protocol (IP) headers that support \naddressing across networks. IP headers are inspected.\nThe \n• \ntransport layer contains TCP, UDP, and ICMP headers and provides data \nflows between hosts. Most firewalls operate at the network and transport layer \nand inspect these headers.\nThe \n• \napplication layer contains application specific protocols like HTTP, FTP, \nand SET. Inspection of application-specific protocols can be computationally \nexpensive because more data needs to be inspected.\nLet us now look at the different ways of implementing the filtering firewall based on \nIP address, TCP/UDP port numbers, and sequence numbers and ACK filtering.\n12.2.1.1  IP Address Filtering\nIP address filtering rules are used to control traffic into and out of the network \nthrough the filtering of both source and destination IP addresses. Since in a state­\nless filter, no record is kept, the filter does not remember any packet that has passed \nthrough it. This is a weakness that can be exploited by hackers to do IP-spoofing. \nTable 12.2 shows rules that filter based on IP destination, and Fig. 
12.3 shows a \nTCP, UDP, and port number filtering firewall.\n12.2.1.2  TCP and UDP Port Filtering\nAlthough IP address header filtering works very well, it may not give the system \nadministrator enough flexibility to allow users from a trusted network to access \nTable 12.2  Destination IP filtering\nApplication protocol\t\nSource IP\t\nDestination IP\t\nAction\nHTTP\t\nAny\t\n198.124.1.0\t\nAllow\nTelnet\t\nAny\t\n198.213.1.1\t\nDeny\nFTP\t\nAny\t\n198.142.0.2\t\nAllow\n" }, { "page_number": 269, "text": "specific services from a server located in the “bad network” and vice versa. For \nexample, we may not want users from the “bad network” to Telnet into any trusted \nnetwork host but the administrator may want to let them access the Web services \nthat are on the same or another machine. To leave a selective but restricted access to \nthat machine, the administrator has to be able to set filters according to the TCP or \nUDP port numbers in conjunction with the IP address filters. Table 12.3 illustrates \nthe filtering rules based on TCP and UDP ports number filtering.\nUnfortunately, as Eric Hall [5] points out, there are a several problems with \nthis approach. First, it is not easy to know what port numbers the servers that you \nare trying to access are running on. As Hall observes, modern day servers such as \nHTTP and Gopher are completely configurable in this manner, allowing the user \nto run them on any port of choice. If this type of filtering is implemented, then the \nnetwork users will not be able to access those sites that do not use the “standard” \nTable 12.3  Filtering rules based on TCP and UDP destination port numbers\nApplication\t\nProtocol\t\nDestination port number\t\nAction\nHTTP\t\nTCP\t\n80\t\nAllow\nSSL\t\nUDP\t\n443\t\nDeny\nTelnet\t\nTCP\t\n23\t\nAllow\nLaptop\nLaptop\nLaptop\nLaptop\nWorkstation\nLaptop\nServer\nLaptop\nLaptop\nFirewall\nFirewall\nRouter\nVPN Tunnel and firewall\nInternet\nSite B Network\nSite A Network\nFirewall\nServer\nServer Tower\nFig. 12.3  TCP, UDP, and port number filtering firewall\n12.2  Types of Firewalls\b\n255\n" }, { "page_number": 270, "text": "256\b\n12  Firewalls\nport numbers prescribed. In addition to not being able to pin-point to a “standard” \nport number, there is also a potential of some of the incoming response packets \ncoming from an intruder port 80.\n12.2.1.3  \u0007Packet Filtering Based on Initial Sequence Numbers (ISN) \nand Acknowledgement (ACK) Bits\nA fundamental notion in the design and reliability of the TCP protocol is a \nsequence number. Every TCP connection begins with a three-way handshaking \nsequence that establishes specific parameters of the connection. The connection \nparameters include informing the other host of the sequence numbers to be used. \nThe client initiates the three-way handshake connection request by not only setting \nthe synchronization (SYN) flag, but also by indicating the initial sequence number \n(ISN) that it will start with in addressing data bytes; the octets. This ISN is placed \nin the sequence number field.\nUpon receipt of each octet, the server responds by setting the header flags SYN \nand ACK; it also sets its ISN in the sequence number field of the response, and it \nupdates the sequence number of the next octet of data it expects from the client.\nThe acknowledgment is cumulative so that an acknowledgment of sequence \nnumber n indicates that all octets up to but not including n have been received. 
This \nmechanism is good for duplicate detection in the presence of retransmission that \nmay be caused by replays. Generally, the numbering of octets within a packet is that \nthe first data octet immediately following the header is the lowest numbered, and the \nfollowing octets are numbered consecutively. For the connection to be maintained, \nevery subsequent TCP packet in an exchange must have its octets’ ACK bits set \nfor the connection to be maintained. So the ACK bit indicates whether a packet is \nrequesting a connection or a connection has been made. Packets with 0 in the ACK \nfield are requesting for connections, while those with a 1 have ongoing connections. \nA firewall can be configured to allow packets with ACK bit 1 to access only speci­\nfied ports and only in designated directions since hackers can insert a false ACK bit \nof 1 into a packet. This makes the host think that a connection is ongoing. Table 12.4 \nshows the rules to set the ACK field.\nAccess control can be implemented by monitoring these ACK bits. Using these \nACK bits, one can limit the types of incoming data to only response packets. This \nmeans that a remote system or a hacker cannot initiate a TCP connection at all, but \ncan only respond to packets that have been sent to it.\nHowever, as Hall notes, this mechanism is not hacker proof since monitoring TCP \npackets for the ACK bit doesn’t help at all with UDP packets, as they don’t have any \nTable 12.4  Rules for filtering based on ACK field bit\nSequence number\t\nIP Destination address\t\nPort number\t\nACK\t\nAction\n15\t\n198.123.0.1\t\n80\t\n0\t\nDeny\n16\t\n198.024.1.1\t\n80\t\n1\t\nAllow\n" }, { "page_number": 271, "text": "ACK bit. Also there are some TCP connections such as FTP that initiate connections. \nSuch applications then cannot work across a firewall based on ACK bits.\n12.2.1.4  Problems with Packet Filtering Firewalls\nAlthough packet filtering, especially when it includes a combination of other pref­\nerences, can be effective, it, however, suffers from a variety of problems including \nthe following:\nUDP Port Filtering. UDP was designed for unreliable transmissions that do \n• \nnot require or benefit from negotiated connections such as broadcasts, routing \nprotocols, and advertise services. Because it is unreliable, it does not have \nan ACK bit; therefore, an administrator cannot filter it based on that. Also an \nadministrator cannot control where the UDP packet was originated. One solution \nfor UDP filtering is to deny all incoming UDP connections but allow all outgoing \nUDP packets. Of course, this policy may cause problems to some network users \nbecause there are some services that use UDP such as NFS, NTP, DNS, WINS, \nNetBIOS-over-TCP/IP, and NetWare/IP and client applications such as Archie and \nIRC. 
Such a solution may limit access to these services for those network users.\nPacket filter routers don’t normally control other vulnerabilities such as SYN \n• \nflood and other types of host flooding.\nPacket filtering does not control traffic on VPN.\n• \nFiltering, especially on old firewalls, does not hide IP addresses of hosts on the \n• \nnetwork inside the filter but lets them go through as outgoing packets where an \nintruder can get them and target the hosts.\nThey do not do any checking on the legitimacy of the protocols inside the \n• \npacket.\n12.2.2  \u0007Application Proxy Server: Filtering Based on Known \nServices\nInstead of setting filtering based on IP addresses, port numbers, and sequence num­\nbers, which may block some services from users within the protected network trying \nto access specific services, it is possible to filter traffic based on popular services \nin the organization. Define the filters so that only packets from well known and \npopularly used services are allowed into the organization network, and reject any \npackets that are not from specific applications. Such firewall servers are known as \nproxy servers\nA proxy server, sometimes just an application firewall, is a machine server that \nsits between a client application and the server offering the services the client appli­\ncation may want. It behaves as a server to the client and as a client to the server, \nhence a proxy, providing a higher level of filtering than the packet filter server \n12.2  Types of Firewalls\b\n257\n" }, { "page_number": 272, "text": "258\b\n12  Firewalls\nby examining individual application packet data streams. As each incoming data \nstream is examined, an appropriate application proxy, a program, similar to normal \nsystem daemons, is generated by the server for that particular application. The proxy \ninspects the data stream and makes a decision of either to forward, drop, or refer \nfor further inspection. Each one of these special servers is called a proxy server. \nBecause each application proxy is able to filter traffic based on an application, it is \nable to log and control all incoming and outgoing traffic and therefore offer a higher \nlevel of security and flexibility in accepting additional security functions like user-\nlevel authentication, end-to-end encryption, intelligent logging, information hiding, \nand access restriction based on service types [1].\nA proxy firewall works by first intercepting a request from a host on the inter­\nnal network and then passing it on to its destination, usually the Internet. But before \npassing it on, the proxy replaces the IP source address in the packet with its own IP \naddress and then passes it on. On receipt of packet from an external network, the proxy \ninspects the packet, replaces its own IP destination address in the packet with that of \nthe internal host, and passes it on to the internal host. The internal host does not sus­\npect that the packet is from a proxy. Figure 12.4 shows a dual-homed proxy server.\nModern proxy firewalls provides three basic operations [6]:\nHost IP address hiding – When the host inside the trusted network sends an \n• \napplication request to the firewall and the firewall allows the request through to \nthe outside Internet, a sniffer just outside the firewall may sniff the packet and \nit will reveal the source IP address. The host then may be a potential victim for \nattack. In IP address hiding, the firewall adds to the host packet its own IP header. 
\nSo that the sniffer will only see the firewall’s IP address. So application firewalls \nthen hide source IP addresses of hosts in the trusted network.\nHeader destruction is an automatic protection that some application firewalls use \n• \nto destroy outgoing packet TCP, UDP, and IP headers and replace them with its \nown headers so that a sniffer outside the firewall will see only the firewall’s IP \naddress. In fact, this action stops all types of TCP, UDP, and IP header attacks.\nLaptop\nLaptop\nFirewall\nPrivate Network\nLaptop computer\nRouter\nInternet\nExternal IP-address\nInternal IP-address\nFig. 12.4  A dual-homed proxy server\n" }, { "page_number": 273, "text": "Protocol enforcement. Since it is common in packet inspection firewalls to allow \n• \npackets through based on common port numbers, hackers have exploited this by \nport spoofing where they hackers penetrate a protected network host using common \nused and easily allowed port numbers. With an application proxy firewall, this is \nnot easy to do because each proxy acts as a server to each host and since it deals \nwith only one application, it is able to stop any port spoofing activities.\nAn example of a proxy server is a Web application firewall server. Popular Web \napplications are filtered based on their port numbers as below.\nHTTP (port 80)\n• \nFTP (port 20 and 21)\n• \nSSL (port 443)\n• \nGopher (port 70)\n• \nTelnet (port 23)\n• \nMail (port 25)\n• \nFor newer application firewall, the following proxies are also included: HTTP/\nSecure HTTP, FTP, SSL, Gopher, email, Telnet and others. This works for both \nincoming and outgoing requests.\nProxy firewalls fall into two types: application and SOCKS proxies [7, 8].\n12.2.2.1  Application Proxy\nApplication-level proxies automate the filtering and forwarding processes for the \nclient. The client application initiates the process by contacting the firewall. The \ndaemon proxy on the firewall picks up the request, processes it, and if it is accept­\nable, connects it to the server in the “bad network” (the outside world). If there is \nany response, it then waits and returns the data to the client application.\nAs we pointed out earlier, application level proxies offer a higher level of secu­\nrity because in handling all the communications, they can log every detail of the \nprocess, including all URLs visited and files downloaded. They can also be used as \nvirus scans, where possible, and language filters for inappropriate content. At login, \nthey can authenticate applications as well as users through a detailed authentication \nmechanism that includes a one-time password. Also since users do not have direct \naccess to the server, it makes it harder for the intruder to install backdoors around \nthe security system.\nTraditional filter firewalls work at a network level to address network access \ncontrol and block unauthorized network-level requests and access into the network. \nBecause of the popularity of application level services such as e-mail and Web \naccess, application proxy firewalls have become very popular to address applica­\ntion layer security by enforcing requests within application sessions. 
For example, \na Web application firewall specifically protects the Web application communication \nstream and all associated application resources from attacks that happen via the \nWeb protocol.\n12.2  Types of Firewalls\b\n259\n" }, { "page_number": 274, "text": "260\b\n12  Firewalls\nThere are two models followed in designing an application firewall: a positive \nsecurity model, which enforces positive behavior; and a negative security model, \nwhich blocks recognized attacks [9].\nPositive Security Model\nA positive security model enforces positive behavior by learning the application \nlogic and then building a security policy of valid known requests as a user inter­\nacts with the application. According to Bar-Har, the approach has the following \nsteps [9]:\nThe initial policy contains a list of valid starting conditions which the user’s \n• \ninitial request must match before the user’s session policy is created.\nThe application firewall examines the requested services in detail. For example if \n• \nit is a Web page download, the page links and drop-down menus and form fields \nare examined before a policy of all allowable requests that can be made during \nthe user’s session is built.\nUser requests are verified as valid before being passed to the server. Requests not \n• \nrecognized by the policy are blocked as invalid requests.\nThe session policy is destroyed when the user session terminates. A new policy \n• \nis created for each new session.\nNegative Security Model\nUnlike the positive model which creates a policy based on user behavior, a negative \nsecurity model is based on a predefined database of “unacceptable” signatures. The \napproach again according to Bar-Har is as follows [9]:\nCreate a database of known attack signatures.\n• \nRecognized attacks are blocked, and unknown requests (good or bad) are assumed \n• \nto be valid and passed to the server for processing.\nAll users share the same static policy.\n• \nApplication firewalls work in real time to address security threats before they \nreach either the application server or the private network.\n12.2.2.2  SOCKS Proxy\nA SOCKS proxy is a circuit-level daemon server that has limited capabilities in \na sense that it can only allow network packets that originate from nonprohibited \nsources without looking at the content of the packet itself. It does this by work­\ning like a switchboard operator who cross-wires connections through the system to \nanother outside connection without minding the content of the connection, but pays \n" }, { "page_number": 275, "text": "attention only to the legality of the connection. Another way to describe SOCKS \nservers is to say that these are firewall servers that deal with applications that have \nprotocol behaviors that cannot be filtered. Although they let through virtually all \npackets, they still provide core protection for application firewalls such as IP hiding \nand header destruction.\nThey are faster than application-level proxies because they do not open up the \npackets and although they cannot provide for user authentication, they can record \nand trace the activities of each user as to where he or she is connected to. 
Figure \n12.5 shows a proxy server.\n12.2.3  Virtual Private Network (VPN) Firewalls\nA VPN, as we will see in Chapter 16, is a cryptographic system including Point-to-\nPoint Tunneling Protocol (PPTP), Layer 2 Tunneling Protocol (L2TP), and IPSec \nthat carry Point-to-Point Protocol (PPP) frames across an Internet with multiple data \nlinks with added security. VPNs can be created using a single remote computer con­\nnecting on to a trusted network or connecting two corporate network sites. In either \ncase and at both ends of the tunnels, a VPN server can also act as a firewall server. \nMost firewall servers, however, provide VPN protection which runs in parallel with \nother authentication and inspection regimes on the server. Each packet arriving at a \nfirewall is then passed through an inspection and authentication module or a VPN \nmodule. See Fig. 12.6.\nThe advantages of a VPN over non-VPN connections like standard Internet \nconnections are as follows:\nLaptop\nLaptop\nApplication Firewall\nPrivate Network\nLaptop computer\nInternet\nHTTP Request\nFTP Request\nSMTP (email) Request\nFig. 12.5  A proxy firewall server\n12.2  Types of Firewalls\b\n261\n" }, { "page_number": 276, "text": "262\b\n12  Firewalls\nVPN technology encrypts its connections.\n• \nConnections are limited to only machines with specified IP addresses.\n• \n12.2.4  Small Office or Home (SOHO) Firewalls\nA SOHO firewall is a relatively small firewall that connects a few personal com­\nputers via a hub, switch, a bridge, even a router on one side and connecting to a \nbroadband modem like DSL or cable on the other. See Fig. 12.7. The configuration \ncan be in a small office or a home.\nIn a functioning network, every host is assigned an IP address. In a fixed network \nwhere these addresses are static, it is easy for a hacker to get hold of a host and use it to \nComputer\nLaptop\nLaptop\nServer\nServer\nLaptop\nLaptop\nLaptop\nProtected Private Network\nProtected Private Network\nVPN Tunnel through Firewalls\nFig. 12.6  VPN connections and firewalls\n" }, { "page_number": 277, "text": "stage attacks on other hosts within and outside the network. To prevent this from hap­\npening, a NAT filter can be used. It hides all inside host TCP/IP information. A NAT \nfirewall actually functions as a proxy server by hiding identities of all internal hosts \nand making requests on behalf of all internal hosts on the network. This means that to \nan outside host, all the internal hosts have one public IP address, that of the NAT.\nWhen the NAT receives a request from an internal host, it replaces the host’s IP \naddress with its own IP address. Inward bound packets all have the NAT’s IP address \nas their destination address. Figure 12.8 shows the position of a NAT firewall.\n12.3  Configuration and Implementation of a Firewall\nThere are actually two approaches to configuring a firewall to suit the needs of an \norganization. One approach is to start from nothing and make the necessary infor­\nmation gathering to establish the needs and requirements of the organization. This \nis a time-consuming approach and probably more expensive. The other approach \nis what many organizations do and take a short cut and install a vendor firewall \nalready loaded with features. 
The administrator then chooses the features that best \nmeet the established needs and requirements of the organization.\nWhether the organization is doing an in-house design of its own firewall or buy­\ning it off-the shelf, the following issues must be addressed first [10]:\nTechnical Capacity – whether large or small, organizations embarking on \n• \ninstallation of firewalls need some form of technical capacity. Such capacity may \nbe outsourced if it suits the organization.\nSecurity Review – before an organization can install a firewall, there must be \n• \nsecurity mechanisms based on a security policy to produce a prioritized list of \nsecurity objectives.\nAuditing Requirements – based on the security policy, auditing frequency and \n• \nwhat must be in the audit. For example, the degree of logging needed and the \nLaptop\nLaptop\nHub/Switch\nModem\nSOHO Router/Firewall\nTo\nISP\nLaptop\nFig. 12.7  A SOHO firewall\n12.3  Configuration and Implementation of a Firewal\b\n263\n" }, { "page_number": 278, "text": "264\b\n12  Firewalls\ndetails that are cost effective and thorough. The details included guidelines for \nrecordings, especially if the organization has plans of pursuing security incidents \nin courts of law.\nFiltering and Performance Requirements – decide on the acceptable trade-off \n• \nbetween security and performance for the organization. Then use this trade-off \nto set the level of filtering that meets that balance.\nAuthentication – if authentication for outbound sessions is required, then install \n• \nit and make sure that users are able to change their passwords.\nRemote Access – if accept remote access is to be allowed, include the requirements \n• \nfor authentication and encryption of those sessions. Also consider using VPN to \nencrypt the session. Many firewalls come with a VPN rolled in.\nApplication and network requirements – decide on the type of network traffic to be \n• \nsupported, whether network address translation (NAT), static routing, or dynamic \nrouting are needed, and whether masquerading a block of internal addresses is \nsufficient instead of NAT. As Fennelly [10] puts it, a poor understanding of the \nrequirements can lead to implementing a complicated architecture that might not \nbe necessary.\nDecide on the protocol for the firewall – finally, the type of protocols and services \n• \n(proxies) the firewall will work with must be decided on. The decision is actually \nbased on the type of services that will be offered in the organization network.\n12.4  The Demilitarized Zone (DMZ)\nA DMZ is a segment of a network or a network between the protected network and \nthe “bad external network.” It is also commonly referred to as a service network. The \npurpose of a DMZ on an organization network is to provide some insulation and extra \nLaptop\nLaptop\nNAT Firewall\nPrivate Network\nLaptop computer\nRouter\nInternet\n198.143.1.0 Public IP-address\nInternal IP-address\nFig. 12.8  A NAT firewall\n" }, { "page_number": 279, "text": "security to servers that provide the organization services for protocols such as HTTP/\nSHTTP, FTP, DNS, and SMTP to the general public. There are different setups for these \nservers. One such setup is to make these servers actually bastion hosts so that there is \na secure access to them from the internal protected network to allow limited access. \nAlthough there are restrictions on accesses from the outside network, such restrictions \nare not as restrained as those from within the protected network. 
This enables custom­\ners from the outside to access the organization’s services on the DMZ servers.\nNote that all machines in the DMZ area have a great degree of exposure from \nboth external and internal users. Therefore, these machines have the greatest poten­\ntial for attacks. This implies that these machines must be protected from both exter­\nnal and internal misuse. They are therefore fenced off by firewalls positioned on \neach side of the DMZ. See Fig. 12.9 for the positioning of DMZ servers.\nAccording to Joseph M. Adams [11], the outer firewall should be a simple screen­\ning firewall just to block certain protocols, but let others through that are allowed \nin the DMZ. For example, it should allow protocols such as FTP, HTTP/SHTTP, \nSMTP, and DNS while denying other selected protocols and address signatures. \nThis selective restriction is important not only to machines in the DMZ but also to \nthe internal protected network because once an intruder manages to penetrate the \nmachines in the DMZ, it is that easy to enter the protected internal network. For \nexample, if DMZ servers are not protected, then an intruder can easily penetrate \nthem. The internal firewall, however, should be more restrictive in order to more \nprotect the internal network from outsider intruders. It should deny even access to \nthese protocols from entering the internal network.\nLaptop\nLaptop\nFirewall\nPrivate Network\nLaptop computer\nRouter\nInternet\nInternal IP-address\nFTP Server\nWeb Server\nDMZ\nThree-pronged Firewall\nFig. 12.9  Placing of Web, DNS, FTP, and SMTP servers in the DMZ\n12.4  The Demilitarized Zone (DMZ)\b\n265\n" }, { "page_number": 280, "text": "266\b\n12  Firewalls\nBeyond the stated advantage of separating the heavily public accessed servers \nfrom the protected network, thus limiting the potential for outside intruders into \nthe network, there are other DMZ advantages. According to Chuck Semeria [12], \nDMZs offer the following additional advantages to an organization:\nThe main advantage for a DMZ is the creation of three layers of protection \n• \nthat segregate the protected network. So in order for an intruder to penetrate \nthe protected network, he or she must crack three separate routers: the outside \nfirewall router, the bastion firewall, and the inside firewall router devices.\nSince the outside router advertises the DMZ network only to the Internet, systems \n• \non the Internet do not have routes to the protected private network. This allows \nthe network manager to ensure that the private network is “invisible,” and that \nonly selected systems on the DMZ are known to the Internet via routing table and \nDNS information exchanges.\nSince the inside router advertises the DMZ network only to the private network, \n• \nsystems on the private network do not have direct routes to the Internet. 
This \nguarantees that inside users must access the Internet via the proxy services \nresiding on the bastion host.\nSince the DMZ network is a different network from the private network, a Network \n• \nAddress Translator (NAT) can be installed on the bastion host to eliminate the \nneed to renumber or resubnet the private network.\nThe DMZ also has disadvantages including the following:\nDepending on how much segregation is required, the complexity of DMZ may \n• \nincrease.\nThe cost of maintaining a fully functional DMZ can also be high again depending \n• \non the number of functionalities and services offered in the DMZ.\n12.4.1  Scalability and Increasing Security in a DMZ\nAlthough the DMZ is a restricted access area that is meant to allow outside access \nto the limited and often selected resources of an organization, DMZ security is still \na concern to system administrators. As we pointed out earlier, the penetration of \nthe DMZ may very well result in the penetration of the protected internal network \nby the intruder, exploiting the trust relationships between the vulnerable host in the \nDMZ and those in the protected internal network.\nAccording to Marcus Ranum and Matt Curtin [13], the security in the DMZ can be \nincreased and the DMZ scaled by the creation of several “security zones.” This can \nbe done by having a number of different networks within the DMZ. Each zone could \noffer one or more services. For example, one zone could offer services such as mail, \nnews, and host DNS. Another zone could handle the organization’s Web needs.\nZoning the DMZ and putting hosts with similar levels of risk on networks linked \nto these zones in the DMZ helps to minimize the effect of intrusion into the network \n" }, { "page_number": 281, "text": "because if an intruder breaks into the Web server in one zone, he or she may not be \nable to break into other zones, thus reducing the risks.\n12.5  Improving Security Through the Firewall\nThe firewall shown in Fig. 12.9 is sometimes referred to as a three-pronged firewall \nor a tri-homed firewall because it connects to three different networks: the external \nnetwork that connects to the Internet; the DMZ screened subnet; and the internal \nprotected network. Because it is three-pronged, it, therefore, requires three different \nnetwork cards.\nBecause three-pronged firewalls use a single device and they use only a single \nset of rules, they are usually complex. Such a set of rules can be complex and \nlengthy. In addition, the firewall can be a weak point into the protected network \nsince it provides only a single entry point into two networks: the DMZ network and \nthe internal network. If it is breached, it opens up the internal network. Because of \nthis, it is usually better for added security to use two firewalls as in Fig. 12.10.\nOther configurations of firewalls depend on the structure of the network. For \nexample, in a set up with multiple networks, several firewalls may be used, one per \nnetwork. Security in a protected network can further be improved by using encryp­\ntion in the firewalls. Upon receipt of a request, the firewall encrypts the request \nand sends it on to the receiving firewall or server which decrypts it and offers the \nservice.\nLaptop\nLaptop\nPrivate Network\nLaptop computer\nRouter\nFTP Server\nWeb Server\nDMZ\nRouter\nFirewall\nIRouter\nFirewall\nFig. 
12.10  Two firewalls in a network with a DMZ\n12.5  Improving Security Through the Firewall\b\n267\n" }, { "page_number": 282, "text": "268\b\n12  Firewalls\nFirewalls can also be equipped with intrusion detection systems (IDS). Many \nnewer firewalls now have IDS software built into them. Some firewalls can be \nfenced by IDS sensors as shown in Fig. 12.11.\n12.6  Firewall Forensics\nSince port numbers are one of the keys used by most firewalls, let us start firewall \nforensics by looking at port numbers. A port number is an integer number between \n1 and 65535 which identifies to the server what function a client computer wants \nto be performed. By port numbering, network hosts are able to distinguish one TCP \nand UDP service from another at a given IP address. This way, one server machine \ncan provide many different services without conflicts among the incoming and out­\ngoing data.\nAccording to Robert Graham [14], port numbers are divided into three ranges:\nThe \n• \nwell-known ports are those from 0 through 1023. These are tightly bound to \nservices and usually traffic on these ports clearly indicates the protocol for that \nservice. For example, port 80 virtually always indicates HTTP traffic.\nThe \n• \nregistered ports are those from 1024 through 49151. These are loosely bound \nto services, which means that while there are numerous services bound to these \nports, these ports are likewise used for many other purposes that have nothing to \ndo with the official server.\nThe d\n• \nynamic and/or private ports are those from 49152 through 65535. In theory, \nno service should be assigned to these ports.\nLaptop\nLaptop\nNAT Firewall\nPrivate Network\nLaptop computer \nRouter\nInternet\nIDS Sensor\nIDS Sensor\nFig. 12.11  Firewalls with IDS sensors\n" }, { "page_number": 283, "text": "In reality, machines start assigning dynamic ports starting at 1024. There is also \nstrangeness, such as Sun starting their RPC ports at 32768 [14].\nUsing port numbers and in a clear and concise document, Robert Graham \nexplains what many of us see in firewall logs. His document is intended for both \nsecurity experts and home users of personal firewalls. The full text of the article can \nbe found here: http://www.robertgraham.com/pubs/firewall-seen.html. We encour­\nage the reader to carefully read this document for a full understanding of and putting \nsense in what a firewalls outputs.\n12.7  Firewall Services and Limitations\nAs technology improves, firewall services have widened far beyond old strict filter­\ning to embrace services that were originally done by internal servers. 
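Graham's port ranges translate directly into a useful first pass over firewall log entries. The helper below is purely illustrative — the log format (timestamp, action, source address, destination port) is an assumption, and real log formats vary from product to product.

    def classify_port(port):
        """Classify a destination port using the three ranges described above."""
        if 0 <= port <= 1023:
            return "well-known"
        if 1024 <= port <= 49151:
            return "registered"
        if 49152 <= port <= 65535:
            return "dynamic/private"
        return "invalid"

    def triage(log_lines):
        """Count denied connections per port range from space-separated log lines."""
        counts = {}
        for line in log_lines:
            # assumed format: "<timestamp> DENY <source-ip> <destination-port>"
            fields = line.split()
            if len(fields) < 4 or fields[1] != "DENY":
                continue
            bucket = classify_port(int(fields[3]))
            counts[bucket] = counts.get(bucket, 0) + 1
        return counts

    sample = [
        "2009-01-07T10:02:11 DENY 203.0.113.9 80",     # probe of a well-known service
        "2009-01-07T10:02:12 DENY 203.0.113.9 1433",   # registered range, often a database port
        "2009-01-07T10:02:13 DENY 203.0.113.9 50321",  # dynamic range, rarely a real service
    ]
    print(triage(sample))   # {'well-known': 1, 'registered': 1, 'dynamic/private': 1}

With that forensic aside in mind, let us return to the services a modern firewall offers.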
For example, \nfirewalls can scan for viruses and offer services such as FTP, DNS, and SMTP.\n12.7.1  Firewall Services\nThe broad range of services offered by the firewall are based on the following \naccess controls [4]:\nService control – where the firewall may filter traffic on the basis of IP addresses, \n• \nTCP, UDP, port numbers, and DNS and FTP protocols in addition to providing \nproxy software that receives and interprets each service request before passing it \non.\nDirection control – where permission for traffic flow is determined from the \n• \ndirection of the requests.\nUser control – where access is granted based on which user is attempting to \n• \naccess the internal protected network, which may also be used on incoming \ntraffic.\nBehavior control – in which access is granted based on how particular services \n• \nare used, for example, filtering e-mail to eliminate spam.\n12.7.2  Limitations of Firewalls\nGiven all the firewall popularity, firewalls are still taken as just the first line of defense \nof the protected network because they do not assure total security of the network. Fire­\nwalls suffer from limitations, and these limitations and other weaknesses have led to \nthe development of other technologies. In fact, there is talk now that the development \nof IPSec technology is soon going to make firewall technology obsolete. We may \nhave to wait and see. Some of the current firewall limitations are [14] as follows:\n12.7  Firewall Services and Limitations\b\n269\n" }, { "page_number": 284, "text": "270\b\n12  Firewalls\nFirewalls cannot protect against a threat that bypasses it, such as a dial-in using \n• \na mobile host.\nFirewalls do not provide data integrity because it is not possible, especially \n• \nin large networks, to have the firewall examine each and every incoming and \noutgoing data packet for anything.\nFirewalls cannot ensure data confidentiality because, even though newer firewalls \n• \ninclude encryption tools, it is not easy to use these tools. It can only work if the \nreceiver of the packet also has the same firewall.\nFirewalls do not protect against internal threats.\n• \nFirewalls cannot protect against transfer of virus-infected programs or files.\n• \nExercises\n  1.\t Discuss the differences between a firewall and a packet filter.\n  2.\t Give reasons why firewalls do not give total security.\n  3.\t Discuss the advantages of using an application-level firewall over a network-\nlevel firewall.\n  4.\t Show how data protocols such as TCP, UDP, and ICMP can be implemented in \na firewall and give the type of firewall best suited for each of these protocols.\n  5.\t What are circuit-level firewalls? How are they different from network-level \nfirewalls?\n  6.\t Discuss the limitations of firewalls. How do modern firewalls differ from the \nold ones in dealing with these limitations?\n  7.\t How would you design a firewall that would let Internet-based users upload \nfiles to a protected internal network server?\n  8.\t Discuss the risks to the protected internal network as a result of a DMZ.\n  9.\t What is a bastion router? How different is it from a firewall?\n10.\t Search and discuss as many services and protocols as possible offered by a \nmodern firewall.\nAdvanced Exercises\n1.\t Many companies now offer either trial or free personal firewalls. Using the fol­\nlowing companies, search for a download, and install a personal firewall. 
The \ncompanies are: Deerfield.com, McAfee, Network Ice, Symantec, Tiny Software, \nand Zone Labs.\n2.\t Design a security plan for a small (medium) company and use that plan to con­\nfigure a firewall. Install the firewall – use some firewalls from #1 above.\n" }, { "page_number": 285, "text": "3.\t Zoning the DMZ has resulted in streamlining and improving security in both the \nDMZ and the protected internal network. Consider how you would zone the DMZ \nthat has servers for the following services and protocols: HTTP/SHTTP, FTP, \nICMP, TELNET, TCP, UDP, Whois, and finger. Install the clusters in the DMZ.\n4.\t Research the differences between IPSec and firewalls. Why is it that some peo­\nple are saying that IPSec will soon make firewalls obsolete?\n5.\t Discuss the best ways of protecting an internal network using firewalls from the \nfollowing attacks:\nSMTP Server Hijacking\n• \nBugs in operating systems\n• \nICMP redirect bombs\n• \nDenial of service\n• \nExploiting bugs in applications.\n• \nReferences\n\t 1.\t Kizza, J. M.. Computer Network Security and Cyber Ethics. Jefferson, NC: McFarland Pub­\nlishers, 2002.\n\t 2.\t Karose J. and Ross K. Computer Networking: A Top-Down Approach Featuring the Internet. \nBoston: Addison-Wesley, 2000.\n\t 3.\t Holden, G.. A Guide to Firewalls and Network Security: Intrusion Detection and VPNs. Clif­\nton Paark, NY: Thomson Learning, 2004.\n\t 4.\t Provos, N. “Firewall.” http://www.win.tue.nl/∼henkvt/provos-firewall.pdf.\n\t 5.\t Hall, E. Internet Firewall Essentials. http://secinf.net/firewalls_and_VPN/Internet_Firewall_\nEssentials.html\n\t 6.\t Panko, R. R. Corporate Computer and Network Security. Upper Saddle River, NJ: Prentice \nHall, 2004.\n\t 7.\t Stein, L. D. Web Security: A Step-by-Step Reference Guide. Reading, MA: Addison-Wesley, \n1998.\n\t 8.\t Grennan, M. Firewall and Proxy Server HOWTO. http://www.tldp.org/HOWTO/Firewall-\nHOWTO.html\n\t 9.\t Bar-Gad, I. Web Firewalls. Network World, 06/03/02: http://www.nwfusion.com/news/\ntech/2002/0603tech.html\n\t10.\t Fennelly, C. Building your firewall, Part 1. http://secinf.net/firewalls_and_VPN/Building_\nyour_firewall_Part_1.html.\n\t11.\t Adams, J. M. FTP Server Security Strategy for the DMZ, June 5, 2001. http://www.mscs.\nmu.edu/∼hnguye/Security2002/Homeworks/assign4/DMZ.pdf.\n\t12.\t Semeria, C. Internet Firewalls and Security A Technology Overview. http://www.patentsform.us/\npatents/5987611.html\n\t13.\t Ranum, M. J. and Matt Curtin. Internet Firewalls: Frequently Asked Questions. http://www.\ninterhack.net/pubs/fwfaq/#SECTION00040000000000000000.\n\t14.\t Graham, R. Firewall Forensics (What am I seeing?). http://www.robertgraham.com/pubs/\nfirewall-seen.html\nReferences\b\n271\n" }, { "page_number": 286, "text": "J.M. Kizza, A Guide to Computer Network Security, Computer Communications and \n­Networks, DOI 10.1007/978-1-84800-917-2_13, © Springer-Verlag London Limited 2009\n\b\n273\nChapter 13\nSystem Intrusion Detection and Prevention\n13.1  Definition\nThe psychology and politics of ownership have historically dictated that individuals \nand groups tend to protect valuable resources. This grew out of the fact that once a \nresource has been judged to have value, no matter how much protection given to it, \nthere is always a potential that the security provided for the resource will at some \npoint fail. This notion has driven the concept of system security and defined the \ndisciplines of computer and computer network security. 
Computer network security \nis made up of three principles: prevention, detection, and response. Although these \nthree are fundamental ingredients of security, most resources have been devoted \nto detection and prevention because if we are able to detect all security threats and \nprevent them, then there is no need for response.\nIntrusion detection is a technique of detecting unauthorized access to a com­\nputer system or a computer network. An intrusion into a system is an attempt by an \noutsider to the system to illegally gain access to the system. Intrusion prevention, \non the other hand, is the art of preventing an unauthorized access of a system’s \nresources. The two processes are related in a sense that while intrusion detection \npassively detects system intrusions, intrusion prevention actively filters network \ntraffic to prevent intrusion attempts. For the rest of the chapter, let us focus on these \ntwo processes.\n13.2  Intrusion Detection\nThe notion of intrusion detection in computer networks is a new phenomenon born, \naccording to many, from a 1980 James Anderson’s paper, “Computer Security \nThreat Monitoring and Surveillance.” In that paper, Anderson noted that computer \naudit trails contained vital information that could be valuable in tracking misuse \nand understanding user behavior. The paper, therefore, introduced the concept of \n“detecting” misuse and specific user events and has prompted the development of \nintrusion detection systems.\n" }, { "page_number": 287, "text": "274\b\n13  System Intrusion Detection and Prevention\nAn intrusion is a deliberate unauthorized attempt, successful or not, to break into, \naccess, manipulate, or misuse some valuable property and where the misuse may \nresult into or render the property unreliable or unusable. The person who intrudes \nis an intruder.\nAurobindo Sundaram [1] divides intrusions into six types as follows:\nAttempted break-ins, which are detected by atypical behavior profiles or \n• \nviolations of security constraints. An intrusion detection system for this type is \ncalled anomaly-based IDS.\nMasquerade attacks, which are detected by atypical behavior profiles or violations \n• \nof security constraints. These intrusions are also detected using anomaly-based \nIDS.\nPenetrations of the security control system, which are detected by monitoring for \n• \nspecific patterns of activity.\nLeakage, which is detected by atypical use of system resources.\n• \nDenial of service, which is detected by atypical use of system resources.\n• \nMalicious use, which is detected by atypical behavior profiles, violations of \n• \nsecurity constraints, or use of special privileges.\n13.2.1  The System Intrusion Process\nThe intrusion process into a system includes a number of stages that start with the iden­\ntification of the target, followed by reconnaissance that produces as much information \nabout the target as possible. After enough information is collected about the target and \nweak points are mapped, the next job is to gain access into the system and finally the \nactual use of the resources of the system. Let us look at each one of these stages.\n13.2.1.1  Reconnaissance\nReconnaissance is the process of gathering information about the target system and \nthe details of its workings and weak points. Hackers rarely attack an organization \nnetwork before they have gathered enough information about the targeted network. 
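A toy version of this information-gathering step is a TCP connect scan, sketched below; the target address and the port list are placeholders, and such probes should only ever be aimed at hosts one is authorized to test.

    import socket

    def connect_scan(host, ports, timeout=0.5):
        """Report which of the given TCP ports accept a connection on host."""
        open_ports = []
        for port in ports:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(timeout)
            try:
                s.connect((host, port))        # completes the three-way handshake
                open_ports.append(port)
            except (socket.timeout, ConnectionRefusedError, OSError):
                pass                           # closed or filtered
            finally:
                s.close()
        return open_ports

    # hypothetical target inside a lab network
    print(connect_scan("192.0.2.10", [21, 22, 25, 53, 80, 110, 143, 443]))

Attackers run the same idea on a far larger scale, across whole address ranges and with stealthier probe types.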
\nThey gather information about the type of information used in the network, where \nit is stored, how it is stored and the weak entry points to that information. They do \nthe reconnaissance through system scanning for vulnerabilities.\nAlthough vulnerability assessment is not intrusion, it is part of the intrusion pro­\ncess in that it proceeds the intrusion itself. Vulnerability assessment is an auto­\nmated process in which a scanning program sends network traffic to all computers \nor selected computers in the network and expects receiving return traffic that will \nindicate whether those computers have known vulnerabilities. These vulnerabilities \nmay include weaknesses in operating systems, application software, and protocols.\nThrough the years and as technology improved, vulnerability assessment itself \nhas gone through several generations, including using code or script downloaded \n" }, { "page_number": 288, "text": "13.2  Intrusion Detection\b\n275\nfrom the Internet or freely distributed that was compiled and executed for specific \nhardware or platforms.\nOnce they have identified the target system’s vulnerability, then they just go in \nfor a kill.\n13.2.1.2  Physical Intrusion\nBesides scanning the network for information that will eventually enable intrud­\ners to illegally enter an organization network, intruders also can enter an organiza­\ntion network masquerading as legitimate users. They do this through a number of \nways ranging from acquiring special administrative privileges to low-privilege user \naccounts on the system. If the system doesn’t have the latest security patches, it \nmay not be difficult for the hacker to acquire these privileges. The intruder can also \nacquire remote access privileges.\n13.2.1.3  Denial of Service\nDenial-of-service (DoS) attacks are where the intruder attempts to crash a service \n(or the machine), overload network links, overload the CPU, or fill up the disk. The \nintruder is not trying to gain information, but to simply act as a vandal to prevent \nyou from making use of your machine.\nCommon Denial-of-Service Attacks\nPing-of-Death sends an invalid fragment, which starts before the end of packet, \n• \nbut extends past the end of the packet.\nSYN Flood sends a TCP SYN packet, which starts connections, very fast, leaving \n• \nthe victim waiting to complete a huge number of connections, causing it to run \nout of resources and dropping legitimate connections.\nLand/Latierra sends a forged SYN packet with identical source/destination \n• \naddress/port so that the system goes into an infinite loop trying to complete the \nTCP connection.\nWinNuke sends an OOB/URG data on a TCP connection to port 139 (NetBIOS \n• \nSession/SMB), which causes the Windows system to hang.\n13.2.2  The Dangers of System Intrusions\nThe dangers of system intrusion manifests are many including the following:\nLoss of personal data that may be stored on a computer. Personal data loss means \n• \na lot and means different things to different people depending on the intrinsic \n" }, { "page_number": 289, "text": "276\b\n13  System Intrusion Detection and Prevention\nvalue attached to the actual data lost or accessed. Most alarming in personal data \nloss is that the way digital information is lost is not the same as the loss of physical \ndata. In physical data loss, you know that if it gets stolen, then somebody has it \nso you may take precautions. For example, you may report to the police and call \nthe credit card issuers. 
However, this is not the same with digital loss because \nin digital loss you may even never know that your data was lost. The intruders \nmay break into the system and copy your data and you never know. The damage, \ntherefore, from digital personal data loss may be far greater.\nCompromised privacy. These days more and more people are keeping a lot more \n• \nof their personal data online either through use of credit or debit cards; in addition, \nmost of the information about an individual is stored online by companies \nand government organizations. When a system storing this kind of data is \ncompromised, a lot of individual data gets compromised. This is because a lot of \npersonal data is kept on individuals by organizations. For example, a mortgage \ncompany can keep information on your financial credit rating, social security \nnumber, bank account numbers, and a lot more. Once such an organization’s \nnetwork is compromised, there is much information on individuals that is \ncompromised and the privacy of those individuals is compromised as well.\nLegal liability. If your organization network has personal information of the \n• \ncustomer and it gets broken into, thus compromising personal information that you \nstored, you are potentially liable for damages caused by a hacker either breaking into \nyour network or using your computers to break into other systems. For example, if \na hacker does two or three level-hacking using your network or a computer on your \nnetwork, you can be held liable. A two-level hacking involves a hacker breaking \ninto your network and using it to launch an attack on another network.\n13.3  Intrusion Detection Systems (IDSs)\nAn intrusion detection system (IDS) is a system used to detect unauthorized intru­\nsions into computer systems and networks. Intrusion detection as a technology \nis not new; it has been used for generations to defend valuable resources. Kings, \nemperors, and nobles who had wealth used it in rather an interesting way. They built \ncastles and palaces on tops of mountains and sharp cliffs with observation towers to \nprovide them with a clear overview of the lands below where they could detect any \nattempted intrusion ahead of time and defend themselves. Empires and kingdoms \ngrew and collapsed based on how well intrusions from the enemies surrounding \nthem, could be detected. In fact, according to the Greek legend of the Trojan Horse, \nthe people of Crete were defeated by the Greeks because the Greeks managed to \npenetrate the heavily guarded gates of the city walls.\nThrough the years, intrusion detection has been used by individuals and com­\npanies in a variety of ways including erecting ways and fences around valuable \nresources with sentry boxes to watch the activities surrounding the premises of the \nresource. Individuals have used dogs, flood lights, electronic fences, and closed \ncircuit television and other watchful gadgets to be able to detect intrusions.\n" }, { "page_number": 290, "text": "13.3  Intrusion Detection Systems (IDSs)\b\n277\nAs technology has developed, a whole new industry based on intrusion detec­\ntion has sprung up. Security firms are cropping up everywhere to offer individual \nand property security – to be a watchful eye so that the property owner can sleep or \ntake a vacation in peace. 
These new systems have been made to configure changes, \ncompare user actions against known attack scenarios, and be able to predict changes \nin activities that indicate and can lead to suspicious activities.\nIn Section 13.2, we outlined six subdivisions of system intrusions. These six can \nnow be put into three models of intrusion detection mechanisms: anomaly-based \ndetection, signature-based detection, and hybrid detection. In anomaly-based detec­\ntion, also known as behavior-based detection, the focus is to detect the behavior that is \nnot normal, or behavior that is not consistent with normal behavior. Theoretically, this \ntype of detection requires a list of what is normal behavior. In most environments this \nis not possible, however. In real-life models, the list is determined from either histori­\ncal or empirical data. However, neither historical nor empirical data represent all pos­\nsible acceptable behavior. So a list has got to be continuously updated as new behavior \npatterns not on the list appear and are classified as acceptable or normal behavior. The \ndanger with this model is to have unacceptable behavior included within the training \ndata and later be accepted as normal behavior. Behavior-based intrusion detections, \ntherefore, are also considered as rule-based detection because they use rules, usually \ndeveloped by experts, to be able to determine unacceptable behavior.\nIn signature-based detection, also known as misuse-based detection, the focus is \non the signature of known activities. This model also requires a list of all known unac­\nceptable actions or misuse signatures. Since there are an infinite number of things that \ncan be classified as misuse, it is not possible to put all these on the list and still keep \nit manageable. So only a limited number of things must be on the list. To do this and \ntherefore be able to manage the list, we categorize the list into three broad activities:\nunauthorized access,\n• \nunauthorized modification, and\n• \ndenial of service.\n• \nUsing these classifications, it is then possible to have a controlled list of misuse \nwhose signatures can be determined. The problem with this model, though, is that it \ncan detect only previously known attacks.\nBecause of the difficulties with both the anomaly-based and signature-based \ndetections, a hybrid model is being developed. Much research is now focusing on \nthis hybrid model [1].\n13.3.1  Anomaly Detection\nAnomaly based systems are “learning” systems in a sense that they work by con­\ntinuously creating “norms” of activities. These norms are then later used to detect \nanomalies that might indicate an intrusion. Anomaly detection compares observed \nactivity against expected normal usage profiles “leaned.” The profiles may be devel­\noped for users, groups of users, applications, or system resource usage.\n" }, { "page_number": 291, "text": "278\b\n13  System Intrusion Detection and Prevention\nIn anomaly detection, it is assumed that all intrusive activities are necessarily \nanomalous. This happens in real life too, where most “bad” activities are anomalous \nand we can, therefore, be able to character profile the “bad elements” in society. \nThe anomaly detection concept, therefore, will create, for every guarded system, a \ncorresponding database of “normal” profiles. 
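As a toy illustration of such a profile database, the sketch below stores a per-user baseline of normal login hours and request rates and flags activity that falls outside it; the users, fields, and thresholds are invented purely for illustration.

    # Invented per-user baselines: normal working hours and a typical request rate.
    PROFILES = {
        "alice": {"hours": range(8, 18), "max_requests_per_min": 40},
        "bob":   {"hours": range(6, 15), "max_requests_per_min": 25},
    }

    def check_activity(user, hour, requests_per_min):
        """Return a list of deviations from the user's stored profile."""
        profile = PROFILES.get(user)
        if profile is None:
            return ["no profile on record for this user"]
        deviations = []
        if hour not in profile["hours"]:
            deviations.append("activity outside normal hours")
        if requests_per_min > profile["max_requests_per_min"]:
            deviations.append("request rate above historical norm")
        return deviations

    print(check_activity("alice", 3, 200))   # ['activity outside normal hours',
                                             #  'request rate above historical norm']
    print(check_activity("bob", 9, 10))      # [] -- consistent with the profile

A real system would also keep updating these baselines, which is exactly where the training-data danger mentioned above arises.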
Any activity on the system is checked \nagainst these profiles and is deemed acceptable or not based on the presence of such \nactivity in the profile database.\nTypical areas of interest are threshold monitoring, user work profiling, group \nwork profiling, resource profiling, executable profiling, static work profiling, adap­\ntive work profiling, and adaptive rule base profiling.\nAnonymous behaviors are detected when the identification engine takes observed \nactivities and compares them to the rule-base profiles for significant deviations. \nThe profiles are commonly for individual users, groups of users, system resource \nusages, and a collection of others as discussed below [2]:\nIndividual profile is a collection of common activities a user is expected to do \n• \nand with little deviation from the expected norm. This may cover specific user \nevents such as the time being longer than usual usage, recent changes in user \nwork patterns, and significant or irregular user requests.\nGroup profile. This is a profile that covers a group of users with a common work \n• \npattern, resource requests and usage, and historic activities. It is expected that \neach individual user in the group follows the group activity patterns.\nResource profile. This includes the monitoring of the use patterns of the \n• \nsystem resources such as applications, accounts, storage media, protocols, \ncommunications ports, and a list of many others the system manager may wish \nto include. It is expected, depending on the rule-based profile, that common uses \nwill not deviate significantly from these rules.\nOther profiles. These include executable profiles that monitor how executable \n• \nprograms use the system resources. This, for example, may be used to monitor \nstrange deviations of an executable program if it has an embedded Trojan worm \nor a trapdoor virus. In addition to executable profiles, there are also the following \nprofiles: work profile which includes monitoring the ports, static profile whose \njob is to monitor other profiles periodically updating them so that those profiles, \ncannot slowly expand to sneak in intruder behavior, and a variation of the work \nprofile called the adaptive profile which monitors work profiles, automatically \nupdating them to reflect recent upsurges in usage. Finally, there is also the \nadoptive rule base profile which monitors historic usage patterns of all other \nprofiles and uses them to make updates to the rule base [3].\nBesides being embarrassing and time consuming, the concept also has other \nproblems. 
As pointed out by Sundaram [1], if we consider that the set of intrusive \nactivities only intersects the set of anomalous activities instead of being exactly the \nsame, then two problems arise:\nAnomalous activities that are not intrusive are classified as intrusive.\n• \nIntrusive activities that are not anomalous result in false negatives, that is, events \n• \nare not flagged intrusive, though they actually are.\n" }, { "page_number": 292, "text": "13.4  Types of Intrusion Detection Systems\b\n279\nAnomaly detection systems are also computationally expensive because of the \noverhead of keeping track of, and possibly updating, several system profile metrics.\n13.3.2  Misuse Detection\nUnlike anomaly detection where we labeled every intrusive activity anomalous, \nthe misuse detection concept assumes that each intrusive activity is representable \nby a unique pattern or a signature so that slight variations of the same activity pro­\nduce a new signature and therefore can also be detected. Misuse detection systems, \nare therefore, commonly known as signature systems. They work by looking for a \nspecific signature on a system. Identification engines perform well by monitoring \nthese patterns of known misuse of system resources. These patterns, once observed, \nare compared to those in the rule base that describe “bad” or “undesirable” usage of \nresources. To achieve this, a knowledge database and a rule engine must be devel­\noped to work together. Misuse pattern analysis is best done by expert systems, mod­\nel-based reasoning, or neural networks.\nTwo major problems arise out of this concept:\nThe system cannot detect unknown attacks with unmapped and unarchived \n• \nsignatures.\nThe system cannot predict new attacks and will, therefore, be responding after an \n• \nattack has occurred. This means that the system will never detect a new attack.\nIn a computer network environment, intrusion detection is based on the fact that \nsoftware used in all cyber attacks often leave a characteristic signature. This signa­\nture is used by the detection system and the information gathered is used to deter­\nmine the nature of the attack. At each different level of network investigative work, \nthere is a different technique of network traffic information gathering, analysis, and \nreporting. Intrusion detection operates on already gathered and processed network \ntraffic data. It is usually taken that the anomalies noticed from the analysis of this \ndata would lead to distinguishing between an intruder and a legitimate user of the \nnetwork. The anomalies resulting from the traffic analyses are actually large and \nnoticeable deviations from historical patterns of usage. Identification systems are \nsupposed to identify three categories of users: legitimate users, legitimate users per­\nforming unauthorized activities, and of course intruders who have illegally acquired \nthe required identification and authentication.\n13.4  Types of Intrusion Detection Systems\nIntrusion detection systems are also classified based on their monitoring scope. \nThere are those that monitor only a small area and those that can monitor a wide \narea. 
Those that monitor a wide area are known as network-based intrusion detec­\ntion and those that have a limited scope are known as host-based detections.\n" }, { "page_number": 293, "text": "280\b\n13  System Intrusion Detection and Prevention\n13.4.1  Network-Based Intrusion Detection Systems (NIDSs)\nNetwork-based intrusion detection systems have the whole network as the monitoring \nscope. They monitor the traffic on the network to detect intrusions. They are responsible \nfor detecting anomalous, inappropriate, or other data that may be considered unauthor­\nized and harmful occurring on a network. There are striking differences between NIDS \nand firewalls. Recall from Chapter 11 that firewalls are configured to allow or deny \naccess to a particular service or host based on a set of rules. Only when the traffic matches \nan acceptable pattern is it permitted to proceed regardless of what the packet contains. \nAn NIDS also captures and inspects every packet that is destined to the network regard­\nless of whether it is permitted or not. If the packet signature based on the contents of the \npacket is not among the acceptable signatures, then an alert is generated.\nThere are several ways an NIDS may be run. It can either be run as an indepen­\ndent standalone machine where it promiscuously watches over all network traffic \nor it can just monitor itself as the target machine to watch over its own traffic. For \nexample, in this mode, it can watch itself to see if somebody is attempting a SYN-\nflood or a TCP port scan.\nWhile NIDSs can be very effective in capturing all incoming network traffic, \nit is possible that an attacker can evade this detection by exploiting ambiguities in \nthe traffic stream as seen by the NIDS. Mark Handley, Vern Paxson, and Christian \nKreibich list the sources of these exploitable ambiguities as follows [4]:\nMany NIDSs do not have complete analysis capabilities to analyze a full range \n• \nof behavior that can be exposed by the user and allowed by a particular protocol. \nThe attacker can also evade the NIDS: even if the NIDS does perform analysis \nfor the protocol.\nSince NIDSs are far removed from individual hosts, they do not have full knowledge \n• \nof each host’s protocol implementation. This knowledge is essential for the NIDS \nto be able to determine how the host may treat a given sequence of packets if \ndifferent implementations interpret the same stream of packets in different ways.\nAgain, since NIDSs do not have a full picture of the network topology between \n• \nthe NIDS and the hosts, the NIDS may be unable to determine whether a given \npacket will even be seen by the hosts.\n13.4.1.1  Architecture of a Network-Based Intrusion Detection\nAn intrusion detection system consists of several parts that must work together to \nproduce an alert. The functioning of these parts may be either sequential or some­\ntimes parallel [5, 6]. The parts are shown in Fig. 13.1.\nNetwork Tap/Load Balancer\nThe network tap, or the load balancer as it is also known, gathers data from the \nnetwork and distributes it to all network sensors. It can be a software agent that runs \n" }, { "page_number": 294, "text": "13.4  Types of Intrusion Detection Systems\b\n281\nfrom the sensor or hardware, such as a router. The load balancer or tap is an impor­\ntant component of the intrusion detection system because all traffic into the network \ngoes through it and it also prevents packet loss in high-bandwidth networks. 
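To make the tap-and-sensor idea concrete, the following sketch captures raw frames on a single Linux host and applies a trivial payload signature check. It is only an illustration — it requires root privileges, uses the Linux-specific AF_PACKET socket, and the signature list is invented — not a production sensor.

    import socket

    # Invented example signatures; a real rule base is far larger and protocol-aware.
    SIGNATURES = [b"/etc/passwd", b"cmd.exe", b"' OR '1'='1"]

    def run_sensor(max_frames=1000):
        """Capture raw Ethernet frames and flag any whose payload matches a signature."""
        # AF_PACKET with ETH_P_ALL (0x0003) sees every frame the interface receives.
        sensor = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))
        for _ in range(max_frames):
            frame, _ = sensor.recvfrom(65535)
            payload = frame[14:]                  # skip the 14-byte Ethernet header
            for signature in SIGNATURES:
                if signature in payload:
                    print("ALERT: pattern", signature, "in a", len(frame), "byte frame")

    if __name__ == "__main__":
        run_sensor()

On a mirrored switch port, this one loop plays the role of both the tap and a very crude signature sensor; the remaining components described below exist to manage what such a capture loop produces.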
Certain \ntypes of taps have limitations in selected environments such as switched networks. \nIn networks where there are no load balancers, sensors must be placed in such a \nway that they are responsible for traffic entering the network in their respective \nsub-network.\nNetwork Sensor/Monitoring\nThe network sensor or monitor is a computer program that runs on dedicated \nmachines or network devices on mission critical segments. In networks with a load \nbalancer, the sensors receive traffic from the balancer. In other networks without \na load balancer, the sensors receive live traffic from the network and separate it \nbetween suspicious and normal traffic. A sensor can be implemented as an agent on \na mission critical destination machine in a network. They are either anomaly-based \nor signature-based. Promiscuous mode sensors, which are sensors that detect any­\nthing that seems like a possible attempt at intrusion, run on dedicated machines.\nAnalyzer\nThe analyzer determines the threat level based on the nature and threat of the suspi­\ncious traffic. It receives data from the sensors. The traffic is then classified as either \nsafe or an attack. Several layers of monitoring may be done where the primary layer \nHOST-based IDS\nServer\nLaptop\nLaptop\nIDS Outside the firewall\nFirewall\nInternet\nIDS inside the firewall\nFig. 13.1  The architecture of a network-based intrusion detection system\n" }, { "page_number": 295, "text": "282\b\n13  System Intrusion Detection and Prevention\ndetermines the threat severity, secondary layers then determine the scope, intent, \nand frequency of the threat.\nAlert Notifier\nIt contacts the security officer responsible for handling incidents whenever a threat \nis severe enough according to the organization’s security policy. Standard capabili­\nties include on-screen alerts, audible alerts, paging, and e-mail. Most systems also \nprovide SNMP so that an administrator can be notified. Frequent alerts for seem­\ningly trivial threats must be avoided because they result in a high rate of false posi­\ntives. It must also be noted that not reporting frequently enough because the sensors \nare set in such a way that they ignore a number of threats, many of them being real, \nresult in false negatives which results in the intrusion detection system providing \nmisleading sense of security.\nBecause the performance of the intrusion detection system depends on the balanc­\ning of both false positives and false negatives, it is important to use intrusion detec­\ntion systems that are adjustable and can, therefore, offer balancing ­capabilities.\nCommand Console/Manager\nThe role of the command console or manager is to act as the central command \nauthority for controlling the entire system. It can be used to manage threats by rout­\ning incoming network data to either a firewall or to the load balancer or straight \nto routers. It can be accessed remotely so the system may be controlled from any \nlocation. It is typically a dedicated machine with a set of tools for setting policy \nand processing collected alarms. On the console, there is an assessment manager, a \ntarget manager, and an alert manager. The console has its own detection engine and \ndatabase of detected alerts, for scheduled operations and data mining.\nResponse Subsystem\nThe response subsystem provides the capabilities to take action based on threats to \nthe target systems. These responses can be automatically generated or initiated by \nthe system operator. 
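A hypothetical sketch of an automated response hook is shown below. The severity scale, the mail addresses, and the direct call to the local iptables command are all assumptions; in practice the console would act through the firewall's or router's own management interface.

    import smtplib
    import subprocess
    from email.message import EmailMessage

    SEVERITY_THRESHOLD = 7          # assumed 0-10 scale set by the analyzer

    def notify_admin(alert):
        """E-mail the security officer; assumes a mail server on localhost."""
        msg = EmailMessage()
        msg["Subject"] = "IDS alert: severity %d from %s" % (alert["severity"], alert["source_ip"])
        msg["From"] = "ids@example.org"
        msg["To"] = "security-officer@example.org"
        msg.set_content(alert["description"])
        with smtplib.SMTP("localhost") as server:
            server.send_message(msg)

    def respond(alert):
        """Notify on every alert; block the offending source on severe ones."""
        notify_admin(alert)
        if alert["severity"] >= SEVERITY_THRESHOLD:
            # Assumed Linux host acting as its own firewall: drop further
            # traffic from the offending address.
            subprocess.run(["iptables", "-A", "INPUT", "-s", alert["source_ip"], "-j", "DROP"],
                           check=True)

    if __name__ == "__main__":
        respond({"severity": 9, "source_ip": "203.0.113.9",
                 "description": "repeated signature matches against the DMZ web server"})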
Common responses include reconfiguring a router or a firewall \nand shutting down a connection.\nDatabase\nThe database is the knowledge repository for all that the intrusion detection system \nhas observed. This can include both behavioral and misuse statistics. These statistics \nare necessary to model historical behavior patterns that can be useful during ­damage \n" }, { "page_number": 296, "text": "13.4  Types of Intrusion Detection Systems\b\n283\nassessment or other investigative tasks. Useful information need not necessarily be \nindicative of misuse. The behavioral statistics help in developing the patterns for the \nindividual, and the misuse statistics aid in detecting attempts at intrusion.\n13.4.1.2  Placement of IDS Sensors\nThe position to place a network IDS sensors actually depends on several factors, \nincluding the topology of the internal network to be protected, the kind of security \npolicy the organization is following, and the types of security practices in effect. \nFor example, you want to place sensors in places where intrusions are most likely to \npass. These are the network “weak” points. However, it is normal practice to place \nIDS sensors in the following areas [6]:\nInside the DMZ. We saw in Chapter 11 that the DMZ is perhaps the most ideal \n• \nplace to put any detection system because almost all attacks enter the protected \ninternal network through the DMZ. IDS sensors are, therefore, commonly \nplaced outside of the organization’s network’s first firewall in the DMZ. The \nIDS sensors in the DMZ can be enhanced by putting them into zoned areas. \nAnother good location for IDS sensors is inside each firewall. This approach \ngives the sensors more protection, making them less vulnerable to coordinated \nattacks. In cases where the network perimeter does not use a DMZ, the ideal \nlocations then may include any entry/exit points such as on both sides of the \nfirewall, dial-up servers, and on links to any collaborative networks. These links \ntend to be low-bandwidth (T1 speeds) and are usually the entry point of an \nexternal attack.\nBetween the Firewall and the Internet. This is a frequent area of unauthorized \n• \nactivity. This position allows the NIDS to “see” all Internet traffic as it comes \ninto the network. This location, however, needs a good appliance and sensors \nthat can withstand the high volume of traffic.\nBehind the Network Front Firewall. This is a good position; however, most of \n• \nthe bad network traffic has already been stopped by the firewall. It handles all the \nbad traffic that manages to get through the firewall.\nInside the Network. Commonly placed in strategic points and used to “see” \n• \nsegments of the network. Network segments like these are usually the suspected \nweak areas of the network. The problem with this approach, however, is that \nthe sensors may not be able to cover all the targets it is supposed to. Also it may \ncause the degradation of the network performance.\nFigure 13.2 shows the various places where ID sensors can be deployed.\n13.4.1.3  Advantages of Network-Based Intrusion Detection Systems\nAlthough both NIDSs and HIDSs (13.4.2) have different focuses, areas of deploy­\nment, and deployment requirements, using NIDS has the following advantages [8]:\n" }, { "page_number": 297, "text": "284\b\n13  System Intrusion Detection and Prevention\nAbility to detect attacks that a host-based system would miss because NIDSs \n• \nmonitor network traffic at a transport layer. 
At this level, the NIDSs are able \nto look at not only the packet addresses but also packet port numbers from the \npacket headers. HIDSs which monitor traffic at a lower link layer packets may \nfail to detect some types of attack.\nDifficulty to remove evidence. Because NIDSs are on dedicated machines that \n• \nare routinely protected, it is more difficult for an attack to remove the evidence \nthan it is with HIDSs which are near or at the attacker’s desk. Also, since NIDSs \nuse live network traffic and it is this traffic that is captured by NIDSs when there \nis an attack, this also makes it difficult for an attacker to remove evidence.\nReal-time detection and response. Because the NIDSs are at the most opportune \n• \nand strategic entry points in the network, they are able to detect foreign \nintrusions into the network in real-time and report as quickly as possible to the \nadministrator for a quick and appropriate response. Real-time notification, which \nmany NIDSs have now, allows for a quick and appropriate response and can \neven let the administrators allow the intruder more time as they do more and \ntargeted surveillance.\nAbility to detect unsuccessful attacks and malicious intent. Because the HIDSs \n• \nare inside the protected internal network, they never come into contact with \nmany types of attack since such attacks are many times stopped by the outside \nfirewall. NIDSs, especially those in the DMZ, come across these attacks (those \nHOST-based IDS\nManager\nLaptop\nLaptop\nLoadBalancer/Network Tap \nIDS inside the firewall\nFirewall\nInternet\nNetworkSensor/ \nMonitoring\nNetworkSensor/ \nMonitoring\nAnalyzer/Alert\nNotifier/Command\nConsole\nDatabase\nResponse Subsystem \nFig. 13.2  The various places of placing the IDS sensors\n" }, { "page_number": 298, "text": "13.4  Types of Intrusion Detection Systems\b\n285\nthat escape the first firewall) that are later rejected by the inner firewall and those \ntargeting the DMZ services that have been let in by the outer firewall. Besides \nshowing these attacks, NIDSs can also record the frequency of these attacks.\n13.4.1.4  Disadvantages of NIDS\nAlthough NIDS are very well suited to monitor all the network coming into the \nnetwork, they have limitations [9]:\nBlind Spots. Deployed at the borders of an organization network, NIDS are \n• \nblind to the whole inside network. As sensors are placed in designated spots, \nespecially in switched networks, NIDS have blind spots – sometimes whole \nnetwork segments they cannot see.\nEncrypted Data. One of the major weaknesses of NIDS is on encrypted data. \n• \nThey have no capabilities to decrypt encrypted data. Although they can scan \nunencrypted parts of the packet such as headers, they are useless to the rest of \nthe package.\n13.4.2  Host-Based Intrusion Detection Systems (HIDS)\nRecent studies have shown that the problem of organization information misuse is \nnot confined only to the “bad” outsiders but the problem is more rampant within \norganizations. To tackle this problem, security experts have turned to inspection of \nsystems within an organization network. This local inspection of systems is called \nhost-based intrusion detection systems (HIDS).\nHost-based intrusion detection is the technique of detecting malicious activi­\nties on a single computer. 
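At its simplest, this amounts to watching the host's own records for suspicious entries. The toy sketch below tails a Unix-style authentication log and flags repeated failed logins; the log path, the message wording, and the threshold are assumptions, and a real host-based system watches far more than one file.

    import time

    LOG_PATH = "/var/log/auth.log"         # assumed syslog-style authentication log
    THRESHOLD = 5                          # failed attempts from one address before alerting

    def watch_failed_logins():
        failures = {}
        with open(LOG_PATH, "r", errors="replace") as log:
            log.seek(0, 2)                 # start at the end: only new entries matter
            while True:
                line = log.readline()
                if not line:
                    time.sleep(1)
                    continue
                if "Failed password" in line:      # typical OpenSSH wording
                    words = line.split()
                    source = "unknown"
                    if "from" in words:
                        idx = words.index("from")
                        if idx + 1 < len(words):
                            source = words[idx + 1]
                    failures[source] = failures.get(source, 0) + 1
                    if failures[source] == THRESHOLD:
                        print("ALERT: repeated failed logins from", source)

    if __name__ == "__main__":
        watch_failed_logins()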
A host-based intrusion detection system is, therefore, \ndeployed on a single target computer and it uses software that monitors operating \nsystem specific logs, including system, event, and security logs on Windows sys­\ntems and syslog in Unix environments to monitor sudden changes in these logs. \nWhen a change is detected in any of these files, the HIDS compares the new log \nentry with its configured attack signatures to see if there is a match. If a match is \ndetected, then this signals the presence of an illegitimate activity.\nAlthough HIDSs are deployable on a single computer, they can also be put on a \nremote host or they can be deployed on a segment of a network to monitor a section \nof the segment. The data gathered, which sometimes can be overwhelming, is then \ncompared with the rules in the organization’s security policy. The biggest problem \nwith HIDSs is that given the amount of data logs generated, the analysis of such \nraw data can put significant overhead not only on the processing power needed to \nanalyze this data but also on the security staff needed to review the data.\nHost sensors can also use user-level processes to check key system files and \nexecutables to periodically calculate their checksum and report changes in the \nchecksum.\n" }, { "page_number": 299, "text": "286\b\n13  System Intrusion Detection and Prevention\n13.4.2.1  Advantages of Host-Based Intrusion Detection Systems\nHIDSs are new kids on the intrusion detection block. They came into widespread \nin use in the early and mid-1980s when there was a realization after studies showed \nthat a large number of illegal and illegitimate activities in organization networks \nactually originated from within the employees. Over the succeeding years as tech­\nnology advanced, the HIDS technology has also advanced in tandem. More and \nmore organizations are discovering the benefits of HIDSs on their overall security. \nBesides being faster than their cousins the NIDSs, because they are dealing with \nless traffic, they offer additional advantages including the following [8]:\nAbility to verify success or failure of an attack quickly – because they log \n• \ncontinuing events that have actually occurred, they have information that is more \naccurate and less prone to false positives than their cousins, the NIDSs. This \ninformation can accurately infer whether an attack was successful or not quickly \nand a response can be started early. In this role, they complement the NIDSs, not \nas an early warning but as a verification system.\nLow-level monitoring. Because they monitor at a local host, they are able to \n• \n“see” low-level local activities such as file accesses, changes to file permissions, \nattempts to install new executables or attempts to access privileged services, \nchanges to key system files and executables, and attempts to overwrite vital \nsystem files or to install Trojan horses or backdoors. These low-level activities \ncan be detected very quickly, and the reporting is quick and timely to give the \nadministrator time for an appropriate response. Some of these low-level attacks \nare so small and far less intensive such that no NIDS can detect them.\nNear real-time detection and response. HIDSs have the ability to detect minute \n• \nactivities at the target hosts and report them to the administrator very quickly at a \nrate near real-time. 
This is possible because the operating system can recognize the event before any IDS can, so an intruder can be detected and stopped before substantial damage is done.
• Ability to deal with encrypted and switched environments – Large networks are routinely switch-chopped into many smaller network segments, and each of these smaller segments is then assigned its own NIDS. In a heavily switched network, it can be difficult to determine where to deploy a network-based IDS to achieve sufficient network coverage. The problem can be eased by using traffic mirroring and administrative ports on switches, but this is not as effective. HIDSs provide the needed visibility into switched environments by residing on as many critical hosts as required. In addition, because the operating system sees incoming traffic only after it has been decrypted, HIDSs that monitor the operating system can handle encrypted traffic better than NIDSs, which sometimes cannot deal with it at all.
• Cost effectiveness. Because no additional hardware is needed to install a HIDS, an organization may realize significant savings. This compares favorably with the cost of installing NIDSs, which require dedicated and often expensive servers. In large, heavily switched networks that need a NIDS per segment, this cost adds up quickly.
" }, { "page_number": 300, "text": "13.5  The Changing Nature of IDS Tools
287
13.4.2.2  Disadvantages of HIDS
Like their cousins the NIDSs, HIDSs have limitations in what they can do. These limitations include the following [9]:
• Myopic viewpoint. Since they are deployed at a host, they have a very limited view of the network.
• Since they are close to users, they are more susceptible to illegal tampering.
13.4.3  The Hybrid Intrusion Detection System
We have noted in both Sections 13.4.1 and 13.4.2 that there is a need for both NIDS and HIDS, each patrolling its own area of the network for unwanted and illegal traffic. We have also noted the advantage of using one to complement the other rather than choosing one over the other. If anything, after reading Sections 13.4.1.3 and 13.4.2.1, one comes away with an appreciation of how complementary these two intrusion detection systems are. Each brings to the security of the network its own strengths and weaknesses, which nicely complement and augment those of the other.
However, we have also noted in Section 13.4.1.4 that NIDSs have historically been unable to work successfully in switched and encrypted networks, and, as noted in Section 13.4.2.2, neither NIDSs nor HIDSs have been successful in high-speed networks – networks whose speeds exceed 100 Mbps. This raises the question of a hybrid system that combines what each system offers and covers what each system misses, a system with both components. Having both components provides greater flexibility in deployment options.
Hybrid systems are new and need a great deal of support to gain on their two cousins. However, their success will depend to a great extent on how well the interface receives and distributes incidents and integrates the reporting structure between the different types of sensors in the HIDS and NIDS spheres. 
Also the interface \nshould be able to smartly and intelligently gather and report data from the network \nor systems being monitored.\nThe interface is so important and critical because it receives data, collects analy­\nsis from the respective component, coordinates and correlates the interpretation of \nthis data, and reports it. It represents a complex and unified environment for track­\ning, reporting, and reviewing events.\n13.5  The Changing Nature of IDS Tools\nAlthough ID systems are assumed, though wrongly, by management and many in \nthe network community that they protect network systems from outside intruders, \nrecent studies have shown that the majority of system intrusions actually come \nfrom insiders. So newer IDS tools are focusing on this issue. Also, since the human \n" }, { "page_number": 301, "text": "288\b\n13  System Intrusion Detection and Prevention\nmind is the most complicated and unpredictable machine ever, as new IDS tools are \nbeing built to counter systems intrusion, new attack patterns are being developed to \ntake this human behavior unpredictability into account. To keep abreast of all these \nchanges, ID systems must be changing constantly.\nAs all these changes are taking place, the primary focus of ID systems has been on a \nnetwork as a unit where they collect network packet data by watching network packet \ntraffic and then analyzing it based on network protocol patterns “norms,” “normal” \nnetwork traffic signatures, and network traffic anomalies built in the rule base. But \nsince networks are getting larger, traffic heavier, and local networks more splintered, it \nis becoming more and more difficult for the ID system to “see” all traffic on a switched \nnetwork such as an Ethernet. This has led to a new approach to looking closer at the host. \nSo in general, ID systems fall into two categories: host-based and network-based.\n13.6  Other Types of Intrusion Detection Systems\nAlthough NIDS and HIDS and their hybrids are the most widely used tools in net­\nwork intrusion detection, there are others that are less used but more targeting and \ntherefore more specialized. Because many of these tools are so specialized, many \nare still not considered as being intrusion detection systems, but rather intrusion \ndetection add-ons or tools.\n13.6.1  System Integrity Verifiers (SIVs)\nSystem integrity verifiers (SIVs) monitor critical files in a system, such as system \nfiles, to find whether an intruder has changed them. They can also detect other \nsystem components’ data; for example, they detect when a normal user somehow \nacquires root/administrator level privileges. In addition, they also monitor system \nregistries in order to find well known signatures [10].\n13.6.2  Log File Monitors (LFM)\nLog file monitors (LFMs) first create a record of log files generated by network ser­\nvices. Then they monitor this record, just like NIDS, looking for system trends, ten­\ndencies, and patterns in the log files that would suggest that an intruder is attacking.\n13.6.3  Honeypots\nA honeypot is a system designed to look like something that an intruder can hack. \nThey are built for many purposes but the overriding one is to deceive attackers and \nlearn about their tools and methods. Honeypots are also add-on/tools that are not \n" }, { "page_number": 302, "text": "13.6  Other Types of Intrusion Detection Systems\b\n289\nstrictly sniffer-based intrusion detection systems like HIDS and NIDS. 
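To get a feel for how little is needed, here is a minimal sketch of such a trap: a socket that listens on a port no legitimate client should ever contact and records every connection attempt. The port, the fake banner, and the log file are illustrative choices only.

    import socket
    from datetime import datetime

    PORT = 2323                    # illustrative: a port with no real service behind it
    BANNER = b"login: "            # pretend to be a telnet-style service

    def run_honeypot():
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("0.0.0.0", PORT))
        listener.listen(5)
        with open("honeypot.log", "a") as log:
            while True:
                conn, addr = listener.accept()
                # No one is supposed to connect here, so every hit is worth recording.
                log.write("%s connection from %s:%d\n"
                          % (datetime.now().isoformat(), addr[0], addr[1]))
                log.flush()
                try:
                    conn.sendall(BANNER)           # respond just enough to keep the prober talking
                    conn.settimeout(5)
                    first_bytes = conn.recv(1024)  # capture whatever the intruder sends first
                    log.write("  received: %r\n" % first_bytes)
                except OSError:
                    pass
                finally:
                    conn.close()

    if __name__ == "__main__":
        run_honeypot()

Reviewing honeypot.log then tells the administrator who is probing the system and what they try first.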
However, \nthey are good deception systems that protect the network in much the same way \nas HIDS and NIDS. Since the goal for a honeypot is to deceive intruders and learn \nfrom them without compromising the security of the network, then it is important to \nfind a strategic place for the honeypot.\nTo many, the best location to achieve this goal is in the DMZ for those networks \nwith DMZs or behind the network firewall if the private network does not have a \nDMZ. The firewall location is ideal because of the following [5]:\nMost firewalls log all traffic going through it; hence, this becomes a good way to \n• \ntrack all activities of the intruders. By reviewing the firewall logs, we can determine \nhow the intruders are probing the honeypot and what they are looking for.\nMost firewalls have some alerting capability, which means that with a few \n• \nadditions to the firewall rule base, we can get timely alerts. Since the honeypot \nis built in such a way that no one is supposed to connect to it, any packets sent \nto it are most likely from intruders probing the system. And if there is any \noutgoing traffic coming from the honeypot, then the honeypot is most likely \ncompromised.\nThe firewall can control incoming and outgoing traffic. This means that the \n• \nintruders can find, probe, and exploit our honeypot, but they cannot compromise \nother systems.\nSo any firewall dedicated as a honeypot can do as long as it can control and log \ntraffic going through it. If no firewall is used, then dedicate any machine either in \nthe DMZ or behind a firewall for the purpose of logging all attempted accesses. \nFigure 13.3 shows the positioning of a honeypot.\nHOST-based IDS\nServer\nLaptop\nLaptop\nRouter\nFirewall\nInternet\nHoneypot Workstation Server\nFirewall\nDMZ\nIDS inside the firewall\nFig. 13.3  The positioning of a honeypot\n" }, { "page_number": 303, "text": "290\b\n13  System Intrusion Detection and Prevention\nHoneypots come in a variety of capabilities from the simplest monitoring one to \ntwo intruder activities to the most powerful monitoring many intruder activities. The \nsimplest honeypot is a port monitor which is a simple socket-based program that \nopens up a listening port. The program can listen to any designed port. For example, \nNukeNabbe, for Windows, listens on ports typically scanned for by hackers. It then \nalerts the administrator whenever such designated ports are being scanned. The sec­\nond type of honeypot is the deception system, which instead of listening quietly on \na port, interacts with the intruder, responding to him or her as if it were a real server \nwith that port number. Most deception systems implement only as much of the pro­\ntocol machine as necessary to trap 90% of the attacks against the protocol [10]. The \nnext type of honeypot is the multi-protocol deception system which offers most of \nthe commonly hacked protocols in a single toolkit. Finally, there is a full system that \ngoes beyond what the deception systems do to incorporate the ability to alert the \nsystem administrator on any exceptional condition. Other more complex honeypots \ncombine a full system with NIDSs to supplement the internal logging [10].\n13.6.3.1  Advantages of Honeypots\nPerhaps one would wonder why a system administrator would go through the pain \nof setting up, maintaining, and daily responding to honeypots. There are advantages \nto having honeypots on a network. 
They include the following [10]:\nSince NIDSs have difficulties distinguishing between hostile and nonhostile \n• \nactivities, honeypots are more suited to digging out hostile intrusions because \nisolated honeypots should not normally be accessed. So if they are accessed at \nall, such accesses are unwanted intrusions and they should be reported.\nA honeypot can attract would-be hackers into the trap by providing a banner that \n• \nlooks like a system that can easily be hacked.\n13.7  Response to System Intrusion\nA good intrusion detection system alert should produce a corresponding response. \nThe type of response is relative to the type of attack. Some attacks do not require \nresponses; others require a precautionary response. Yet others need a rapid and \nforceful response. For the most part, a good response must consist of preplanned \ndefensive measures that include an incident response team and ways to collect IDS \nlogs for future use and for evidence when needed.\n13.7.1  Incident Response Team\nAn incident response team (IRT) is a primary and centralized group of dedicated \npeople charged with the responsibility of being the first contact team whenever \n" }, { "page_number": 304, "text": "13.8  Challenges to Intrusion Detection Systems\b\n291\nan incidence occurs. According to Keao [6], an IRT must have the following \n­responsibilities:\nkeeping up-to-date with the latest threats and incidents,\n• \nbeing the main point of contact for incident reporting,\n• \nnotifying others whenever an incident occurs,\n• \nassessing the damage and impact of every incident,\n• \nfinding out how to avoid exploitation of the same vulnerability, and\n• \nrecovering from the incident.\n• \nIn handling an incident, the team must carefully do the following:\n• \nprioritize the actions based on the organization’s security policy but taking into \n• \naccount the following order:\nhuman life and people’s safety,\n• \nmost sensitive or classified data,\n• \ncostly data and files,\n• \npreventing damage to systems, and\n• \nminimizing the destruction to systems.\n• \nAssess incident damage: This is through doing a thorough check on all the \n• \nfollowing: system log statistics, infrastructure and operating system checksum, \nsystem configuration changes, changes in classified and sensitive data, traffic \nlogs, and password files.\nAlert and report the incident to relevant parties. These may include law \n• \nenforcement agencies, incident reporting centers, company executives, \nemployees, and sometimes the public.\nRecovering from incident: This involves making a post-mortem analysis of all \n• \nthat went on. This post-mortem report should include steps to take in case of \nsimilar incidents in the future.\n13.7.2  IDS Logs as Evidence\nFirst and foremost, IDS logs can be kept as a way to protect the organization in \ncase of legal proceedings. Some people tend to view IDS as a form of wiretap. \nIf sensors to monitor the internal network are to be deployed, verify that there \nis a published policy explicitly stating that use of the network is consent to \nmonitoring.\n13.8  Challenges to Intrusion Detection Systems\nWhile IDS technology has come a long way and there is an exciting future for it \nas the marriage between it and artificial intelligence takes hold, it faces many chal­\nlenges. 
Although there are IDS challenges in many areas, more serious challenges \nare faced in deploying IDSs in switched environments.\n" }, { "page_number": 305, "text": "292\b\n13  System Intrusion Detection and Prevention\n13.8.1  Deploying IDS in Switched Environments\nThere is a particularly hard challenge faced by organizations trying to deploy IDS in \ntheir networks. Network-based IDS sensors must be deployed in areas where they \ncan “see” network traffic packets. However, in switched networks, this is not pos­\nsible because by their very nature, sensors in switched networks are shielded from \nmost of the network traffic. Sensors are allowed to “see” traffic only from specified \ncomponents of the network.\nOne way to handle this situation has traditionally been to attach a network sensor \nto a mirror port on the switch. But port mirroring, in addition to putting an overhead \non the port, gets unworkable when there is an increase in traffic on that port because \noverloading one port with traffic from other ports may cause the port to bulk and \nmiss some traffic.\nSeveral solutions have been used recently including the following [9]:\nTapping. This involves deploying a line of passive taps that administrators \n• \ncan tap into to listen in on Ethernet connections; by sending “copies” of the \nframes to a second switch with dedicated IDS sensor, overloading a port can \nbe avoided.\nBy using standard Cisco access control lists (ACL) in a Cisco appliance that \n• \nincludes a Cisco Secure IDS, one can tag certain frames for inspection.\nAmong other issues still limiting IDS technology are [2]\n• \nFalse alarms. Though the tools have come a long way, and are slowly gaining \n• \nacceptance as they gain widespread use, they still produce a significant number \nof both false positives and negatives,\nThe technology is not yet ready to handle a large-scale attack. Because of its very \n• \nnature, it has to literally scan every packet, every contact point, and every traffic \npattern in the network. For larger networks and in a large-scale attack, it is not \npossible that the technology can be relied on to keep working with acceptable \nquality and grace.\nUnless there is a breakthrough today, the technology in its current state cannot \n• \nhandle very fast and large quantities of traffic efficiently.\nProbably the biggest challenge is the IDS’s perceived and sometimes exaggerated \n• \ncapabilities. The technology, while good, is not the cure of all computer network \nills that it is pumped up to be. It is just like any other good security tool.\n13.9  Implementing an Intrusion Detection System\nAn effective IDS does not stand alone. It must be supported by a number of other \nsystems. Among the things to consider, in addition to the IDS, in setting up a good \nIDS for the company network are the following [10]:\nOperating Systems. A good operating system that has logging and auditing \n• \nfeatures. Most of the modern operating systems including Windows, Unix, and \n" }, { "page_number": 306, "text": "13.10  Intrusion Prevention Systems (IPSs)\b\n293\nother variants of Unix have these features. These features can be used to monitor \nsecurity critical resources.\nServices. All applications on servers such as Web servers, e-mail servers, and \n• \ndatabases should include logging/auditing features as well.\nFirewalls. As we discussed in Chapter 11, a good firewall should have some \n• \nnetwork intrusion detection capabilities. Set those features.\nNetwork management platform. 
Whenever network management services such \n• \nas OpenView are used, make sure that they do have tools to help in setting up \nalerts on suspicious activity.\n13.10  Intrusion Prevention Systems (IPSs)\nAlthough IDS have been one of the cornerstones of network security, they have \ncovered only one component of the total network security picture. They have been \nand they are a passive component which only detects and reports without prevent­\ning. A promising new model of intrusion is developing and picking up momentum. \nIt is the intrusion prevention system (IPS), which according to Andrew Yee [12] is \nto prevent attacks. Like their counterparts, the IDS, IPS fall into two categories: \nnetwork-based and host-based.\n13.10.1  Network-Based Intrusion Prevention Systems (NIPSs)\nBecause NIDSs are passively detecting intrusions into the network without prevent­\ning them from entering the networks, many organizations in recent times have been \nbundling up IDS and firewalls to create a model that can detect and then prevent.\nThe bundle works as follows. The IDS fronts the network with a firewall behind \nit. On the detection of an attack, the IDS then goes into the prevention mode by \naltering the firewall access control rules on the firewall. The action may result in \nthe attack being blocked based on all the access control regimes administered by the \nfirewall. The IDS can also affect prevention through the TCP resets; TCP utilizes \nthe RST (reset) bit in the TCP header for resetting a TCP connection, usually sent \nas a response request to a nonexistent connection [12]. But this kind of bundling is \nboth expensive and complex, especially to an untrained security team. The model \nsuffers from latency – the time it takes for the IDS to either modify the firewall \nrules or issue a TCP reset command. This period of time is critical in the success of \nan attack.\nTo respond to this need, a new technology, the IPS, is making its way into the \nnetwork security arena to address this latency issue. It does this by both the intru­\nsion detection system inline with the firewall. Like in NIDS, NIPS architecture \nvaries from product to product, but there is a basic underlying structure to all. These \ninclude traffic normalizer, system service scanner, detection engine, and traffic \nshaper [12].\n" }, { "page_number": 307, "text": "294\b\n13  System Intrusion Detection and Prevention\n13.10.1.1  Traffic Normalizer\nThe normalizer is in the line of network traffic to intercept traffic, resolving the traf­\nfic that has abnormalities before it sends it on. As it normalizes traffic, it may come \nto a point where it will discard the packet that does not conform to the set security \npolicy criteria like if the packet has a bad checksum. It also does further activities \nof the firewall, thus blocking traffic based on the criteria that would normally be put \nin a firewall. The normalizer also may hold packet fragments and reassemble them \ninto a packet based on its knowledge of the target system. The knowledge of the tar­\nget system is provided from a reference table built by the System Service Scanner.\n13.10.1.2  The Detection Engine\nThe detection engine handles all pattern matching that is not handled by the normal­\nizer. These are patterns that are not based on protocol states.\n13.10.1.3  Traffic Shaper\nBefore traffic leaves the NIPS, it must go through the traffic shaper for classifica­\ntion and flow management. 
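Before describing the shaper's classification in more detail, it is worth pausing to sketch the older detect-then-block arrangement discussed above, in which the IDS reacts to an alert by altering firewall rules. This is a hedged illustration only: it shells out to the common Linux iptables command (which requires root privileges), and the alert record and severity threshold are invented for the example.

import subprocess

def block_source(ip: str) -> None:
    """Append a firewall rule dropping all traffic from `ip`.
    Assumes a Linux host with iptables and root privileges."""
    subprocess.run(
        ["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"],
        check=True,
    )

def handle_alert(alert: dict) -> None:
    # `alert` is a hypothetical record produced by a detection engine,
    # e.g. {"src": "203.0.113.7", "signature": "known exploit", "severity": 9}
    if alert.get("severity", 0) >= 8:
        block_source(alert["src"])
        print("blocked", alert["src"], "for", alert["signature"])

if __name__ == "__main__":
    handle_alert({"src": "203.0.113.7", "signature": "known exploit", "severity": 9})

The latency the text describes is precisely the gap between the alert reaching handle_alert and the DROP rule taking effect; an inline NIPS removes that gap by making the blocking decision in the same device that sees the packet. Returning to the traffic shaper: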
The shaper classifies traffic by protocol, although this may change in the future to include classification based on users and applications.
13.10.1.4  NIPS Benefits
In his extensive and thorough article "Network Intrusions: From Detection to Prevention," Andrew Yee gives the following NIPS benefits [12]:
• Zero Latency Prevention. Compared with the NIDS-and-firewall bundle, a NIPS reduces this latency drastically by handling detection and blocking within one piece of hardware instead of two.
• Effective Network Hygiene. Since many attacks are recycled attacks whose signatures are known, a NIPS removes these packets quickly, although it does not perform the more thorough anomaly analysis done by a NIDS.
• Simplified Management. Because what would otherwise be a bundle of a NIDS and a firewall is packaged into one piece of hardware, a NIPS reduces rack space and overall management effort.
Despite these advantages, NIPSs suffer from a number of problems, including the following [12]:
• Production Readiness. The technology is new and has not yet received the field-testing it needs to prove its effectiveness.
• High Availability. Because a NIPS sits inline and is the first device to touch network traffic, it must provide the high availability and fault tolerance expected of any head-on network device, which it may not yet be able to do.
" }, { "page_number": 308, "text": "13.11  Intrusion Detection Tools
295
• Detection Effectiveness. It has not yet been proven effective at detection, and it does not stop everything, falling short in the same ways as a NIDS.
13.10.2  Host-Based Intrusion Prevention Systems (HIPSs)
Just as NIDSs have host-based counterparts, NIPSs have corresponding HIPSs that reside on individual hosts. Most HIPSs work by sand-boxing, a process of confining applications to a defined set of acceptable-behavior rules. HIPS prevention occurs at an agent residing on the host. The agent intercepts system calls or system messages by using dynamic link library (DLL) substitution. The substitution is accomplished by injecting existing system DLLs with vendor stub DLLs that perform the interception, so function calls made to system DLLs actually jump to vendor stub code, where the bad calls are processed, evaluated, and dealt with. Most vendor stubs are kernel drivers that provide interception at the kernel level, where system calls can be intercepted most easily.
13.10.2.1  HIPS Benefits
Again like their cousins the HIDSs, HIPSs have benefits that include the following [12]:
• Effective Context-Based Prevention. HIPSs are the only solution for preventing attacks whose prevention requires context. Because HIPS agents reside on the protected host, they have complete context of the environment and are therefore more capable of dealing with such attacks.
• Effective Against Zero-Day Attacks. Since HIPSs use the sand-boxing method to deal with attacks, acceptable parameters for application or operating system service behavior can be defined, enabling the agent to prevent malicious attacks on the host even when no signature exists yet.
Although they have good benefits, HIPSs also have disadvantages based on limitations that hamper their rapid adoption. Among these limitations are [12]:
• Deployment Challenge. As discussed for HIDSs, there are difficulties in deploying remote agents on each and every host; the agents need updating and are susceptible to tampering.
• Difficulty of Effective Sandbox Configuration.
It can be a challenge to define \n• \neffective and nonrestrictive parameters on hosts.\nLack of Effective Prevention. Because with the use of sand-boxing, HIPS cannot \n• \nuse any standard prevention like signature prevention.\n13.11  Intrusion Detection Tools\nIntrusion detection tools work best when used after vulnerability scans have been \nperformed. They then stand watch. Table 13.1 displays several current ID tools.\n" }, { "page_number": 309, "text": "296\b\n13  System Intrusion Detection and Prevention\nAll network-based intrusion detection tools can provide recon (reconnaissance) \nprobes in addition to port and host scans. As monitoring tools, they give informa­\ntion on\nhundreds of thousands of network connections\n• \nexternal break-in attempts\n• \ninternal scans\n• \nmisuse patterns of confidential data\n• \nunencrypted remote logins or a Web sessions\n• \nunusual or potentially troublesome observed network traffic.\n• \nAll this information is gathered by these tools monitoring network components \n• \nand services that include the following:\nServers for\n• \nMail\n• \nFTP\n• \nWeb activities\n• \nDNS, RADIUS and others\n• \nTCP/IP ports\n• \nRouters, bridges, and other WAN connection\n• \nDrive Space\n• \nEvent log entries\n• \nFile modes and existence\n• \nFile contents\n• \nIn addition to the tools in Table 13.1, several other commercial and freeware IDS \nand scanning tools can be deployed on a network to gather these probes. The most \ncommon are the following:\nFlow-tools. A software package for collecting and processing NetFlow data from \n• \nCisco and Juniper routers\nTripwir. Monitors the status of individual files and determines whether they were \n• \nchanged.\nTCPdump. A freeware and one of the most popular IDS tool created by National \n• \nResearch Group.\nSnort. Another freeware and popular intrusion detection system that alerts and \n• \nreassembles the TCPdump format.\nTable 13.1  Some current ID tools\nName\t\nSource\nRealsecure v.3.0\t\nISS\nNet Perver 3.1\t\nAxent Technologies\nNet Ranger v2.2\t\nCISCO\nFlightRemohe v2.2\t\nNFR Network\nSessi-Wall-3, v4.0\t\nComputer Associates\nKane Security Monitor\t\nSecurity Dynamics\n" }, { "page_number": 310, "text": "Advanced Exercises\b\n297\nPortsentry. A port scan detector that shuts down attacking hosts, denying them \n• \naccess to any network host while notifying administrators.\nDragon IDS. Developed by Network Security Wizards, Inc., it is a popular \n• \ncommercial IDS.\nTCP Wrappers. Logs connection attempts against protected services and evaluates \n• \nthem against an access control list before accepting the connection.\nRealSecure. By Internet Security System (ISS). Very popular IDS.\n• \nShadow. The oldest IDS tool. It is also a freeware.\n• \nNetProwler. An intrusion-detection tool that prevents network intrusions through \n• \nnetwork probing, system misuse, and other malicious activities by users.\nNetwork Auditor gives the power to determine exactly what hardware and \n• \nsoftware is installed on the network and checks this for faults or changes.\nExercises\n  1.\t Are IDSs similar to firewalls?\n  2.\t Why are system intrusions dangerous?\n  3.\t Discuss the best approaches to implementing an effective IDS.\n  4.\t Can system intrusions be stopped? Support your response.\n  5.\t For a system without a DMZ, where is the best area in the network to install a \nhoneypot?\n  6.\t Why are honeypots important to a network? 
Discuss the disadvantages of hav­\ning a honeypot in the network.\n  7.\t Discuss three approaches of acquiring information needed to penetrate a net­\nwork.\n  8.\t Discuss ways a system administrator can reduce system scanning by hackers.\n  9.\t Discuss the benefits of system scanning.\n10.\t Discuss as many effective ways of responding to a system intrusion as possible. \nWhat are the best? Most implementable? Most cost effective?\nAdvanced Exercises\n  1.\t Snort is a software-based real-time network intrusion detection system devel­\noped by Martin Roesch. It is a good IDS that can be used to notify an admin­\nistrator of a potential intrusion attempt. Download and install Snort and start \nusing it.\n  2.\t The effectiveness of an IDS varies with the tools used. Research and develop a \nmatrix of good and effective IDS tools.\n  3.\t If possible, discuss the best ways to combine a firewall and a honeypot. Imple­\nment this combination and comment on its effectiveness.\n  4.\t Intrusion detection hybrids are getting better. Research the products on the mar­\nket and comment on them as far as their interfaces are concerned.\n" }, { "page_number": 311, "text": "298\b\n13  System Intrusion Detection and Prevention\n  5.\tDiscuss how exploits can be used to penetrate a network. Research and list 10 \ndifferent common exploits.\nReferences\n  1.\t Sundaram, A. An Introduction to Intrusion Detection, ACM Crossroads: Student Magazine. \nElectronic Publication. http://www.acm.org/crossroads/xrds2–4/intrus.html\n  2.\t Kizza, J. M. Computer Network Security and Cyber Ethics. McFarlans Publishers, Jefferson, \nNC: 2002\n  3.\t Bauer, K. R. AINT Misbehaving: A Taxonomy of Anti-Intrusion Techniques. http://www.\nsans.org/newlook/resources/IDFQA/aint.htm.\n  4.\t Handley, M, Paxson V. and Kreibich C. Network Intrusion Detection: Evasion, Traffic \nNormalization, and End-to-End Protocol Semantics. http://www.icir.org/vern/papers/norm-\nusenix-sec-01-html/norm.html\n  5.\t Proctor, P. The Practical Intrusion Detection Handbook. Upper Saddle River, NJ: Prentice \nHall, 2001.\n  6.\t Innella, P. The Evolution of Intrusion Detection Systems. Tetrad Digital Integrity, LC. http://\nwww.securityfocus.com/infocus/1514\n  7.\t Fink, G. A., Chappell B. L., Turner T. G., and O’Donoghue K. F.. “A Metric-Based Approach \nto Intrusion Detection System Evaluation for Distributed Real-Time Systems.” Proceedings \nof WPDRTS, April 15 – 17, 2002, Fort Lauderdale, FL.\n  8.\t Mullins, M. Implementing a network intrusion detection system. 16 May 2002. http://www.\nzdnet.com.au/itmanager/technology/story/0,2000029587,20265285,00.htm\n  9.\t Central Texas LAN Association Network- vs Host-Based Intrusion Detection. http://www.\nctla.org/newsletter/1999/0999nl.pdf.\n10.\t Panko, R. R. Corporate Computer and Network Security. Upper Saddle River, NJ: Prentice \nHall, 2004.\n11.\t FAQ: Network Intrusion Detection Systems. http://www.robertgraham.com/pubs/network-\nintrusion-detection.html\n12.\t Yee, A. “Network Intrusions: From Detection to Prevention.” International Journal of Infor­\nmation Assurance Professionals, 2003, 8(1).\n" }, { "page_number": 312, "text": "J.M. 
Kizza, A Guide to Computer Network Security, Computer Communications and \n­Networks, DOI 10.1007/978-1-84800-917-2_14, © Springer-Verlag London Limited 2009\n\b\n299\nChapter 14\nComputer and Network Forensics\n14.1  Definition\nThe proliferation of computer technology, including wireless technology and tele­\ncommunication, the plummeting prices of these technologies, the miniaturization \nof computing and telecommunication devices, and globalization forces have all \ntogether contributed to our ever growing dependence on computer technology. This \ngrowing dependence has been a bonanza to computer criminals who have seen this \nas the best medium to carry out their missions. In fact, Richard Rubin [1] has called \nthis new environment a tempting environment to cyber criminals and he gives seven \ncompelling reasons that cause such temptations. They are as follows:\nSpeed. Both computer and telecommunication technology have greatly \n• \nincreased the speed of transmission of digital data, which means that one can \nviolate common decency concerning transmission of such data speedily and not \nget caught in the act. Also, the act is over before one has time to analyze its \nconsequences and one’s guilt.\nPrivacy and Anonymity. There is a human weakness that if no one is a witness \n• \nto an act one has committed, then there is less to no guilt on the doer’s part. \nPrivacy and anonymity, both of which can be easily attained using this new \ntechnology, support this weakness enabling one to create what can be called \n“moral distancing” from one’s actions.\nNature of Medium. The nature of storage and transmission of digital information \n• \nin the digital age is different in many aspects from that of the Guttenberg-print \nera. The electronic medium of the digital age permits one to steal information \nwithout actually removing it. This virtual ability to remove and leave the original \n“untouched” is a great temptation, creating an impression that nothing has been \nstolen.\nAesthetic Attraction. Humanity is endowed with a competitive zeal to achieve far \n• \nand beyond our limitations. So we naturally get an adrenaline high whenever we \naccomplish a feat that seems to break down the efforts of our opponents or the \nwalls of the unknown. It is this high that brings about a sense of accomplishment \nand creative pride whenever not so well known creative individuals come up \nwith elegant solutions to technological problems. This fascination and a sense of \n" }, { "page_number": 313, "text": "300\b\n14  Computer and Network Forensics\naccomplishment create an exhilaration among criminals that mitigates the value \nand the importance of the information attacked and justifies the action itself.\nIncreased availability of potential victims. There is a sense of amusement and \n• \nease to know that with just a few key strokes, one’s message and action can be \nseen and consequently felt over wide areas and by millions of people. This sense \nunfortunately can very easily turn into evil feelings as soon as one realizes the \npower he or she has over millions of invisible and unsuspecting people.\nInternational Scope. The global reach of cyberspace creates an appetite for \n• \ngreater monetary, economic, and political powers. The ability to cover the globe \nin a short time and to influence an entire global community can make a believer \nout of a nonbeliever.\nEnormous Powers. 
The international reach, the speed, and the distancing of one \n• \nself from the act endows enormous powers to an individual which may lead to \ncriminal activities.\nThere are reasons to believe Rubin because the rate of computer crime is on the \nrise. In fact, data from CERT Cyber Crime Reporting Center show steep increases in \ncomputer crimes from 6 reported incidents in 1988 climbing to 76,404 incidents in \nthe second quarter of 2003 [2]. Also data from InteGov International, a division of \nInternational Web Police, in Table 14.1 shows similar increases. Fighting such rising \ncrimes is a formidable task. It includes education, legislation, regulation, enforce­\nment through policing, and forensics. In both computer forensics and network, the \nbattle starts in the technical realms of investigative science that require the knowl­\nedge or skills to identify, track, and prosecute the cyber-criminal. But before we \ndiscuss network forensics, which some call Internet forensics, let us start by looking \nat computer forensics. We will come back to network forensics in ­Section 14.3.\n14.2  Computer Forensics\nBy definition, computer forensics is the application of forensic science techniques \nto computer-based material. This involves the extraction, documentation, exami­\nnation, preservation, analysis, evaluation, and interpretation of computer-based \nmaterial to provide relevant and valid information as evidence in civil, criminal, \nadministrative, and other cases. In general, computer forensics investigates what \ncan be retrieved from the computer’s storage media such as hard disk and other \ndisks. In Section 14.3, we will contrast it with network forensics. Because we are \ndealing with computer-based materials in computer forensic science, the focus is on \nthe computer, first as a tool and as a victim of the crime. The computer as a tool in \nthe crime is merely a role player, for example, as a communication tool, if the crime \nTable 14.1  International Criminal and Civil Complaints Reported to InterGov International [3]\nYear\n1993 1994 1995\n1996\n1997\n1998\n1999\n2000\n2001\n2002\nIncidents 640\n971\n1,494\n4,322 12,775 47,614 94,291 289,303 701,939\n1,351,897\n" }, { "page_number": 314, "text": "14.2  Computer Forensics\b\n301\nis committed using a computer network, or as a storage facility where the bounty \nis stored on the computer files. As a victim, the computer is now the target of the \nattack and it becomes the focus of the forensic investigation. In either case, the com­\nputer is central to the investigations because nearly all forensic cases will involve \nextracting and investigating data that is retrieved from the disks of the computer, \nboth fixed and movable, and all its parts.\n14.2.1  History of Computer Forensics\nThe history of computer forensics is tied up in the history of forensic science. Accord­\ning to Hal Berghel [4], the art of forensic science is actually derived from forensic \nmedicine, an already recognized medical specialty. Forensic medicine’s focus was \nautopsy examination to establish the cause of death. Although computers were in \nfull use by the 1970s, mainly in big organizations and businesses such as banks and \ninsurance companies, crimes involving computers as tools and as victims were very \nrare. 
One of the first recorded computer crimes during that time period was based \non “interest rounding.” Interest rounding was a round robin policy used by banks to \nfairly distribute truncated floating point interest on depositors’ accounts. The banks \nwould round a depositor’s interest points to a full cent. Anything less than a cent \nwould be moved to the next account in a round robin fashion.\nProgrammers, however, saw this as a source of ill-gotten wealth. They estab­\nlished an account to which they moved this less than a cent interest. With big banks \nwith many depositors, this would add up. Because these programmers, like all com­\nputer criminals of the time, were highly educated, all computer crimes of the period \nwere “white-collar” crimes. Law enforcement agencies of the time did not know \nenough about these types of computer crimes. Even the tools to gather evidence \nwere not available. In a few cases where tools were available, they were often home \nmade [5].\nIt was not until the mid-1980s that some computer forensic tools such as X-Tree \nGold and Norton Disk Editor became available. With these tools, investigators were \nable to recognize file types and were able to extract data on DOS-based disks. The \n1990s saw heightened activities in computer crime and forensic investigations. The \ndecade also produced an assortment of fine forensic tools that included the Forensic \nToolKit.\nAlthough the development of computer forensics started slow, it has now evolved \nas technology developed to where we are today. The increasing use of computers by \nlaw enforcement investigators and prosecutors and, as noted earlier, the widespread \nand rampant increase in computer-related crimes has led to the development of \ncomputer forensics. The primary focus and methodology, although still embedded \nin the basic physical forensics, has been tracing and locating computer hardware, \nrecovering hidden data from the digital storage media, identifying and recovering \nhidden data, decrypting files, decomposing data, cracking passwords, and bypass­\ning normal operating systems security controls and permissions [6].\n" }, { "page_number": 315, "text": "302\b\n14  Computer and Network Forensics\n14.2.2  Elements of Computer Forensics\nThere are three key elements in any forensic investigations: the material itself, its \nrelevance to the case in question, and the validity of any observations/conclusions \nreached by the examiner. Since computer forensics is very similar to ordinary physi­\ncal forensics, these elements remain the same in computer forensics.\n14.2.2.1  The Material\nIn both roles the computer plays in forensic science, the cases we have given above, the \nmaterials involved are both electronic and physical. Physical material investigation falls \nwithin the realms of the traditional police investigations where files and manila enve­\nlopes and boxes are all examined. The electronic form data is a little trickier to deal with. \nIt may be data that does exist in hard copy, such as e-mail text, e-mail headers, email file \nattachments, electronic calendars, Web site log files, and browser information. It may be \ndeleted documents that must be recovered and reconstructed because deleted data does \nnot necessarily disappear. Even when the reference to the deleted file is removed from \nthe computer’s directory, the bits that make up the file often remain on the hard drive \nuntil they are overwritten by new data. 
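A small sketch makes this persistence of "deleted" data concrete. Given a raw image of a drive (acquired with any bit-stream imaging tool), one can scan the bytes directly for the signatures of known file types, whether or not the file system still lists the files. This is an illustration only: the signature table is deliberately tiny, the image path "evidence.dd" is hypothetical, and a real carving tool would stream the image rather than read it all into memory.

SIGNATURES = {
    b"\xff\xd8\xff": "JPEG image",
    b"%PDF-": "PDF document",
    b"PK\x03\x04": "ZIP archive (also DOCX/XLSX)",
}

def scan_image(path: str):
    """Scan a raw disk image for known file-type signatures and report
    the byte offsets where they occur. For simplicity the whole image is
    read into memory; a real tool would process it in chunks."""
    with open(path, "rb") as img:
        data = img.read()
    for sig, name in SIGNATURES.items():
        start = 0
        while (pos := data.find(sig, start)) != -1:
            yield pos, name
            start = pos + 1

if __name__ == "__main__":
    for offset, kind in sorted(scan_image("evidence.dd")):
        print(f"{kind} header at byte offset {offset}")

Hits that fall in regions the file system reports as free space are exactly the "deleted but not yet overwritten" material described above.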
Beside deleted data, data also may be encrypted \nor password protected, making it more difficult to get in its original form.\nIf the computer is the focus of the investigation, then information from all system \ncomponents is part of the required data. For example, network nodes and standalone \npersonal computer operating systems create a great deal of administrative, manage­\nment, and control information that is vital in the investigation.\n14.2.2.2  Relevance\nOnce the existence of the material has been established, the next important step is \nto make sure that the material is relevant. The relevancy of the material will depend \non the requesting agency, nature of the request, and the type of the case in question. \nThe requesting agencies are usually one of the following:\nThe victim\n• \nGovernment\n• \nInsurance companies\n• \nThe courts\n• \nPrivate business\n• \nLaw enforcement\n• \nPrivate individuals\n• \nWe will talk more about relevancy when we discuss analysis of evidence.\n14.2.2.3  Validity\nThe question of validity of data is tied up with the relevance of data. It is also based \non the process of authentication of data. We are going to discuss this next.\n" }, { "page_number": 316, "text": "14.2  Computer Forensics\b\n303\n14.2.3  Investigative Procedures\nBoth computer and network forensics (Section 14.3) methodologies consist of three \nbasic components that Kruse and Heiser [7] both call the three As of computer \nforensics investigations. These are as follows: acquiring the evidence, taking care \nto make sure that the integrity of the data is preserved; authenticating the validity of \nthe extracted data – this involves making sure that the extracted data is as valid as \nthe original; and analyzing the data while keeping its integrity.\n14.2.3.1  Looking for Evidence\nAs Kruse puts it, when dealing with computer forensics, the only thing to be sure of \nis uncertainty. So the investigator should be prepared for difficulties in searching for \nbits of evidence data from a haystack. The evidence usually falls into the following \ncategories:\nImpressions: This include fingerprints, tool marks, footwear marks, and other \n• \ntypes of impressions and marks.\nBioforensics: This includes blood, body fluids, hair, nail scrapings, and blood \n• \nstain patterns.\nInfoforensics: This includes binary data fixed in any medium such as on CDs, \n• \nmemory, and floppies.\nTrace Evidence: includes residues of things used in the committing of a crime \n• \nlike arson accelerant, paint, glass and fibers.\nMaterial evidence: This includes physical materials such as folders, letters, and \n• \nscraps of papers.\nAs you start, decide on what the focus of the investigation is. At the start, decide on\nWhat you have to work with: This may include written and technical policies, \n• \npermissions, billing statements, and system application and device logs.\nWhat you want to monitor: Includes employer and employee rights, Internet \n• \ne-mail, and chat room tracking.\nDeciding what to focus on requires the investigator to make a case assessment \nthat identifies the case requirements. 
To do this, the investigator must establish the \nfollowing [5]:\nSituation – gives the environment of the case.\n• \nNature of the case – broadly states the nature of the case.\n• \nSpecifics about the case – states out what the case is about\n• \nTypes of evidence to look for – stating physical and electronic data and the \n• \nmaterials to be collected and examined.\nOperating system in use at the time of the incident.\n• \nKnown disk formats at the time of the incident.\n• \nLocation of evidence both physical and electronic.\n• \n" }, { "page_number": 317, "text": "304\b\n14  Computer and Network Forensics\nOnce this information is collected, the investigation may start creating the profile \nof the culprit. At this point, you need to decide whether to let the suspect systems \nidentified above run for a normal day, run periodically, or be pulled altogether if \nsuch actions will help the evidence gathering stage. Pulling the plug means that you \nwill make copies of the computer content and work with the copies while keeping \nthe original intact. Make sure that the system is disconnected and that all that may be \naffected by the disconnection such volatile data is preserved before the disconnec­\ntion. Make duplication and imaging of all the drives immediately, and ensure that the \nsystem remains in its “frozen” state without being used during the investigation.\nOne advantage of pulling the plug is to “freeze” the evidence and prevent it from \nbeing contaminated with either new use or modifications or alterations. Also freez­\ning the system prevents errors committed after the reported incident and before a \nfull investigation is completed. However, freezing the system may result in several \nproblems, including the destruction of any evidence of any ongoing processes.\nOn the other hand, working with a live system has its share of problems. For \nexample, the intruder may anticipate a “live” investigation that involves an investi­\ngator working with a system still in operation. If the intruder anticipates such action, \nthen he or she may alter the evidence wherever the evidence is well ahead of the \ninvestigator, thus compromising the validity of the evidence.\nWhether you use a “live” system or a “frozen” one, you must be careful in the \nuse of the software, both investigative and system software. Be careful and weigh \nthe benefits of using software found on the system or new software. A number of \nforensic investigators prefer not to use any software found on the system for fear of \nusing compromised software. Instead they use new software on the copy system, \nincluding system software. Another variation used by some investigators is to verify \nthe software found on the system and then use it after. Each of these methods has \nadvantages and disadvantages and one has to be careful to choose what best serves \nthe particular situation under review.\n14.2.3.2  Handling Evidence\nThe integrity of the evidence builds the validity of such evidence and consequently \nwins or loses a case under investigation because it is this evidence that is used in \nthe case to establish the facts upon which the merits, or lack of, are based. It is, \ntherefore, quite important and instructive that extreme care must be taken when \nhandling forensic evidence. Data handling includes extraction and the establish­\nment of a chain-of-custody. The chain-of-custody itself involves packaging, stor­\nage, and transportation. 
These three form the sequence of events along the way from \nthe extraction point to the court room. This sequence of events is traceable if one \nanswers the following questions:\nWho extracted the evidence and how?\n• \nWho packaged it?\n• \nWho stored it, how, and where?\n• \nWho transported it?\n• \n" }, { "page_number": 318, "text": "14.2  Computer Forensics\b\n305\nThe answers to these questions are derived from the following information [5]:\nCase:\n• \nCase number – a number assigned to the case to uniquely identify the case\n• \nInvestigator – name of the investigator and company affiliation\n• \nNature of the case – a brief description of the case.\n• \nEquipment involved:\n• \nFor all computing equipment carefully describe the equipment including the \n• \nmaker, vendor, model and serial number.\nEvidence:\n• \nLocation where it is recorded\n• \nWho recorded it\n• \nTime and date of recording\n• \nThis information may be filled in a form called the chain-of-evidence form.\n14.2.3.3  Evidence Recovery\nThe process of evidence extraction can be easy or complicated depending on the \nnature of the incident and the type of computer or network upon which the incident \ntook place. The million dollar question in evidence extraction is What do I extract \nand what do I leave behind? To answer this question, remember that if you are in an \narea extracting data and you remove what you think is sufficient evidence only to \ncome back for more, you may find that what you left behind is of no value anymore, \na big loss. So the rule of thumb is extract and collect as much as you can so that the \nreturn trip is never needed.\nWhat are the candidates for evidence extraction? There are many, including hard­\nware such as computers, printers, scanners, and network connectors such as modems, \nrouters, and hubs. Software items include systems programs and logs, application \nsoftware, and special user software. Documentation such as scrap paper and any­\nthing printed within the vicinity are also candidates and so are materials such as \nbackup tapes and disks, CDs, cassettes, floppy and hard disks, and all types of logs.\nIn fact, according to Sammes and Jenkinson [8], an investigator should start the \njob only when the following items are at hand:\nAn adequate forensic toolkit which may be a complete forensic computer \n• \nworkstation\nA search kit\n• \nSearch and evidence forms and sketch plan sheets\n• \nEvidence bag\n• \nStill, digital, and video cameras\n• \nDisk boxes\n• \nMobile phone\n• \n" }, { "page_number": 319, "text": "306\b\n14  Computer and Network Forensics\nBlank floppy disks\n• \nA flashlight\n• \nBitstream imaging tool\n• \nEvidence container\n• \nWith these at hand, the investigator then starts to gather evidence by performing \nthe following steps [5]:\nArrange for interviews with all parties involved in the case. This gives the \n• \ninvestigator a chance to collect more evidence and materials that might help the \ncase.\nFill out the evidence form.\n• \nCopy the digital evidence disk by making a bit-stream copy or bit-by-bit copy \n• \nof the original disk. This type of disk copying is different from a simple disk \ncopy which cannot copy deleted files or e-mail messages and cannot recover file \nfragments. Bit-stream copying then creates a bit-stream image. As we will see \nin Section 14.4, there are several tools on the market to do this. Digital evidence \ncan be acquired in three ways:\nCreating a bit-stream of disk-to-image file of the disk. 
This is the most \n• \ncommonly used approach\nMaking a bit-stream disk-to-disk used in cases that a bit-by-bit imaging \n• \ncannot be done due to errors.\nMaking a sparse data copy of a file or folder.\n• \nAlways let the size of the disk, the duration you have to keep the disk, and the \ntime you have for data acquisition determine which extraction method to use. For \nlarge original source disks, it may be necessary to compress the evidence or the \ncopy. Computer forensics compress tools are of two types: lossless compression \nwhich does not discard data when it compresses a file and lossy compression which \nloses data but keeps the quality of the data upon recovery. Only lossless compres­\nsion tools such as WinZip or PKZip are acceptable in computer forensics. Other \nlossless tools that compress large files include EnCase and SafeBack. Compressed \ndata should always have MD5, SHA-1 hash, or Cyclic Redundancy Check (CRC) \ndone on the compressed data for security after storage and transportation.\nFor every item of the evidence extracted, assign a unique identification number. \nAlso for each item, write a brief description of what you think it is and where it was \nrecovered. You may also include the date and time it was extracted and by whom. It \nis also helpful, where possible, to keep a record of the evidence scene either by tak­\ning a picture or by video. In fact where possible, it is better to video tape the whole \nprocess including individual items. This creates an additional copy, a video copy, \nof the evidence. After all the evidence has been collected and identified and catego­\nrized, it must be stored in a good clean container that is clearly labeled and safely \nstored. It is important to store the evidence at the most secure place possible that is \nenvironmentally friendly to the media on which the evidence is stored. For exam­\nple, the place must be clean and dry. If the evidence was videotaped, the video must \n" }, { "page_number": 320, "text": "14.2  Computer Forensics\b\n307\nbe stored in an area where video recordings can last the longest. Where it requires \nseizure of items, care must be taken to make sure that evidence is not destroyed. If \nit requires dismantling the evidence object for easy moving and transportation, it is \nprudent that there be an identical reconstruction. Every electronic media item seized \nmust be taken for examination.\nWhen there is a need to deal with an unknown password, several approaches can \nbe used. These include second guessing, use of back doors, an undocumented key \nsequence that can be made available by manufacturers, and use of a back up.\nAnd finally the investigator has to find a way of dealing with encrypted evi­\ndence. If the encrypting algorithm is weak, there are always ways and software to \nbreak such encryptions. However, if the algorithms are of a strong type, this may be \na problem. These problems are likely to be encountered in encrypted e-mails, data \nfiles on hard drives, and hard disk partitions. Several products are available to deal \nwith these situations [8]:\nFor encrypted e-mails – use PGP\n• \nFor encrypted hidden files – use Encrypted Magic Folders (http://www.pc-magic.\n• \ncom), Cryptext (http://www.tip.net.au/∼njpayne ), and Data Fortess (http://www.\nmontgomery.hypermart.net/DataFotress).\nFor hard drive encrypted files – use BestCrypt (http://www.jectico.sci.fi/home.\n• \nhtml). 
Others are: IDEA, Blowfish, DES, Triple-DES, and CAST.\n14.2.3.4  Preserving Evidence\nThere is no one standard way for securing evidence. Each piece of evidence, pack­\ning, and storage are taken on a case-by-case basis. Packaging the evidence is not \nenough to preserve its integrity. Extra storage measures must be taken to preserve \nthe evidence for a long time if necessary. One of the challenges in preserving digital \nevidence is its ability to disappear so fast. In taking measures to preserve evidence, \ntherefore, this fact must be taken into account. Evidence preservation starts at the \nevidence extraction stage by securing the evidence scene from onlookers and other \ninterested parties. If possible, allow only those involved in the extraction to view it. \nSeveral techniques are used including the following:\nCatalog and package evidence in a secure and strong anti-static, well-padded, \n• \nand labeled evidence bag that can be secured by tape and zippers. Make sure that \nthe packaging environment keeps the evidence uncontaminated by cold, hot, or \nwet conditions in the storage bag.\nBack up the original data including doing a disk imaging of all suspected media. \n• \nCare must be taken especially when copying a disk to another disk, it is possible \nthat the checksum of the destination disk always results in a different value than \na checksum of the original disk. According to Symantec, the difference is due to \ndifferences in disk geometry between the source and destination disks [9]. Since \nGhost, a Norton forensic product, does not create an exact duplicate of a disk but \n" }, { "page_number": 321, "text": "308\b\n14  Computer and Network Forensics\nonly recreates the partition information as needed and copies the contents of the \nfiles, investigators using Ghost for forensic duplication must be careful as it does \nnot provide a true bit-to-bit copy of the original.\nDocument and timestamp, including the date, every and all steps performed \n• \nin relation to the investigation, giving as many details as possible; however, \ninsignificant the steps is. Note all network connections before and during the \ninvestigation.\nImplement a credible control access system to make sure that those handling the \n• \nevidence are the only ones authorized to handle the evidence.\nSecure your data by encryptions, if possible. Encryption is very important in \n• \nforensic science because it is used by both the investigator and the suspect. It \nis most commonly used by the suspect to hide content and by the investigator \nto ensure the confidentiality of the evidence. The integrity of the evidence is \nmaintained when it has not been altered in any way. Encryption technology can \nalso verify the integrity of the evidence at the point of use. Investigators must \ncheck to see that the encrypted system under examination has a key recovery \nsystem. It makes the job of the investigators ten times as more difficult if they \nencounter encrypted evidence. Data can become intercepted during transit.\nPreserve the evidence as much as possible by not adding or removing software, \n• \nusing only trusted tools, not using programs that use the evidence media.\nIf possible validate and or authenticate your data by using standards, such as \n• \nKerberos, and using digital certificates, biometrics, or timestamping. All these \ntechnologies are used in authentication, validation, and verification. 
The time \nwhen an object was signed always affects its trustworthiness because an expired \nor a revoked certificate is worthless. Timestamping is useful when collecting \nevidence because it provides incontestable proof that the digital evidence was in \nexistence at a specific time and date and has not been changed since that date.\nIn addition to timestamping, the images of the hard drives and any volatile data \nsaved before “freezing” the system, the following can also be timestamped [7]:\nOngoing collection of suspect activities including log files, sniffer outputs, and \n• \noutput from intrusion detection system\noutput from any reports or searches performed on a suspect machine, including \n• \nall files and their associated access times\ndaily typed copies of investigator’s notes.\n• \nNote, however, that criminals can use all these same tools against investigators.\n14.2.3.5  Transporting Evidence\nWhere it is necessary to transport the evidence either for safer security, more space, \nor to court, great care must be taken to safeguard the integrity of the evidence you \nhave painstakingly collected and labored to keep safe and valid. Keep in mind \nthat transportation pitfalls can be found across the transportation channel from the \n" }, { "page_number": 322, "text": "14.2  Computer Forensics\b\n309\n­starting point all the way to the destination. Be aware that containers can be opened \nmidway even from trusted individuals. So find the most secure, trusted, and verified \nway to transport the evidence. This may include constant and around the clock mon­\nitoring, and frequent checks including signatures of all those handling the evidence \nalong the way. The goal is to maintain a chain of custody to protect the integrity of \nthe evidence and to make it difficult for anybody to deny the evidence because it \nwas tempered with.\nSince during transportation the integrity of data may be affected, it is impor­\ntant to use strong data hiding techniques such as encryptions, steganography, pass­\nword-protected documents, and other ways. Data hiding, a form of steganography, \nembeds data into digital media for the purpose of identification and annotation. \nSeveral constraints, however, affect this process: the quantity of data to be hidden, \nthe need for invariance of this data under conditions where a “host” signal is subject \nto distortions, and the degree to which the data must be immune to interception, \nmodification, or removal by a third party [10].\nOne of the important goals of data hiding in digital media in general and com­\nputer forensics in particular is to provide assurance of content integrity. Therefore, \nto ensure content integrity, the hidden data must stay hidden in a host signal even \nif that signal is subjected to degrading manipulation such as filtering, resampling, \ncropping, or lossy data compression.\nSince data can be compromised during transit, there are ways to test these \nchanges. Among these are the use of parity bits, redundancy checks used by com­\nmunication protocols, and checksums. Even though these work, unfortunately they \ncan all fall prey to deliberate attempts by hackers using simple utilities that can \nrender them all useless. To detect deliberate attempts at data during transmission, a \nbetter technique is a cryptographic form of checksum called a hash function. Apply­\ning a hash function to data results in a hash value or a message digest. 
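As a short illustration – a sketch using Python's standard hashlib module, with hypothetical file names – computing a digest of an evidence file before transport and comparing it with a digest computed on arrival detects any change in transit:

import hashlib

def file_digest(path: str, algorithm: str = "sha256") -> str:
    """Return the hexadecimal message digest of a file, read in chunks
    so that large evidence files do not have to fit in memory. The MD5
    and SHA-1 algorithms mentioned in the text work the same way via
    algorithm="md5" or algorithm="sha1"."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Before transport: record the digest alongside the chain-of-custody form.
# before = file_digest("evidence_disk.img")
# After transport: recompute and compare.
# assert file_digest("evidence_disk.img") == before, "evidence altered in transit"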
A robust hash \nalgorithm such as MD5 and SHA-1 can deliver a computationally infeasible test of \ndata integrity. Hash algorithms are used by examiners in two ways: to positively \nverify that data has been altered by comparing digests taken before and after the \nincident and to verify that evidence have not been altered.\nAnother way to safeguard evidence in transition, if it has to be moved either as a \ndigital medium carried by somebody or electronically transferred, is data compres­\nsion. As we have seen in Section 14.2.3.4, data compression can be used to reduce \nthe size of data objects such as files. Since compression is a weak form of encryp­\ntion, a compressed file can be further encrypted for more security.\n14.2.4  Analysis of Evidence\nAfter dealing with the extraction of evidence, the identification, storage, and trans­\nportation, there now remains the most important and most time consuming part of \ncomputer and network forensic science, that of analysis. As Kruse et al. noted, the \nmost important piece of advice in forensics is “don’t take anything for granted.” \n" }, { "page_number": 323, "text": "310\b\n14  Computer and Network Forensics\nForensic evidence analysis is painstakingly slow and should be thorough. The pro­\ncess of analyzing evidence done by investigators to identify patterns of activity, file \nsignature anomalies, unusual behaviors, file transfers and several other trends to \neither support or reject the case, is the most crucial and time consuming in forensic \ninvestigation and should depend on the nature of the investigation and amount of \ndata extracted. For example, nonlitigation cases may not involve as much care as \nthe care needed for litigation ones because in litigation cases, there must be enough \nevidence of good quality to fend off the defense. According to Kruse, the following \nthings should not be taken for granted [7]:\nExamine shortcuts, Internet, Recycle Bins, and the Registry\n• \nReview the latest release of the system software with an eye on new methods of \n• \ndata hiding.\nCheck every data tape, floppy disk, CD-ROM, DVD, and Flash Memory found \n• \nduring evidence extraction.\nLook in books, manuals, under keyboards, on the monitor, and everywhere where \n• \npeople usually hide passwords and other pertinent information.\nDouble-check the analysis.\n• \nRe-examine every file and folder, logfiles, and print spool.\n• \nRecover any encrypted or archived file.\n• \nOnce the evidence has been acquired and carefully preserved, then the analysis \nprocess begins. Make sure that all evidence is received at the examination center. All \nitems must be in sealed evidence bags. An external examination of all items must \nbe done before the internal examinations can begin. For disks and other recordable \nmedia, an imaging of each must be done. 
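In spirit, the imaging step is a byte-for-byte copy of the source medium with a digest recorded for later verification. The stripped-down sketch below shows only that core idea; real imaging tools also handle read errors, bad sectors, and hardware write blockers, and the device and file paths shown are illustrative.

import hashlib

def image_device(source: str, destination: str, chunk_size: int = 1 << 20) -> str:
    """Copy `source` (a raw device node or file) byte for byte to
    `destination` and return the SHA-256 digest of everything copied."""
    digest = hashlib.sha256()
    with open(source, "rb") as src, open(destination, "wb") as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            dst.write(chunk)
            digest.update(chunk)
    return digest.hexdigest()

# Example (requires appropriate privileges and, in practice, a write
# blocker on the source device):
# print(image_device("/dev/sdb", "evidence_sdb.img"))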
Currently, tools for this job include DriveSpy, EnCase, CaptureIT, FTK, and dd, to name a few.
It is normal to start with the hard drives, proceeding as follows [11]:
•  Hard drive physical analysis – seeking information on partitions, damaged sectors, and any data outside of the partitions.
•  Hard drive logical analysis – seeking information on active file metadata, context of information, file paths, file sizes, and file signatures.
•  Additional hard drive analysis – looking for active files, file system residues, erased files, electronic communications, and peripheral devices.
After dealing with the hard drives, continue with other peripherals, documentation, and every other component that is relevant to the incident. The tools most used in this endeavor are discussed in Section 14.4. It is also important to note here that the amount of work done, and sometimes the quality of the analysis, may depend on the platform you use. Forensic investigators are often religiously devoted to their operating systems, but it is advisable to use whatever makes you comfortable.
The analysis itself should not be constrained; it should take any direction and any form. Specifically, it should focus on devices and on the storage media. Although we prefer the analysis to be loose and flowing, keeping close to the following guidelines is helpful [5]:
•  Clearly know what you are looking for.
•  Have a specific format for classifying data.
•  Have and keep tools for data reconstruction.
•  Request or demand cooperation from agencies and departments, especially where you have to ask for help in evidence protection.
•  Use only recently wiped media, such as disks, as target media to store evidence. There are several tools to clean-wipe a disk.
•  Inventory the hardware and software on the suspect system, because all of it may be part of the investigation.
•  On the suspect system, remove the hard drive(s), noting the time and date in the system's CMOS.
•  On the image disk:
   –  List and check all directories, folders, and files.
   –  Examine the contents of each. Where tools are needed to recover passwords and files, acquire such tools.
   –  Note where every item found on the disk(s) was found, and identify every executable, noting its function(s).
14.2.4.1  Data Hiding
While analyzing evidence data, it is very important to pay particular attention to data hiding. There are many ways data can be hidden in a file system, including the following.
Deleted Files
Deleted files can be recovered manually using a hex editor. When a file on a Windows platform is deleted, the first character of the directory entry is changed to a sigma character – hex value E5. The operating system takes this sigma to indicate that the entry should not be displayed because the file has been deleted. The entry in the File Allocation Table (FAT) is also changed to zero, indicating unused sectors that are therefore available to the operating system for allocation.
Similarly, MS-DOS does not remove data in the clusters of files declared as deleted. It merely marks them as available for reallocation. It is, therefore, quite possible to recover a file that has been deleted, provided the clusters of the file have not been reused. DOS programs such as UNERASE and UNDELETE try to recover such files.
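As a rough illustration of what such recovery utilities look for, the sketch below scans a raw dump of a FAT directory region for 32-byte entries whose first byte is the hex value E5 described above. The input file and function name are hypothetical.

```python
def find_deleted_entries(raw_dir_bytes):
    """Scan a raw FAT directory region (a bytes object) and report 8.3-style
    entries whose first byte is 0xE5, i.e., entries marked as deleted."""
    deleted = []
    for offset in range(0, len(raw_dir_bytes) - 31, 32):
        entry = raw_dir_bytes[offset:offset + 32]
        if entry[0] == 0xE5:                      # deleted-entry marker
            name = bytes([0x5F]) + entry[1:8]     # first character is lost
            ext = entry[8:11]
            deleted.append((offset,
                            name.decode("ascii", "replace").strip(),
                            ext.decode("ascii", "replace").strip()))
        elif entry[0] == 0x00:                    # 0x00 marks end of directory
            break
    return deleted

# Hypothetical usage on a saved dump of a directory region:
# with open("evidence/root_dir.bin", "rb") as f:
#     print(find_deleted_entries(f.read()))
```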
Commercial utilities such as Norton Disk Editor are even more effective. Note that the operating system does not do anything to the data in these sectors until it reallocates the sectors to another file; only then is the data in the sectors overwritten. Before that happens, the data in these sectors can be reconstructed.
Hidden Files
Data hiding is one of the most challenging aspects of forensic analysis. With special software, it is possible to mark a partition "hidden" such that the operating system will no longer access it. Other hidden areas can be created by setting partition tables to start at head 0, sector 1 of a cylinder, and the first sector of the partition proper – the boot record – to start at head 1, sector 1 of the cylinder. The consequence of this is that there will invariably be a number of unused sectors at the beginning of each partition, between the partition table sector and the boot record sector [8].
In addition to these hidden areas, operating systems also hide files and filenames from users. Files and filenames, especially system files, are purposely hidden from users because we do not want users to access those files from their regular display list. The filenames of system programs are usually hidden because average users do not need to know them, and those who know them do not need to have them listed. When they need to see them, they can always list them.
Every operating system has a way of hiding and displaying hidden files. For example, Linux has a very simple way of "hiding" a file: adding a period to the front of the filename marks the file as "hidden" to Linux. To display Linux hidden files, add the -a flag (display all filenames) to the ls (list) command, as in "ls –a." This displays all files in the current directory, whether hidden or not. Similarly, UNIX does not display any files or directories that begin with the dot (.) character. Such files can be displayed by either the Show Hidden Files option or the -a switch of the ls command.
Because of these cases, it is therefore always prudent to assume that the candidate system has hidden files and data. Hidden data is always a clue for investigators to dig deeper. There are a number of ways to hide data, including encryption, compression, codes, steganography, and the use of invisible, obscure, or misleading names. We will discuss these throughout this chapter.
Slack Space
This is unused space in a disk cluster. Both DOS and Windows file systems use fixed-size clusters. During space allocation, even if the actual data being stored requires less storage than the cluster size, an entire cluster is reserved for the file. Sometimes this leaves large swaths of unused space called slack space. When a file is copied, its slack space is not copied. It is not possible to eliminate all slack space without changing the partition size of the hard disk or without deleting or compressing many small files into a larger one. Short of eliminating these wasted spaces, it is good to have software tools that examine this slack space, find out how big it is, and reveal what is hidden in it.
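One simple, illustrative way to get a feel for how much slack space a file system carries is to compare each file's logical size with the space its clusters would occupy. The sketch below estimates per-file slack across a directory tree; the cluster size and the path used in the example are assumptions.

```python
import os

def estimate_slack(root, cluster_size=4096):
    """Walk a directory tree and estimate the slack space left in the last
    cluster of each file: allocated space minus actual file size."""
    total_slack = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue                      # unreadable entry; skip it
            remainder = size % cluster_size
            slack = 0 if remainder == 0 else cluster_size - remainder
            total_slack += slack
    return total_slack

# Hypothetical usage: report how many bytes of slack a copied volume carries.
# print(estimate_slack("/mnt/evidence_copy"))
```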
If slack space is not examined, there is a risk that it contains remnants of hostile code or hidden confidential files.
Bad Blocks
A bad track is an area of the hard disk that is not reliable for data storage. It is possible to map a number of disk tracks as "bad tracks." These tracks are then put into a bad track table that lists any areas of the hard disk that should not be used. The "bad tracks" listed in the table are then aliased to good tracks. This makes the operating system avoid the areas of the disk that cannot be read or written. An area that has been marked as "bad" by the controller may well be good and could store hidden data. Conversely, a good sector could be used to store incriminating data and then be marked as bad. A lot of data can be hidden this way in the bad sectors by the suspect. Never format a disk before you explore all the bad blocks, because formatting a disk deletes any data that may be on it.
Steganography Utilities
Steganography is the art of hiding information in ways that prevent its detection. An ancient craft, steganography has seen a rebirth with the onset of computer technology: computer-based steganographic techniques embed information in the form of text, binary files, or images by putting a message within a larger one in such a way that others cannot discern the presence or contents of the hidden message. The goal of steganography is to avoid drawing suspicion to the transmission of a hidden message. This is, therefore, a threat to forensic analysts, who must now consider a much broader scope of information for analysis and investigation. Steganalysis uses utilities that discover such covert messages and render them useless.
Password-Cracking Software
This is software that, once planted on a user's disk or having found its way onto the password server, tries to make a cryptosystem untrustworthy or useless by exploiting weak algorithms, incorrect implementation or application of cryptographic algorithms, and human factors.
NTFS Streams
In NTFS (Windows NT File System), a file object is implemented as a series of streams. Streams are an NTFS mechanism allowing the association and linking of new data objects with a file. However, NTFS has an undocumented feature that is referred to by different names, including Alternate Data Streams, Multiple Data Streams (on the Microsoft TechNet CD), Named Data Streams, and Forked Data Streams. Whatever name it is called by, this feature of NTFS is not viewable by ordinary NT tools. That means that data hidden in these streams is not viewable by GUI-based programs such as Windows Explorer. It is, however, easy to write into these streams using Windows Notepad. When this happens, Explorer has no mechanism to enumerate these additional streams, so they remain hidden to the observer. This is a security nightmare because these streams can be exploited by attackers for such things as denial-of-service and virus attacks. In addition, many network users can store data on an NT server that administrators are not aware of and cannot control.
Codes and Compression
Two techniques are combined here. Coding is a technique in which characters of the data are systematically substituted by other characters. This technique can be used by system users to hide vital or malicious data.
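A toy illustration of such coding is a fixed substitution over the alphabet, as in the sketch below; real cases use far less obvious substitutions, and the ROT13-style mapping here is only an example.

```python
import string

# Build a simple substitution table: each letter is replaced by the letter
# 13 positions away; digits and punctuation pass through unchanged.
_shift = 13
_plain = string.ascii_lowercase + string.ascii_uppercase
_coded = (string.ascii_lowercase[_shift:] + string.ascii_lowercase[:_shift] +
          string.ascii_uppercase[_shift:] + string.ascii_uppercase[:_shift])
_table = str.maketrans(_plain, _coded)

def encode(text):
    """Systematically substitute characters of the data with other characters."""
    return text.translate(_table)

# Because this particular mapping is its own inverse, applying encode() twice
# recovers the original text.
# print(encode("meet at the usual place"))          # -> "zrrg ng gur hfhny cynpr"
# print(encode(encode("meet at the usual place")))  # -> original text
```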
Data compression, on the other hand, is a way of reducing the size of a data object such as a file. This technique is also increasingly being used by suspects to hide data. Forensic investigators must find a way to decipher coded or compressed evidence. Uncompressing compressed data can reveal to investigators whether evidence is encrypted or not. To deal with all of these, it is imperative that a forensic investigator acquire tools that can decompress, decrypt, decode, crack passwords, and uncover hidden data. We will survey these tools in Section 14.4.
Forensic analysis is done to positively identify the perpetrator and the method he or she is using or used to commit the act, to determine the network vulnerabilities that allowed the perpetrator to gain access to the system, to conduct a damage assessment of the victimized network, and to preserve the evidence for judicial action, if that is necessary. These objectives, which drive the analysis, are similar in many ways to those set for physical forensics. So computer forensics examiners should and must develop the same level of standards and acceptable practices as those adhered to by physical investigators.
14.2.4.2  Operating System-Based Evidence Analysis
Most forensic analysis tools are developed for particular platforms. Indeed, many forensic investigators prefer to work on specific platforms rather than on others. Let us briefly look at forensic analysis based on the following platforms:
Microsoft-Based File Systems (FAT8, FAT16, FAT32, and VFAT)
Because most computer forensic tools so far are developed for Microsoft file systems, we will start with those. According to Bill Nelson et al., an investigator performing forensic analysis on a Microsoft file system must do the following [5]:
•  Run an anti-virus scan of all files on the forensic workstation before connecting it for disk-to-disk bit-stream imaging.
•  Run an anti-virus scan again after connecting the copied disk-to-disk bit-stream image disk, covering all drives including the copied drive, unless the copied volumes were imaged by EnCase or SaveSet.
•  Examine the copied suspect disk fully, noting all boot files in the root.
•  Recover all deleted files, saving them to a specified secure location.
•  Acquire evidence from the FAT.
•  Process and analyze all recovered evidence.
NTFS File System
Use tools such as DriveSpy to analyze evidence, just as with FAT file systems.
UNIX and Linux File Systems
Although forensic tools for Linux are still few, the recent surge in Linux use has led to the development of new tools, including some freeware such as TCT, TCTUTILs, and TASK. These tools, and most GUI tools, can also analyze Unix; these include EnCase, FTK, and iLook. Because most Unix and Linux systems are used as servers, investigators, according to Nelson et al., must work on a live system. When dealing with live systems, the first task for the investigator is to preserve any data from all system activities that are stored in volatile memory. This saves the state of all running processes, including those running in the background. These activities include the following [5]:
•  Console messages
•  Running processes
•  Network connections
•  System memory
•  Swap space
Macintosh File System
All systems running Mac OS 9.X or later versions can be examined with the same forensic tools used for Unix, Linux, and Windows.
However, for older Mac systems, it is better to use tools like Expert Witness, EnCase, and iLook.
14.3  Network Forensics
In Section 14.2 we gave a definition of computer forensics, with which network forensics contrasts. Unlike computer forensics, which retrieves information from the computer's disks, network forensics in addition retrieves information about which network ports were used to access the network. Dealing with network forensics, therefore, implies taking the problems of computer forensics and multiplying them one hundred times, a thousand times, and sometimes a million times over. Some of the things we do in computer forensics cannot be done in network forensics. For example, it is easy to take an image of a hard drive when we are dealing with one or two computers; however, when you are dealing with a network of five thousand nodes, it is not feasible. There are other differences. Network forensics, as Berghel observed, differs from computer forensics in several areas, although it grew out of it, and its primary objective – to apprehend the criminal – is the same. Several differences separate the two, including the following:
•  Unlike computer forensics, where the investigator and the person being investigated – in many cases the criminal – are on two different levels, with the investigator supposedly at a higher level of knowledge of the system, the network investigator and the adversary are at the same skill level.
•  In many cases, the investigator and the adversary use the same tools: one to cause the incident, the other to investigate it. In fact, many of the network security tools on the market today, including NetScanTools Pro, Traceroute, and Port Probe, used to gain information on network configurations, can be used by both the investigator and the criminal. As Berghel puts it, the difference between them is at the ethics level, not the skill level.
•  While computer forensics, as we have seen in Section 14.2, deals with extraction, preservation, identification, documentation, and analysis, and still follows well-defined procedures springing from law enforcement for acquiring evidence, providing a chain of custody, authenticating it, and interpreting it, network forensics has nothing to investigate unless measures (like packet filters, firewalls, and intrusion detection systems) were in place prior to the incident.
However, even if network forensics does not have a lot to go after, there are established procedures to deal with both intrusive and nonintrusive incidents. For intrusive incidents, an analysis needs to be done.
14.3.1  Intrusion Analysis
Network intrusions can be difficult to detect, let alone analyze. A port scan can take place without quick detection, and, more seriously, a stealthy attack on a crucial system resource may be hidden by a simple, innocent-looking port scan. If an organization overlooks these simple incidents, it may be led into serious security problems. An intrusion analysis is essential to deal with these simple incidents and with more serious ones, like backdoors that can make re-entry easy for an intruder, a program intentionally left behind to capture proprietary data for corporate espionage, or a program lying in wait before launching a denial-of-service attack.
The biggest danger to network security is pretending that an intrusion will never occur.
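As a small illustration of the kind of scripted check an intrusion analyst might run, the sketch below counts how many distinct destination ports each source address touches in a set of connection records and flags likely port scans. The record format and the threshold are assumptions, not features of any particular tool.

```python
from collections import defaultdict

def flag_port_scans(connections, port_threshold=20):
    """Given (source_ip, dest_port) records, flag sources that touch an
    unusually large number of distinct ports -- a crude port-scan indicator."""
    ports_by_source = defaultdict(set)
    for source_ip, dest_port in connections:
        ports_by_source[source_ip].add(dest_port)
    return {ip: sorted(ports)
            for ip, ports in ports_by_source.items()
            if len(ports) >= port_threshold}

# Hypothetical usage with records parsed from a firewall or IDS log:
# events = [("203.0.113.9", p) for p in range(1, 50)] + [("198.51.100.2", 80)]
# print(flag_port_scans(events))   # only 203.0.113.9 is flagged
```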
As we noted in Section 10.3, hackers are always ahead of the game, they \nintentionally leave benign or not easily detectable tools behind on systems that they \nwant to eventually attack. Unless intrusion analysis is used, none of these may be \n" }, { "page_number": 330, "text": "14.3  Network Forensics\b\n317\ndetected. So the purpose of intrusion analysis is to seek answers to the following \nquestions:\nWho gained entry?\n• \nWhere did they go?\n• \nHow did they do it?\n• \nWhat did they do once into the network?\n• \nWhen did it happen?\n• \nWhy the chosen network?\n• \nCan it be prevented in future?\n• \nWhat do we learn from the incident?\n• \nAnswers to these questions help us to learn exactly what happened, determine the \nintruder motives, prepare an appropriate response, and make sure it doesn’t happen \nagain. A thorough intrusion analysis requires a team of knowledgeable people who \nwill analyze all network information to determine the location of evidence data. Such \nevidence data can reside in any one location of the network, including appliances and \nservice files that are fundamental to the running of the network like [11]:\nRouters and firewalls\n• \nFTP and DNS server files\n• \nIntrusion detection systems monitor log files\n• \nSystem log files including Security, System, Remote Access, and Applications\n• \nExchange servers\n• \nServers’ hard drives.\n• \nIntrusion analysis involves gathering and analyzing data from all these network \npoints. It also consists of the following services [4]:\nIncident response plan\n• \nIncident response\n• \nTechnical analysis of intrusion data\n• \nReverse engineering of attacker tools (reverse hacking)\n• \nAll results of the analysis must be redirected to an external safe and secure place.\nOn systems such as Unix and Linux servers, the intrusion investigators must \nexamine system log files to identify accounts used during the penetration. Investi­\ngators must also examine [5]:\nAll running processes\n• \nAll network connections\n• \nAll deleted files\n• \nAll background processes\n• \nFile system\n• \nMemory status\n• \nContents of each swap\n• \nBackup media\n• \nAll files created or modified during the incident.\n• \n" }, { "page_number": 331, "text": "318\b\n14  Computer and Network Forensics\nThese help the investigator to reconstruct the system in order to be able to deter­\nmine what happened.\n14.3.1.1  Incident Response Plan\nThe incident response plan should be based on one of the three philosophies: watch \nand warn; repair and report; and pursue and prosecute. In watch and warn, a moni­\ntoring and reporting system is set up to notify a responsible party when an incident \noccurs. This is usually a simple monitoring and reporting system with no actions \ntaken beyond notifications. Some of these systems have now become real-time \nmonitoring and reporting systems. The repair and report philosophy aims at bring­\ning the system back to normal running as soon as possible. This is achieved through \na quick identification of the intrusion, repairing all identified vulnerability, or block­\ning the attack and quickly reporting the incident to the responsible party. Finally the \npursue and prosecute philosophy involves monitoring for incidents, collection of \nevidence if an attack occurs, and reporting beyond technical staff that involves law \nenforcement and court charges.\nThe response plan should also outline the procedures to be taken and indicate the \ntraining needed. 
Under the procedures, everyone should know what he or she should \ndo. The procedures should also indicate what level of priorities should receive the \ngreatest level of attention. The response plan is important to an investigator because \nif the plan is good and it is followed, it should have documented the circumstances \nthat may have caused the incident and what type of response was immediately taken. \nFor example, were the machines “frozen”? When and by whom? What immedi­\nate information about the attack and the attacker was known, who knew about it, \nand what was done immediately? What procedures were taken to deal with remote \nsystems and connections to public networks? Disconnecting from the network can \nisolate the systems and keep the attackers from entering or sometimes exiting the \nnetwork. However, severing all connections may not only disrupt the services, it \nmay also destroy the evidence. Communication is important and there should be \none designated person to handle all communication, especially to the public. Finally \nresponse plan information also consists of documentation of the activities on the sys­\ntem and networks as well as system configuration information before the incident. \nIt also consists of support information such as a list of contacts and their responses; \ndocumentation on the uses of tools and by whom is also included [12]. Since differ­\nent circumstances require different responses, the investigator needs to know what \nresponse was taken and have all the documentation of whatever was done.\n14.3.1.2  Incident Response\nIncident response is part of the security plan that must be executed whenever an inci­\ndent occurs. Two items are important to an investigator in the incident response. These \nare incident notification and incident containment. In incident notification, what the \n" }, { "page_number": 332, "text": "14.3  Network Forensics\b\n319\ninvestigator wants to know are as follows: Who knew first and what were the first \nresponses? Who was notified in the process and what were the responses? It is com­\nmon that the first person to notice the incident always deals with it. Usually employ­\nees “see” the incident in progress first and inform the “Techs” that the machines are \nrunning “funny” or slow. Incident notification procedures need to be built into the \noperating incident plan. The work of the response team may also be of interest to the \ninvestigator. The response team should get clear and precise information, and it should \nconsist of people with the knowledge and skills needed to handle security incidents. It \nis this expertise that the investigator needs to tap into. Finally, since the reporting pro­\ncedures require management to know immediately, the investigator may be interested \nin that trail of information. Also the response team may have information, preliminary \nat first but may improve later, of the extent of the attack. Usually they know who was \naffected and what actions were taken on their machines and tools. Also note if law \nenforcement agencies were contacted and what type of information was given.\nIncident containment is required to stop the incident if possible, but more so to \nminimize the effects of the incident. Rapid response is critical in today’s automated \nattacks that are able to scan systems, locate vulnerabilities, and penetrate them with \nlightning speed and with limited human intervention. 
Incident containment is impor­\ntant to the investigator because it contains efforts taken to deny access to the system \nand the number of affected systems. The containment plan consists of the following \nresponse items: determination of affected systems, denying the attacker access to \nsystems, elimination of rogue processes, and regaining control [12]. The documenta­\ntion in each of these should provide the investigator with a trove of good information. \nThe investigators should be particularly interested in the plan’s regaining of control \nbecause valuable evidence clues may be lost. To regain control means to bring the \nsystem back to the state it was in before the incident. The first effort in regain­\ning control is to lock out the attacker. This is important because, when discovered, \nthe attacker may try to destroy as much of the evidence as possible. Blocking the \nattacker’s access may be achieved by blocking access at the firewall or a complete \ndisconnection of the system. Actions that follow may include change of passwords, \ndisabling of services, removal of backdoors, if those can be found, and monitoring \nof activities. In addition, if no further legal actions are required, the sanitation of \nthe system may be required. However, if further legal recourse is anticipated, then \nthis may be avoided for some time to allow the investigator to recover the evidence. \nAfter the evidence has been collected, then the rebuilding of the system involving \nthe use of backups, applying security patches, and reloading of data begins. Since \nattacks can originate either from outside or internally, incident containment plans \nmust be handled with care and secrecy in case the suspect is in the house.\n14.3.1.3  Technical Analysis of the Intrusions\nThe most difficult, time consuming, and technically challenging part of network \nforensics is the technical analysis of intrusions and intrusion data. Typically, unlike \ncomputer forensics where most of the evidence may reside on the victim machine, \n" }, { "page_number": 333, "text": "320\b\n14  Computer and Network Forensics\nin network forensics evidence does not reside on one hard drive or one machine, it \nmay require to search many disks and many network computers. As we pointed out \nearlier, the investigator must have almost the same skills as the suspect and many \ntimes may use the same tools. In any case, as we discussed in Section 14.3.1, in any \nsuspected incident occurring in a network environment, we may need to analyze the \nfollowing network information to determine the location of pertinent information.\nOne of the most important and crucial source of logs on the Internet is the ISP. \nSince ISPs deal with lots of dial-up customers, each customer dialing in must be \nauthenticated before a call is dynamically assigned an IP address by the Dynamic \nHost Configuration Protocol (DHCP) server. This IP address is associated with \na DNS, thus allowing reverse lookup. The authentication is done by the Remote \nAuthentication Dial-In User Service (RADIUS). However, RADIUS does not only \nauthenticate calls, it also maintains records that can be used to track down a suspect \n[7]. RADIUS information includes IP address assigned, connection time, telephone \nnumber used from a caller ID, and login name. ISPs maintain these logs for some \ntime, sometimes up to a year, before purging them. However, investigators should \nnot take this information as always valid. It can and it has been changed before. 
But, as Kruse points out, the value of ISP information is to have the telephone number, date, and time of the incident. This can then be followed up with a subpoena.
Other good sources of investigator information are e-mail and news postings. Both of these services offer good tracking attributes, such as:
•  A store-and-forward architecture that moves messages of printable characters from network node to network node in a next-hop framework.
•  Human-readable message headers that contain the path between sender and receiver.
This information is useful to an investigator. For example, all e-mail servers have the ability to maintain logging information. Let us look at this more closely. E-mail programs, called clients, are based on application-level protocols. There are several of these protocols, including Post Office Protocol (POP), Internet Mail Access Protocol (IMAP), Microsoft's Mail API (MAPI), and HTTP for Web-based mail. All outgoing e-mail uses a different protocol called Simple Mail Transfer Protocol (SMTP). Unlike the incoming protocols above, which are used to receive e-mail, the outgoing protocol SMTP does not require authentication. The SMTP component at the client sends e-mail messages to the SMTP component at the mail server or at the ISP, which then relays the messages to their destinations without any authentication. However, to give such e-mail some degree of trust, authentication protocols such as PGP or S/MIME (Secure Multipurpose Internet Mail Extensions) are used on top of SMTP. SMTP servers, however, maintain logging information, which is more reliable than mail headers and may be useful to an investigator.
Another good source of information for forensic investigators is Usenet, a huge distributed news bulletin board consisting of thousands of neatly arranged news topics. Throughout the news network are thousands of news servers running the Network News Transfer Protocol (NNTP). The header of each news message contains a path that forms the crux of the investigation: one can trace every NNTP host that the message has traversed, in reverse chronological order. Also, like mail servers, NNTP servers may or may not accept postings from nonmembers.
Finally, an enormous amount of data can be obtained from monitoring systems like firewalls, intrusion detection systems, and operating system logs.
14.3.1.4  Reverse Hacking
Reverse engineering, commonly known as reverse hacking, is literally taking an offending package, breaking it up, and using it to try to trace the source of the attack. Anti-virus writers have long used the technique by capturing the virus signature, usually a traffic package, breaking it up, and studying the patterns, which then lead to an anti-virus.
14.3.2  Damage Assessment
It has so far been difficult to effectively assess damage caused by system attacks. For the investigator, if a damage assessment report is available, it can provide a trove of badly needed information. It shows how widespread the damage was, who was affected, and to what extent. Further, it shows what data, systems, services, and privileges were compromised. It is also from this report that the length of the incident can be established, along with the causes, the vulnerability exploited, the safeguards bypassed, and how detection was avoided. From this report, one is also able to determine whether the attack was manual or automated.
If the source of the attack is indicated in the report, then one can use it to trace network connections, which may lead the investigation in other directions.
To produce a detailed report of an intrusion, the investigator must carry out a post mortem of the system by analyzing and examining the following [5]:
•  The system registry, memory, and caches. To capture these, the investigator can use dd on Linux and Unix systems.
•  The network state, to assess network accesses and connections. Here netstat can be used.
•  Currently running processes, to assess the active processes. Use ps on both Unix and Linux.
•  Acquisition of all unencrypted data. This can be done using MD5 and SHA-1 on all files and directories; the resulting data should then be stored in a secure place.
14.4  Forensics Tools
Let us end this chapter by looking at the tools of the trade for forensic investigators. Like hunters, forensic investigators rely on their tools; they succeed or fail based on them. Because of this, it is important that investigators make sure that their tools are not only trusted but also that they work, before they start the job.
Disk wiping cleans everything off a disk.\nData Integrity\n• \nRecovery/search – be able to thumb through tons of data looking for that one clue\n• \nForensic software tools are also categorized based on whether they are com­\nmand-line or GUI.\nCommand-Line Forensics Software Tools\nThese tools are popular and have a wide acceptance in industry mainly because \nof ­legacy. The first small computers were mainly PC which were mainly based on \nDOS. When computer crimes started hitting the headlines, most of them were being \n" }, { "page_number": 336, "text": "14.4  Forensics Tools\b\n323\n­perpetuated on PCs, most of them running DOS. So no wonder the first forensic \ntools were based on DOS and were command-line based. Among the rich collection \nby Nelson et al. [5] are the following shown in Tables 14.2, 14.3, 14.4, 14.5, 14.6, \n14.7, and 14.8.\nTable 14.2  Forensic Tools by New Technologies, Inc. (www.forensics-intl.com)\nTool\t\nFunction\nCopyQM\t\nDisk copying\nCRCMD5\t\nCalculates CRC-32, MD5 hash\nDiskSearch\t\nOverwrites hard drive\nDiskSearch32\t\nKeyword search on MS-FAT12, FAT16, FAT32\nDiskSig\t\nCRC-32 and MD5 for entire disk signature\nDiskSearch Pro\t\nKeyword search for MS_FAT and NTFS file Systems\nFileList\t\nCreates datafiles with compressed outputs\nFileCNUT\t\nConverts FileList into dBaseIII files\nFilter-I\t\nFilters nonprintable characters from mixed data file\nGetFree\t\nExtracts unallocated free space in MS-FAT file system\nGraphic Image\t\nLocates/deletes graphic image from slack/freespace, \nFile Extractor\t\n  reconstructs: BMP, GIF, JPG formats\u0007\nNet Threat Analyser\t\n\u0007Extracts data such as e-mail addresses/URL from disk. Similar to \n  DiskSeach Pro\nM-Sweep Pro\t\nErases individual files from disk on MS-FAT and NTFS file systems\nSafeBack\t\nDisk drive bit-stream imaging/sector-by-sector copy\nTextExtract Plus\t\nOriginal keyword search\nTable 14.3  Forensic tools by DataLifter (www.datalifter.com)\nTool\t\nFunction\nDs2dump\t\n\u0007Collects data from slack and free space and copies all file slack and \n  unallocated space from a FAT system\nTable 14.4  Forensic tools by Digital Intelligence (www.digitalintel.com)\nTool\t\nFunction\nDriveSpy\t\n\u0007Tools that provide forensic analysis from all MS-FAT12, FAT16, \n  and FAT32 file systems. Does not analyze NTFS, Unix/Linux file \n  systems. Copies forensic activities to a textfile and later integrates \n  it into a report.\nPDBlock\t\n\u0007Write-blocker disables write capabilities of Interrupt 13 in the BIOS \n  of Intel PC when system tries to write on disk.\nPDWipe\t\nDeletes all data from disk/wipe disk\nImage\t\n\u0007Creates compressed and uncompressed datafile of a floppy disk \n  images\nPart\t\n\u0007Creates multiple MS operating systems installed on forensic \n  ­workstation\nTable 14.5  Tools by Columbia Data Products (www.cdp.com)\nTool\t\nFunction\nSnap Back DatArrest\t\n\u0007Bit-stream imaging from a copy or a network connection to a remote \n  server.\nSnapCopy\t\nDuplicates image copy disk-to-disk copy.\n" }, { "page_number": 337, "text": "324\b\n14  Computer and Network Forensics\n GUI-Based Forensic Software Tools\nBecause of the popularity of both Windows and GUI applications, forensic tools \nhave also increasingly become GUI-based. GUI-based forensic software tools are \neasier to use than their counterparts, the command-line based. 
Tables 14.9, 14.10, \n14.11, and 14.12 explore some of these tools.\nTable 14.6  Forensic tools by Tools That Work (www.toolsthatwork.com)\nTool\t\nFunction\nByte Back\t\n\u0007Clones and images sectors of a disk, recovers files automatically \n  on FAT/NTFS file systems, rebuilds partitions and bootrecord on \n  FAT/NTSF file systems, wipes, edits, and scans a disk. Runs on\n  MS-DOS 5.0.\nTable 14.7  Forensic tools by Danny Mares (www.dmares.com)\nTool\t\nFunction\nMaresWare\t\n\u0007Several tools for DOS/UNIX: wipes, catalogs programs, locks boot \n  programs, images floppy disk, hashes programs, Hex editor, hash \n  compare, deletes files and directories, keyword search for sector \n  and programs\nTable 14.8  Forensic Tools by DIBS USA, Inc. (www.dibusa.com)\nTool\t\nFunction\nDIBS Mycroft\t\nDOS-based. Searches and locks disks.\nTable 14.9  Forensic tools by Access Data (www.accessdata.com)\nTool\t\nFunction\nPassword Recovery\nToolkit (PRTK)\t\n\u0007Interprets or hashes passwords in products such as Office 2000, \n  WinZip. New version has encryption capabilities for Windows XP, \n  Internet Explorer, and Netscape Navigator.\nPRTK (DNA)\t\nCracks passwords of networked stations\nForensic Toolkit (FTK)\t\n\u0007Has several functions: text indexing from searching, data ­recovery \n  from (NTFS, FAT, NTFS compressed Linux Ext2fs and Ext3fs \n  formats, e-mail recovery, data extraction from (PKZIP, WinZip, \n  GZip, TAR) archives, file filtering.\nTable 14.10  Forensic Tools by Guidance Software (www.encase.com)\nTool\t\nFunction\nEnCase\t\n\u0007Very popular tool that: extracts messages in MS-PST files, spans \n  multiple Redundancy Array of Inexpensive Disk (RAID) volumes. \n  Does NTFS compression and ACL of files.\n" }, { "page_number": 338, "text": "14.4  Forensics Tools\b\n325\nTable 14.11  Forensic tools by Ontrack (www.ontrack.com)\nTool\t\nFunction\nCaptureIT\t\n\u0007Creates/recovers images from a bad disk resulting from a head crash \n  while running from a boot floppy creating up to 600 MB of image \n  volumes, runs mechanical diagnostic tests on selected disks. Does \n  not compress saved images.\nFacTracker\t\n\u0007Analyzes data from CaptureIT and it restores deleted files, runs a \n  keyword search, identifies file signatures from altered files, and \n  generates findings report.\nTable 14.12  Forensic tools by Several Manufacturers\nTool\nManufacturer\nFunction\nRecover NT\nFile Recovery\nPhoto Recovery\nLC Technologies Software\n(www.lc-tech.com)\nAll three for data recovery \n­(undeletes). ­Running in Micro­\nsoft 9X, Me, NT, 2000, XP. FAT \nand NTFS file systems. 
Photo \nrecovery from many media done \nfrom digital images.\nWinHex\nSf-soft\n(www.sf.soft.de/winhex)\nInspects and repairs data files on a \ndisk, disk cloning, disk sector \nimaging with/out compression \nand encryption, keyword search.\nDIBS Analyzer\nProfessional \n­Forensic Software\nDIBS USA\n(www.dibsusa.com)\nAnalysis for satellite modules (spe­\ncific tasks for analysis) including \ncore modules FAT16 and FAT32\nPro Discover DFT\nTechnology Pathways\n(www.techpathways.com)\nSeveral services including: ­imaging \nof disk files, read images from \nUnix/Linux dd, access suspect \ndisks through write-block, \ndisplays other data streams from \nNT and Windows 2000 NTFS \nfile systems.\nData Lifter\nCollection of tools for file ­extractor, \ndisk cataloging of files, image \nlinker, e-mail and Internet history \nretriever for Internet Explorer \nand Netscape ­Navigator, Recycle \nBin ­history reviewer, screen cap­\nture ­function, and file slack and \nfree space acquisition tool.\nExpert Witness\nASRData\nData recovery on Machantosi using \nHFS and HFS + file system, all \nMicrosoft FAT file systems, \ngenerate reports, export data \n­findings to Excel\nSmart\nASRData\nData recovery for Linux, BeOS, \nanalyze data on all Microsoft FAT, \nNTFS, Linux’s Extefs and Ext3fs, \nHFS, and Reiser.\n" }, { "page_number": 339, "text": "326\b\n14  Computer and Network Forensics\nTable 14.13  Forensic Products by LC Technologies (www.lc-tech.com)\nTool\nFunction\nDRAC 2000 Workstation Has two high capacity disk drives one for booting and the other for \nevidence data acquisition. Also includes removable IDE disk.\nFirewire Peripherals\n+ (Read-only)-IDE bays\n+ Drive Image Stations\n+ Firewire\n− Hot swap write-blocker\n− Two IDE bays Hot-swap ­write-blocker\n− Assorted controller cards/Firewire internal ­interface devices/\nfirewire blockers\n14.4.1.2  Hardware–Based Forensics Tools\nAlthough most forensic tools are software-based, there is an ample supply of hard­\nware-based forensic tools. Hardware tools are based on a workstation that can be \nstationary, portable, or lightweight. Lightweight workstations are based on laptops. \nThe choice of the type of workstation an investigator uses is determined by the \nnature of the investigation and the environment of the incident location. There are \nfully configured turn-key workstations that can be bought or the investigator can \nbuild his or her own. Hardware-based tools also include write-blockers that allow \ninvestigators to remove and reconnect a disk drive on a system without having to \nshut the system down. These tools, many shown in Tables 14.13 and 14.14, connect \nto the computer using Firewire, USB or SCSI controllers.\n14.4.2  Network Forensics Tools\nLike in computer forensics, after collecting information as evidence, the next big \ndecision is the analysis tools that are needed to analyze. This job is a lot easier if the \nsystem you are investigating was built up by you. Depending on the platform, you \ncan start with tcpdump and the strings command. TCPdump will display individual \npackets or filter a few packets out of a large data set, and the string command gives \na transcript of the information that passed over the network. Similarly Snort allows \nthe investigator to define particular conditions that generate alarms or traps.\nHowever, the job is not so easy if the investigator does not have any knowledge \nof the system. In this case, he or she is likely to depend on commercial tools. 
The \nforensic investigator’s ability to analyze will always be limited by the capabili­\nties of the system. Most commercial forensics tools perform continuous network \nmonitoring based on observed data from internal and external sources. Monitoring \nexamines the flow of packets into and out of every port in the network. With this \nblanket monitoring, it is possible to learn a lot about individual users and what they \nare doing and with whom. While analysis of individual traffic flows is essential to \na complete understanding of network usage, with real-time monitoring on the way, \nnetwork monitoring is going to require significant amounts of resources.\nOne of the benefits of monitoring is the early warning intelligence-gathering \ntechnique sometimes called recon probes. A standard forensic tool such as ­TCPdump \ncan provide the investigator with these probes. The probes can also come from other \n" }, { "page_number": 340, "text": "14.4  Forensics Toolsd\b\n327\nTable 14.14  Forensic hardware products by Several Manufacturers\nProduct\nManufacturer\nFunction\nBRAProtect\nBIA Protect\n(www.biaprotect.com)\nRecover data from RAID \n­computers, connects via \nUSB and firewire ports. \n­Portable with preloaded \n­software.\nTower, Portable workhorse, \nSteel tower, and Air-File\nForensic Computers\n(www.forensic-computer.com)\nMany forensic functions\nWorkstation, movable \n­workstation, and Rapid \nAction Imaging Device \n(RAID)\nDIBS USA\n(www.dibsusa.com)\nMany forensic functions\nForensic Recovery Evidence \nDevice (FRED) (tower, \nFREDDLE, FRED Sr, \nFREDc), FireChief for \nlaptops\nDigital Intelligence\n(www.digitalintel.com)\nMany forensic functions\nImage Master Solo\nImage Master Solo\n(www.ccs-iq.com)\nDisk duplicating systems\nImageMaster Solo-2\nImage Master Solo\nSmall duplicating device for \ndisks, generates signatures, \nCD back up.\nEnCase SCSI-based\nGuidance Software\n(www.encase.com)\nWrite-blockers that is \n­hot-swappable for data \n­acquisition.\nSeveral products :\n+ AEC7720UW\n+ AEC7720WP\nAcard\nInterface cards that allow \nconnection to IDE disks \n(CDROMS-to-SCISI, \n­SCISI-to-IDE), write­\nblockers.\nNoWriter\nTechnology Pathways\n(www.techpathways.com)\nWrite-blocker and \n­hot-swapper, connects to \nUSB and Firewire, IDE. \nIdentifies any protected \narea on a suspect disk. \nUsed Windows, DOS, \nLinux, Unix.\nDriveDock\nWeibeTech\nExternal Firewire IDE, \n­write-blocker.\nnetwork monitoring tools such as firewalls, and host-based and network-based intru­\nsion detection systems.\nExercises\n  1.\t In your opinion, is computer forensics a viable tool in the fight against the \ncyber crime epidemic?\n  2.\t Discuss the difficulties faced by cyber crime investigators.\n  3.\t Differentiate between computer and network forensics.\n" }, { "page_number": 341, "text": "328\b\n14  Computer and Network Forensics\n  4.\t Discuss the limitations of computer forensics in the fight against cyber \ncrimes.\n  5.\t Many of the difficulties of collecting digital evidence stem from its ability to \ndry up so fast, and the inability of investigators to move fast enough before the \nevidence disappears. Suggest ways investigators might use to solve this prob­\nlem.\n  6.\t Handling forensic evidence in cyber crime situations must be done very care­\nfully. Discuss the many pitfalls that an investigator must be aware of.\n  7.\t One of the methods used in extracting computer forensics evidence is to freeze \nthe computer. 
While this is considered a good approach by many people, there are those who think it is shoddy work. Discuss the merits and demerits of computer "freezing."
  8.	It is so much easier to extract evidence from a computer than from a network. Discuss the difficulties faced by investigators collecting evidence from a network.
  9.	Encryption can be used both ways: by the criminals to safeguard their data and by the investigators to safeguard their findings. Discuss the difficulties investigators face when dealing with encrypted evidence.
10.	Discuss the many ways cyber criminals and other computer and network users may use to frustrate investigators.
Advanced Exercises
1.	Hal Berghel meticulously distinguishes between computer forensics and network forensics by giving examples of the so-called "dual usage" network security tools. Study four such tools and demonstrate their "dual usage."
2.	Discuss, by giving extensive examples, the claim put forward by Berghel that computer forensics investigators and network forensics investigators have similar levels of skills.
3.	It has been stated on many occasions that "reverse hacking" is a good policy for network security. Define "reverse hacking" and discuss the stated opinion.
4.	Study the new techniques of digital reconstruction and show how these new techniques are improving the fortunes of both computer and network forensics.
5.	Discuss the future of both computer and network forensics in view of the observation that network forensics is but a small science soon to be forgotten.
References
1.	Rubin, R. "More Distancing and the Use of Information: The Seven Temptations." In Kizza, J. M. Social and Ethical Effects of the Computer Revolution. McFarland & Company, Jefferson, NC, 1996.
2.	CERT/CC Statistics 1988–2003. http://www.cert.org/stats/cert_stats.html
3.	InterGov International. International Web Police. http://www.intergov.org/public_information/general_information/latest_web_stats.html
4.	"Intrusion Analysis." http://www.crucialsecurity.com/intrusionanalysis.html
5.	Nelson, B., Amelia P., Frank E., and Chris S. Guide to Computer Forensics and Investigations. Course Technology, Boston, MA, 2004.
6.	Berghel, H. "The Discipline of Internet Forensics." Communications of the ACM, 46(8), August 2003.
7.	Kruse II, W. and Jay, G. H. Computer Forensics: Incident Response Essentials. Addison-Wesley, Reading, MA, 2002.
8.	Sammes, T. and Brian, J. Forensic Computing: A Practitioner's Guide. Springer, London, 2000.
9.	"Symantec Knowledge Base." http://service2.symantec.com/SUPPORT/ghost.nsf/
10.	Bender, W., Gruhl, D., Morimoto, N., and Lu, A. "Techniques for Data Hiding." IBM Systems Journal, 35(3&4), 1996.
11.	TekTron. "Computer Forensics." http://www.tektronsolutions.com/computerforensics.htm
12.	Pipkin, D. L. Information Security: Protecting the Global Enterprise. Prentice Hall PTR, Upper Saddle River, NJ, 2000.
Chapter 15
Virus and Content Filtering
15.1  Definitions
As the size of global computer networks expands and the use of the Internet skyrockets, security issues manifest themselves not only in the security of computer networks but also in the security of individual users on individual PCs connected to the Internet, whether via an organization's gateway or an Internet Service Provider (ISP). The security of every user, therefore, is paramount, whether the user is a member of an organization's network or a home PC user on an independent ISP. In either case, the effort is focused on protecting not only the data but also the user.
The most effective way to protect such a user and the data is through content filtering. Content filtering is a process of removing unwanted, objectionable, and harmful content before it enters the user network or the user PC. The filtering process can be located in several places, including on a user's PC, on a server within an organization, as a service provided by an ISP, or by means of a third-party site that provides the basis of a closed community.
In their report to the Australian Government on content filtering, Paul Greenfield et al. [1] divide the process of content filtering into two approaches: inclusion filtering and exclusion filtering.
15.2  Scanning, Filtering, and Blocking
Scanning is a systematic process of sweeping through a collection of data looking for a specific pattern. In a network environment, the scanning process may involve a program that sweeps through thousands of IP addresses looking for a particular IP address string, a string that represents a vulnerability, or a string that represents a vulnerable port number. Filtering, on the other hand, is a process of using a computer program to stop an Internet browser on a computer from being able to load certain Web pages based upon predetermined criteria such as IP addresses. Blocking, like filtering, is a process of preventing certain types of information from being viewed on a computer's screen or stored on a computer's disk. In this section, we are going to look at these three processes and see how they are used in computer networks and personal computers as a way to enhance security.
15.2.1  Content Scanning
All Internet content inbound into and outbound from an organization's network, an ISP gateway, or a user PC is always scanned before it is filtered. So scanning is very important in content filtering. Let us look at the ways scanning is done on Internet content, either inbound or outbound. There are two forms of scanning: pattern-based and heuristic scanning.
15.2.1.1  Pattern-Based Scanning
In pattern-based scanning, all content coming into or leaving the network, an ISP gateway, or a user PC is scanned and checked against a list of patterns, or definitions, supplied and kept up to date by the vendor. The technique involves simply comparing the contents, which can be done in several ways, as we saw in Section 11.2.1. Nearly all anti-virus software packages work this way.
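A minimal sketch of the idea, with invented byte signatures standing in for a vendor's definition list, might look like the following:

```python
# Illustrative only: the signatures below are invented stand-ins for the
# pattern definitions a vendor would supply and keep up to date.
SIGNATURES = {
    b"BADCODE-DROPPER-v2": "example dropper signature",
    b"\xde\xad\xbe\xef\x13\x37": "example malicious byte pattern",
}

def scan_content(data):
    """Return the names of all known patterns found in a block of content."""
    return [name for pattern, name in SIGNATURES.items() if pattern in data]

def scan_file(path):
    """Read a file as raw bytes and scan it against the pattern list."""
    with open(path, "rb") as f:
        return scan_content(f.read())

# Hypothetical usage on an inbound attachment:
# print(scan_file("incoming/attachment.bin"))
```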
This approach can, however, be slow and resource intensive.
15.2.1.2  Heuristic Scanning
Heuristic scanning is done by looking at a section of code, determining what it is doing, and then deciding whether the behavior exhibited by the code is unwanted, harmful (like a virus), or otherwise malicious. This approach to scanning is complex because it involves modeling the behavior of code and comparing that abstract model to a rule set. The rule set is kept in a rule database on the machine, and the database is updated by the vendor. Because of the checking and cross-checking involved, this approach takes more time and is at least as resource intensive as the previous one. Theoretically, heuristic scanning has several advantages over pattern-based scanning, including better efficiency and accuracy; it can, potentially, detect viruses that haven't been written yet.
15.2.2  Inclusion Filtering
Inclusion filtering is based on the existence of an inclusion list. The inclusion list is a permitted-access list – a "white list" – probably vetted and compiled by a third party. Anything on this list is allowable. The list could be a list of URLs for allowable Web sites, for example; it could be a list of allowable words; or it could be a list of allowable packet signatures for allowable packets. The nature of the list is determined by the security policy of the organization or a committee of a community. As Greenfield noted, this type of filtering can be 100% effective – assuming the person or organization that has compiled the white list shares the same set of values as the Internet user.
But the inclusion list approach, despite its effectiveness, has several drawbacks, including the following:
•  The difficulty of coming up with a globally accepted set of criteria. This is a direct result of the nature of the Internet as a mosaic of a multitude of differing cultures, religions, and political affiliations. Given this, it is almost impossible to come up with a truly accepted global set of moral guidelines.
•  The size of the inclusion list. As more and more acceptable items become available and qualify to be added to the list, there is a potential for the list to grow out of control.
•  The difficulty of finding a central authority to manage the list. In fact, this is one of the most difficult aspects of the inclusion list approach to content filtering. For example, even though we have been suffering from virus attacks for years, there is no one authoritative list, managed by a central authority, that contains all the virus signatures ever produced. There are currently highly inclusive lists managed either by private anti-virus companies or by publicly supported reporting agencies such as the Computer Emergency Response Team (CERT) Center.
15.2.3  Exclusion Filtering
Another approach to content filtering is the use of an exclusion list. This is the opposite of the inclusion list process discussed previously. An exclusion list is actually a "black list" of all unwanted, objectionable, and harmful content. The list may contain URLs of sites, words, signatures of packets, and patterns of words and phrases. This is a more common form of filtering than inclusion filtering because it deals with manageable lists.
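The contrast between the two approaches can be seen in a few lines of code. The sketch below checks a requested Web site's host against either a white list or a black list; both lists are invented examples, not recommended policy.

```python
from urllib.parse import urlparse

# Invented example lists; real lists would come from policy or a third party.
WHITE_LIST = {"www.example-school.edu", "docs.example.org"}
BLACK_LIST = {"malware.example.net", "phish.example.com"}

def allowed_by_inclusion(url):
    """Inclusion filtering: only hosts on the white list are allowed."""
    return urlparse(url).hostname in WHITE_LIST

def allowed_by_exclusion(url):
    """Exclusion filtering: everything is allowed except hosts on the black list."""
    return urlparse(url).hostname not in BLACK_LIST

# print(allowed_by_inclusion("http://docs.example.org/page"))   # True
# print(allowed_by_exclusion("http://malware.example.net/x"))   # False
```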
The exclusion approach also does not presume that everything is bad until proven otherwise. However, it suffers from a list that may lack constant updates and a list that is not comprehensive enough. In fact, we see these weaknesses in the virus area. No one will ever have a fully exhaustive list of all known virus signatures, and anti-virus companies are constantly updating their master lists of virus signatures.

15.2.4  Other Types of Content Filtering

In the previous two sections, we have discussed the two approaches to content filtering. In each of these approaches, a list is produced. The list could be made up of URLs, words (keywords), phrases, packet signatures, profiles, image analysis results, and several other things. Let us now look at the details of content filtering based on these items [1].

15.2.4.1  URL Filtering

With this approach, content into or out of a network is filtered based on the URL. It is the most popular form of content filtering, especially in terms of denial of access to the targeted site. One of the advantages of URL filtering is its ability to discriminate and carefully block a chosen site while leaving the machine that hosts it functioning and therefore able to provide other services to the network or PC.

Because of the low-level detail and fine tuning involved in URL filtering, many details of the setup and format of the target are needed in order to provide the required degree of effectiveness. In addition, because of the low-level details needed, when there are changes in the files at the URL, these changes must be correspondingly reflected in the filter.

15.2.4.2  Keyword Filtering

Keyword filtering requires that all inbound or outbound content be scanned, and every syntactically correct word scanned is compared with words on either the inclusive (white) list or the exclusive (black) list, depending on the filtering regime used. Although it is the oldest form and probably still popular, it suffers from several drawbacks, including the following:

•	It is text-based, which means that it fails to check all other forms of data – images, for example.
•	It is syntactically based, meaning that it will block words with prefixes or suffixes that syntactically look like the forbidden words, ignoring the semantics of the surrounding text.

15.2.4.3  Packet Filtering

As we discussed in Chapter 1, network traffic moves between network nodes based on the packet, as an addressable unit, with two IP addresses: the source address and the destination address. Throughout this book we have discussed the different ways these addresses are used in transporting data. As we saw in Chapter 11, content is blocked based on these IP addresses. Under this approach, if content is blocked or denied access based on IP addresses, no content can come from or go to the machine whose address is in the block rules. This kind of blocking is indiscriminate because it blocks a machine based on its addresses, not its content, which means that a machine may offer other good services but they are all blocked.
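A block of this kind reduces to a simple membership test on the two addresses in the packet header. The fragment below is a sketch only – the addresses are invented, and a real filter would normally live in a router access control list or firewall rule set rather than in application code:

# A minimal sketch of indiscriminate IP-address blocking.
# The black-listed addresses are invented examples.
BLOCKED_ADDRESSES = {"203.0.113.7", "198.51.100.22"}

def permit(packet):
    """Drop the packet if either end point is on the black list."""
    return (packet["src"] not in BLOCKED_ADDRESSES and
            packet["dst"] not in BLOCKED_ADDRESSES)

print(permit({"src": "192.0.2.10", "dst": "203.0.113.7"}))   # False - destination blocked
print(permit({"src": "192.0.2.10", "dst": "192.0.2.99"}))    # True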
As we discussed in Section 11.2, packet filtering can also be done based on \nother contents of a packet such as port numbers and sequence numbers.\n15.2.4.4  Profile Filtering\nThe use of artificial intelligence in content filtering is resulting into a new brand of \ncontent filters based on the characteristics of the text “seen” so far and the learning \ncycles “repeats” done to discriminate all further text from this source. However, \nbecause of the complexity of the process and the time involved and needed for the \nfilters to “learn,” this method, so far, has not gained popularity. In the pre-process­\ning phase, it needs to fetch some parts of the document and scan it – either text \nbased or content-based, in order to “learn.” This may take time.\n15.2.4.5  Image Analysis Filtering\nEver since the debut of the World Wide Web with its multimedia content, Inter­\nnet traffic in other formats different from text has been increasing. Audio and \nvideo contents are increasing daily. To accommodate these other formats and \nbe able to filter based on them, new approaches had to be found. Among these \napproaches is the one based on analyzed images. Although new, this approach \nis already facing problems of pre-loading images for analysis, high bandwidth \nmaking it extremely slow, and syntactic filtering making it indiscriminate \nsemantically.\n15.2.5  Location of Content Filters\nAt the beginning of the chapter, we stated that there are four best locations to install \ncontent filters. These four locations include, first and foremost, the user’s PC, at the \nISP as the main gateway to and from the Internet to the user PC, at the organization \nserver, and finally by the third party machine. Let us briefly look at each one of \nthese locations.\n15.2.5.1  Filtering on the End User’s computer\nAt this location, the user is the master of his or her destiny. Using software installed \non the user machine, the user can set blocking rules and blocking lists that are \nexpressive of his or her likes and dislikes. Because this location makes the user the \nfocus of the filtering, the user is also responsible for updating the blocking rules \nand lists. In addition, the user is responsible for providing the security needed to \nsafeguard the blocking rules and lists from unauthorized modifications.\n15.2  Scanning, Filtering, and Blocking\b\n335\n" }, { "page_number": 348, "text": "336\b\n15  Virus and Content Filtering\n15.2.5.2  Filtering at the ISP’s Computer\nUnlike filtering at the user PC, filtering at the ISP removes the responsibility of \nmanaging the filtering rules from the user and lists and places it with the ISP. It also \nenhances the security of these items from unauthorized local changes. However, \nit removes a great deal of local control and the ability to affect minute details that \nexpress the user’s needs.\nBecause this is a centralized filtering, it has several advantages over the others. \nFirst, it offers more security because the ISP can make more resources available than \nthe user would. Second, the ISP can dedicate complete machines – called proxy serv­\ners, to do the filtering, thus freeing other machines and making the process faster. \nFinally, the ISP can have more detailed lists and databases of these lists than a user.\nIn Section 11.2.2, we discussed the use of proxy servers and filters as firewalls. \nSo we have a basic understanding of the working of proxy servers. 
A proxy server is installed in such a way that all traffic to and from the ISP must go through it in order to access the Internet. A proxy filter can be configured to block a selected service.

15.2.5.3  Filtering by an Organization Server

To serve the interests of an organization, content filtering can also be done at a dedicated server within the organization. Just as at the ISP, the organization's system administrator can dedicate a server to filtering content into and out of the organization. All inbound and outbound traffic must go through the filters. Like ISP filtering, this is centralized filtering, and it offers a high degree of security because the filtering rules and lists are centrally controlled.

15.2.5.4  Filtering by a Third Party

For organizations and individuals that are unable to do their own filtering, the third party approach offers a good, secure alternative. Both inbound and outbound traffic on the user and organization gateways are channeled through the third party's filters. The third party may use proxy servers, like the ISPs, or just dedicated servers, like organization servers. Third-party filters offer a high degree of security and a variety of filtering options.

15.3  Virus Filtering

Our discussion of viruses started in Chapter 3, where we introduced viruses as a threat to system security. We discussed the big virus incidents that have hit the Internet, causing huge losses. In Section 5.3.5, we looked at viruses as hackers' tools. Although we did not specifically define the virus, we discussed several types of viruses and worms that hackers use to attack systems. Now we are ready to define a computer virus on our way to filtering it.

15.3.1  Viruses

A computer virus is a self-propagating computer program designed to alter or destroy a computer system resource. The term virus is derived from the Latin word virus, which means poison. For generations, even before the birth of modern medicine, the term had remained mostly in medical circles, meaning a foreign agent injecting itself into a living body, feeding on it to grow and multiply. As it reproduces itself in the new environment, it spreads throughout the victim's body slowly, disabling the body's natural resistance to foreign objects, weakening the body's ability to perform needed life functions, and eventually causing serious, sometimes fatal, effects to the body.

Computer viruses parallel natural viruses. However, instead of using a living body, they use software (executable code) to attach themselves, grow, reproduce, and spread in the new environment. Executing the surrogate program starts them off, and they spread in the new environment, attacking major system resources that sometimes include the surrogate software itself, data, and sometimes hardware, weakening the capacity of these resources to perform the needed functions and eventually bringing the system down.

The word virus was first assigned a nonbiological meaning in the 1972 science fiction stories about the G.O.D. machine that were compiled into the book When HARLIE Was One by David Gerrold (Ballantine Books, First Edition, New York, NY, 1972). Later association of the term with a real-world computer program was made by Fred Cohen, then a graduate student at the University of Southern California.
Cohen wrote five programs, actually viruses, to run on a VAX 11/750 running Unix, not to alter or destroy any computer resources but for a class demonstration. During the demonstration, each virus obtained full control of the system within an hour [2]. From that simple and harmless beginning, computer viruses have been on the rise. Computer viruses are so far the most prevalent, most devastating, and most widely used form of computer system attack, and of all types of system attacks, they are the fastest growing. As we reported in Chapter 2, Symantec reports that on average there are between 400 and 500 new viruses per month [3]. The virus is, so far, the most popular form of computer system attack because of the following factors:

•	Ease of generation. Compared with all other types of system attacks, viruses are the easiest to generate because the majority of them are generated from computer code. The writing of computer code has been getting easier every passing day because, first, programming languages are becoming easier to learn and to develop programs with; second, there is more ready-made virus code floating around on the Internet; and finally, there is plenty of help for would-be virus developers in terms of material and physical support. Material support in the form of how-to manuals and turn-key virus programs is readily available free on the Internet.
•	Scope of reach. Because of the high degree of interconnection of global computers, the speed at which viruses spread is getting faster and faster. The speed at which the "Code Red" worm swept across Asia, Europe, and North America attests to this. Within a few days of release, Code Red had the global networks under its grip.
•	Self-propagating nature of viruses. The new viruses are far more dangerous than their counterparts of several years ago. New viruses self-propagate, which gives them the ability to move fast and create more havoc faster. One of the reasons the Code Red virus was able to move so fast was that it was self-propagating.
•	Mutating viruses. The new viruses are not only self-propagating, which gives them speed, but they are also mutating, which gives them the double punch of delaying quick eradication and consuming more resources, therefore destroying more in their wake and fulfilling the intended goals of their developers.
•	Difficulty of apprehending the developer. As the Code Red virus demonstrated, owing to legal and other limitations, it is getting more and more difficult to apprehend the culprits. This in itself encourages would-be virus developers to believe that they can act with impunity.

15.3.1.1  Virus Infection/Penetration

There are three ways viruses infect computer systems and are transmitted: boot sector, macro penetration, and parasites [4].

Boot Sector Penetration

Although not very common nowadays, boot sectors are still sometimes used to incubate viruses. A boot sector is usually the first sector on every disk. In a boot disk, the sector contains a chunk of code that powers up a computer. In a nonbootable disk, the sector contains a File Allocation Table (FAT), which is automatically loaded first into computer memory to create a roadmap of the type and contents of the disk for the computer to access the disk.
Viruses imbedded in this sector are \nassured of automatic loading into the computer memory.\nMacros Penetration\nSince macros are small language programs that can execute only after imbedding \nthemselves into surrogate programs, their penetration is quite effective. The rising \npopularity in the use of script in Web programming is resulting in macro virus pen­\netration as one of the fastest forms of virus transmission.\n" }, { "page_number": 351, "text": "Parasites\nThese are viruses that do not necessarily hide in the boot sector, nor use an incubator \nlike the macros, but attach themselves to a healthy executable program and wait for any \nevent where such a program is executed. These days, due to the spread of the Internet, \nthis method of penetration is the most widely used and the most effective. Examples of \nparasite virus include Friday the 13th, Michelangelo, SoBig, and the Blaster viruses.\nOnce a computer attack is launched, most often a virus attack, the attacking \nagent scans the victim system looking for a healthy body for a surrogate. If it is \nfound, the attacking agent tests to see if it has already been infected. Viruses do not \nlike to infect themselves, hence wasting their energy. If an uninfected body is found, \nthen the virus attaches itself to it to grow, multiply, and wait for a trigger event to \nstart its mission. The mission itself has three components:\nto look further for more healthy environments for faster growth, thus spreading \n• \nmore,\nto attach itself to any newly found body, and\n• \nonce embedded, either to stay in the active mode ready to go at any trigger event \n• \nor to lie dormant until a specific event occurs.\n15.3.1.2  Sources of Virus Infections\nComputer viruses, just like biological viruses, have many infection sources. \nAgain like biological viruses, these sources are infected first either from first \ncontact with a newly released virus or a repeat virus. One interesting fact about \ncomputer virus attacks, again following their cousins the biological viruses, is \nthat a majority of them are repeat attacks. So like in human medicine, a certain \ntype of proven medications is routinely used to fight them off. Similarly with \ncomputer viruses, the same anti-virus software is routinely used to fight many \nof the repeat viruses. Of late, however, even known viruses have been mutating, \nmaking anti-virus companies work harder to find the code necessary to eliminate \nthe mutating virus.\nOf the known viruses, there are mainly four infection sources: movable com­\nputer disks such as floppies, zips, and tapes; Internet downloadable software such \nas beta software, shareware, and freeware; e-mail and e-mail attachments; and \nplatform-free executable applets and scripts. It is important to note that just like \nbiological viruses, infections are caused by coming in close contact with an infected \nbody. Likewise in computer viruses, viruses are caught from close contact with \ninfected bodies – system resources. So the most frequently infected bodies that can \nbe sources of viruses are as follows [4]:\nMovable computer disks: Although movable computer disks like floppies, zips, \n• \nand tapes used to be the most common way of sourcing and transmitting viruses, \nnew Internet technologies have caused this to decline. 
Viruses sourced from movable computer disks are either boot viruses or disk viruses.

•	Boot viruses: These viruses attack boot sectors on both hard and floppy disks. Disk sectors are small areas on a disk that the hardware reads in single chunks. For DOS-formatted disks, sectors are commonly 512 bytes in length. Disk sectors, although invisible to normal programs, are vital for the correct operation of computer systems because they form the chunks of data the computer uses. A boot sector is the first disk sector, or the first sector on a disk or diskette, that an operating system is aware of. It is called a boot sector because it contains an executable program the computer executes every time the computer is powered up. Because of its central role in the operations of computer systems, the boot sector is very vulnerable to virus attack, and viruses use it as a launching pad to attack other parts of the computer system. Viruses like this sector because from it they can spread very fast from computer to computer, booting from that same disk. Boot viruses can also infect other disks left in the disk drive of an infected computer.
•	Disk viruses: Whenever viruses do not use the boot sector, they embed themselves, as macros, in disk data or software. A macro is a small program embedded in another program that executes when that program, the surrogate program, executes. Macro viruses mostly infect data and document files, templates, spreadsheets, and database files.
•	Internet Downloadable Software: Historically, computer viruses were actually hand carried. People carried viruses on their floppy disks whenever they transferred these infected disks from one computer to another. Those were the good old days before the Internet and the concept of downloads. The advent of the Internet created a new communication and virus transmission channel. In fact, the Internet is now the leading and fastest virus transmission channel there is. Internet downloads, bulletin boards, and shareware are the actual vehicles that carry the deadly virus across the seas in the blink of an eye.
•	E-mail attachments: As recent mega virus attacks such as "Code Red," "SoBig," and "Blaster" have demonstrated, no computer connected to the Internet is safe any longer. E-mail attachments are the fastest growing virus transmission method today. With e-mail making up more than half of all Internet traffic, and millions of e-mails exchanged each day passing through millions of other computers, e-mail is the most potent channel for infecting computers with viruses. Incidentally, straight-text e-mails – those without attachments – are free from viruses. Since attachment-free e-mails are pure text, not executables, they cannot transport viruses. Viruses, as we have already seen, are executable programs or document macros that can be embedded into other executables or application documents.
•	Platform-free executable applets and scripts: Dynamism has made Web applications very popular these days. Web dynamism has been brought about by programming and scripting languages such as Java, Perl, and C/C++.
As we discussed in \nChapter 6, the Common Gateway Interface (CGI) scripts let developers create \ninteractive Web scripts that process and respond to user inputs on both the client \n" }, { "page_number": 353, "text": "side and the server side. Both CGI scripts, which most often execute on the \nserver side, and JavaScript and VBScript that execute within the user’s browser \non the client side, create loopholes in both the server and the client to let in \nviruses. One way of doing this is through a hacker gaining access to a site and \nthen changing or replacing the script file. The hacker can also lay a “man-in-\nthe-middle” attack by breaking in a current session between the client browser \nand the server. By doing so, the hacker can then change the message the client is \nsending to the server script.\n15.3.1.3  Types of Viruses\nJust like living viruses, there are several types of digital (computer) viruses and \nthere are new brands almost every the other day. We will give two classifications of \ncomputer viruses based on transmission and outcomes [4,5].\nVirus Classification Based on Transmission\nTrojan horse viruses\n• \n: These viruses are labeled Trojan horse viruses because \njust like in the old myth in which the Greeks, as enemies of Troy, used a large \nwooden horse to hide in and enter the city of Troy, these viruses use the tricks \nthese legendary Greeks used. During transmission, they hide into trusted common \nprograms such as compilers, editors, and other commonly used programs. Once \nthey are safely into the target program, they become alive whenever the program \nexecutes.\nPolymorphic viruses\n• \n: These viruses are literally those that change form. Before \na polymorphic virus replicates itself, it must change itself into some other form \nin order to avoid detection. This means that if the virus detector had known \nthe signature for it, this signature then changes. Modern virus generators have \nlearned to hide the virus signatures from anti-virus software by encrypting the \nvirus signatures and then transforming them. These mutations are giving virus \nhunters a really hard time. The most notorious mutating virus was the “Code \nRed” virus which mutated into almost a different form every the other day, \nthrowing virus hunters off track.\nStealth virus\n• \n: Just like the polymorphic virus uses mutation to distract its \nhunters from its track, a stealth virus makes modifications to the target files \nand the system’s boot record, then it hides these modifications. It hides these \nmodifications by interjecting itself between the application programs the \noperating system must report to and the operating system itself. In this position, \nit receives the operating system reports and falsifies them as they are being sent \nto the programs. In this case, therefore, the programs and the anti-virus detector \nwould not be able to detect its presence. Once it is ready to strike then it does so. \nJasma [5] gives two types of stealth viruses: the size stealth which injects itself \ninto a program and then falsifies its size, and the read stealth which intercepts \nrequests to read infected boot records or files and provides falsified readings, \nthus making its presence unknown.\n15.3  Virus Filtering\b\n341\n" }, { "page_number": 354, "text": "342\b\n15  Virus and Content Filtering\nRetro virus\n• \n: A retro virus is an anti-virus fighter. It works by attacking anti-virus \nsoftware on the target machine so that it can either disable it or bypass it. 
In fact, \nthat is why it is sometimes called an anti-anti-virus program. Other retroviruses \nfocus on disabling the database of integrity information in the integrity-checking \nsoftware, another member of the anti-virus family.\nMultipartite virus:\n• \n Is a multifaceted virus that is able to attack the target \ncomputer from several fronts. It is able to attack the boot record and all boot \nsectors of disks including floppies and it is also able to attack executable files. \nBecause of this, it was nicknamed multipartite.\nArmored virus\n• \n: Probably the name is fitting because this virus works in the \ntarget computer by first protecting itself so that it is more difficult to detect, \ntrace, disassemble, or understand its signature. It gets the coat or armor by using \nan outer layer of protective coat that cannot easily be penetrated by anti-virus \nsoftware. Other forms of this virus work by not using a protective coat but by \nhiding from anti-virus software.\nCompanion virus\n• \n: This is a smarter virus that works by creating companions \nwith executables. Then it piggybacks on the executable file and produces its own \nextension based on the executable file. By so doing, every time the executable \nsoftware is launched, it always executes first.\nPhage virus\n• \n: This virus parallels and is named after its biological counterpart \nthat replaces an infected cell with itself. The computer counterpart also replaces \nthe executable code with its own code. Because of its ability to do this, and just \nlike its biological cousin, it is very destructive and dangerous. It destroys every \nexecutable program it comes into contact with.\nVirus Classifications Based on Outcomes\nError-generating virus\n• \n: Error-generating viruses lunch themselves most often \nin executable software. Once embedded, they attack the software to cause the \nsoftware to generate errors.\nData and program destroyers\n• \n: These are viruses that attach themselves to a \nsoftware and then use it as a conduit or surrogate for growth, replication, and \nas a launch pad for later attacks and destruction to this and other programs and \ndata.\nSystem crusher\n• \n: These, as their name suggests, are the most deadly viruses. \nOnce introduced in a computer system, they completely disable the system.\nComputer time theft virus\n• \n: These viruses are not harmful in any way to system \nsoftware and data. Users use them to steal system time.\nHardware destroyers\n• \n: While most viruses are known to alter or destroy data \nand programs, there are a few that literally attack and destroy system hardware. \nThese viruses are commonly known as “killer viruses” Many of these viruses \nwork by attaching themselves to micro-instructions, or “mic,” such as bios and \ndevice drivers.\nLogic/time bombs\n• \n: Logic bombs are viruses that penetrate the system, embedding \nthemselves in the system’s software, using it as a conduit and waiting to attack \nonce a trigger goes off.\n" }, { "page_number": 355, "text": "15.3.1.4  How Viruses Work\nIn Sections 15.3.1.2 and 15.3.1.3, we discussed how computers get infected with \nviruses and how these viruses are transmitted. We pointed out that the viruses are \nusually contracted from an infected computer resource and then passed on. We \ndiscussed those most likely resources to be infected and from which viruses are \npassed on. We have also pointed out in other parts of this chapter that over time, the \nmethods of virus transmission have actually multiplied. 
In the beginning, viruses used to be transmitted manually by users moving disks and other infected materials from one victim to another. Since the birth of the Internet, however, this method has been relegated to the last position among the popular methods of virus transmission.

Let us look at how the Internet has transformed virus transmission by focusing on the two types of viruses that form the biggest part of virus infection within the network environment. These are the macro virus and the file virus. Of the two, macro viruses have the fastest growing rate of infection in networks. This is a result of several factors, including the following:

•	Big software houses innocently intend to provide their users with the flexibility of expanding their off-the-shelf products' capabilities and functionalities by including macro facilities in these products. For example, popular Microsoft products include these macros [5]. Using these macro facilities, able users can create their own macros to automate common tasks, for example. But as we saw in Section 15.3.1.1, these macros are becoming a vehicle for virus infection and transmission.
•	Macro programming languages are now built into popular applications. These macro programming languages are getting more and more powerful and now pack more features. They can be used to build macros that perform a variety of functions. For example, Microsoft Visual Basic for Applications (VBA) is such a language, found in a number of popular Microsoft applications including PowerPoint, Excel, and Word. Again, as we pointed out in Section 15.3.1.1, this creates ready vehicles to carry viruses.

The problem with these macros is that they introduce loopholes into these popular Internet applications. For example, VBA can be used by hackers to define viral code within the applications. Other macros that are not built using programming and scripting languages but are included in applications can be used by hackers just as easily. The fact that macros behave as executable code within the applications makes them very attractive to hackers as a way of introducing viral code into the computer and hence into the network.

Next to application macros in network transmission capability are file viruses. File viruses may be any of the types we have already discussed that attack system or user files. File viruses present as much danger to a network as macro viruses as long as the infected computer is attached to a network. Notice that there would be little to discuss if a computer were not attached to any network; in fact, the safest computers are those that are disconnected from any network.

15.3.1.5  Anti-Virus Technologies

There are four types of viruses that anti-virus technologies target. These are "in the wild" viruses, which are active viruses detected daily on users' computers all over the world, macro viruses, polymorphic viruses, and standard viruses.

The "in the wild" viruses are collected and published annually in the WildList (a list of those viruses currently spreading throughout a diverse user population). Although it should not be taken as a list of the "most common viruses," in recent times the list has been used as the basis for in-the-wild virus testing and certification of anti-virus products by a number of anti-virus software producing companies.
Addi­\ntionally, a virus collection based upon the WildList is being used by many anti-virus \nproduct testers as the definitive guide to the viruses found in the real world and thus \nto standardize the naming of common viruses. For the archives and current list of the \nWildList see The WildList – (c)1993–2003 by Joe Wells – http://www.wildlist.org.\nThe other three types of viruses – the macro viruses, polymorphic viruses, and \nstandard viruses – have already been discussed in various parts of this chapter. Anti-\nvirus technologies are tested for their ability to detect all types of viruses in all these \nmodes.\n15.4  Content Filtering\nAs we noted in Section 11.2.1, content filtering takes place at two levels: at the \napplication level where the filtering is based on URL which may, for example, result \nin blocking a selected Web page or an FTP site, and filtering at the network level \nbased on packet filtering which may require routers to examine the IP address of the \nevery incoming or outgoing traffic packet. The packet are first captured and then \ntheir IP address both source and destination, port numbers or sequence numbers are \nthen compared with those on either the black or white list.\n15.4.1  Application Level Filtering\nRecall in Sections 11.2.1 and 15.2.4 that application level filtering is based on sev­\neral things that make up the blocking criteria, including URL, keyword, and pattern. \nApplication filtering can also be located at a variety of areas including at the user’s \nPC, at the network gateway, at a third party’s server, and at an ISP. In each one of \nthese locations, quite an effective filtering regime can be implemented successfully. \nWe discussed that when applying application level filtering at either the network or \nat the ISP, a dedicated proxy server may be used. The proxy then prevents inbound \nor outbound flow of content based on the filtering rules in the proxy. With each \nrequest from the user or client, the proxy server compares the clients’ requests with \na supplied “black list” of web sites, FTP sites, or newsgroups. If the URL is on the \nblack list, then effective or selective blocking is done by the proxy server. Besides \n" }, { "page_number": 357, "text": "blocking data flowing into or out of the network or user computer, the proxy also \nmay store (cache) frequently accessed materials. However, the effectiveness of \napplication level blocking using proxy servers is limited as a result of the following \ntechnical and nontechnical factors [6]:\n15.4.1.1  Technical Issues\nUse of translation services in requests can result in requested content from \n• \nunwanted servers and sites: If a user requests for content from a specified server \nor site, and if the requested content cannot be found at this site, the translation \nservice operated by the request can generate requests to secondary sites for the \ncontent. In such cases then, the content returned may not be from the specified \nserver unless secondary requests are specifically blocked.\nThe Domain Name server can be bypassed\n• \n: Since a user’s request for a site access \ncan be processed based on either a domain name or the IP address of the server, \na black list that contains the domain names only without their corresponding IP \naddresses can, therefore, be bypassed. 
This usually results in several difficulties, \nincluding not processing requests whose IP addresses cannot be found on the \nblack lists and doubling of the size of the black list if both domain names and \nequivalent IP addresses are used for every server on the list.\nThe reliability of the proxy server may be a problem\n• \n: The use of a single proxy \nserver for all incoming and outgoing filtering may cause “bottleneck” problems \nthat include reduced speed, some applications failing to work with specific \nservers, and loss of service should the server were to collapse.\n15.4.1.2  Nontechnical Issues\nISPs problems\n• \n: ISPs involved into the filtering process may face several \nproblems, including the added burden of financially setting up, maintaining, and \nadministering the additional proxy servers, supporting and maintaining reluctant \nclients that are forced to use these servers, and meeting and playing a role of a \nmoral arbiter for their clients, the role they may find difficult to please all their \nclients in. In addition to these problems, ISPs are also faced with the problems \nthat include the creation or updating and hosting black lists that will satisfy all \ntheir clients or creating, updating, and distributing black lists in a secure manner \nto all their clients.\nThe costs of creating and maintaining a black list\n• \n: There is an associated high cost \nof creating and maintaining a black list. The associated costs are high because \nthe black list creation, maintenance, and updates involve highly charged local \npolitics and a high degree of understanding in order to meet the complex nature \nof the list that will meet the basic requirements that cover a mosaic of cultures, \n15.4  Content Filtering\b\n345\n" }, { "page_number": 358, "text": "346\b\n15  Virus and Content Filtering\nreligions, and political views of the users. In addition to these costs, there are \nalso the costs of security of the list. Black lists are high target objects and prime \ntargets for hackers and intruders\n15.4.2  Packet Level Filtering and Blocking\nIn Chapter 2, we saw that every network packet has both source and destination IP \naddresses to enable the TCP protocol to transport the packet through the network \nsuccessfully and to also report failures. In packet level filtering and blocking, the \nfiltering entity has a black list consisting of “forbidden” or “bad” IP addresses. The \nblocking and filtering processes then work by comparing all incoming and outgoing \npacket IP addressees against the IP addressees on the supplied black list. However, \nthe effectiveness of packet level blocking is limited by both technical and nontech­\nnical problems [6]:\n15.4.2.1  Technical Issues\nPacket-level blocking is indiscriminate\n• \n: Blocking based on an IP address of a \nvictim server means that no one from within the protected network will be able to \nreach the server. This means that any service offered by that server will never be \nused by the users in the protected network or on the protected user computer. If the \nintent was to block one Web site, this approach ends up placing the whole server \nout of reach of all users in the protected server or the user PC. One approach to \nlessen the blow of packet-level filtering to the protected network or user PC is the \nuse of port numbers that can selectively block or unblock the services on the victim \nserver. 
However, this process can affect the performance of the proxy server.\nRouters can easily be circumvented\n• \n: Schemes such as tunneling, where an IP \npacket is contained inside another IP packet, are commonly used, particularly \nin the implementation of virtual private networks for distributed organizations \nand the expansion of IPv4 to IPv6: one can very easily circumvent the inside \nvictim IP address by enveloping it into a new IP address which is then used in \nthe transfer of the encased packet. Upon arrival at the destination, the encased \npacket is then extracted by the receiver to recreate the original message. We will \ndiscuss tunneling in Section 16.4.2. 1.4. \nBlacklisted IP addresses are constantly changing\n• \n: It is very easy to determine that \na server has been blacklisted just by looking at and comparing server accesses. \nOnce it is determined that a server has been blacklisted, a determined owner can \nvery easily change the IP address of the server. This has been done many times \nover. Because of this and other IP address changes due to new servers coming \nonline and older ones being decommissioned, there is a serious need for black \nlist updates. The costs associated with these constant changes can be high.\n" }, { "page_number": 359, "text": "Use of nonstandard port numbers\n• \n:  Although it is not very common, there are \nmany applications that do not use standard port numbers. Use of such nonstandard \nport numbers may fool the server filter and the blocked port number may go \nthrough the filter. This, in addition to other filtering issues, when implementing a \nfirewall may complicate the firewall as well.\n15.4.2.2  Non-technical Issues\nIncreased operational costs and ISP administrative problems\n• \n: As we saw in the \napplication-level blocking, there are significant cost increments associated with \nthe creation, maintenance, and distribution of black lists. In addition, the ISPs \nare made to be moral arbiters and supervisors and must carefully navigate the \ncultural, religious, and political conflicts of their clients in order to maintain an \nacceptable blacklist.\n15.4.3  Filtered Material\nThe list of filtered items varies from user to user, community to community, and \norganization to organization. It is almost impossible, due to conflicting religious, \ncultural, and political beliefs, to come up with a common morality upon which \na list like a “black list” can be based. Lack of such a common basis has created \na mosaic of spheres of interests based on religion, culture, and politics. This has \ncaused groups in communities to come together and craft a list of objectionable \nmaterials that can be universally accepted. The list we give below is a collection of \nmany objectionable materials that we have collected from a variety of sources. This \nlist includes the following items [7, 6].\nNudity\n• \n is defined differently in different cultures. However, in many cultures, \nit means the complete absence of clothing or exposure of certain living human \nbody parts.\nMature content\n• \n is differently defined and lacks universal acceptance. However, \nin many cultures, it refers to material that has been publicly classified as bad and \ncorrupting to minors. 
The material may be crude or vulgar language or gestures \nor actions.\nSex:\n• \n Verbal and graphic descriptions and depictions of all sexual acts and any \nerotic material as classified by a community based on their culture, religion, and \npolitics.\nGambling:\n• \n There are many forms of gambling, again based on community \nstandards. These forms include physical and online gambling and game batting.\nViolence/profanity:\n• \n Physical display and depictions of all acts that cause or \ninflict physical and psychological human pain including murder, rape, and \ntorture.\n15.4  Content Filtering\b\n347\n" }, { "page_number": 360, "text": "348\b\n15  Virus and Content Filtering\nGross depiction:\n• \n Any graphic images, descriptive or otherwise, that are crude, \nvulgar and grossly deficient in civility and behavior.\nDrug/drug culture and use:\n• \n Graphic images, descriptive or not, that advocate \nany form of illegal use of and encouraging usage of any recreational drugs, \nincluding tobacco and alcohol advertising.\nIntolerance/discrimination:\n• \n Advocating prejudice and denigration of others’ \nrace, religion, gender, disability or handicap, and nationality.\nSatanic or cult:\n• \n Satanic materials that include among others, all graphic images \ndescriptive or otherwise that contain sublime messages that may lead to devil \nworship, an affinity for evil, or wickedness.\nCrime:\n• \n Encouragement of, use of tools for, or advice on carrying out universally \ncriminal acts that include bomb making and hacking.\nTastelessness:\n• \n Excretory functions, tasteless humor, graphic images taken out \nof acceptable norms, and extreme forms of body modification, including cutting, \nbranding, and genital piercing.\nTerrorism/militant/extremists\n• \n: Graphic images in any form that advocate \nextremely aggressive and combatant behaviors or advocacy of lawlessness.\n15.5  Spam\nIt may be difficult to define spam. Some people want to define it as unsolicited com­\nmercial e-mail. This may not fully define spam because there are times when we \nget wanted and indeed desired unsolicited e-mails and we feel happy to get them. \nOthers define spam as automated commercial email. But many e-mails that are \nunsolicited and sometimes automated that are not commercial in nature. Take, for \nexample, the many e-mails you get from actually worthy causes but unsolicited and \nsometimes annoying. So to cover all these bases and hit a balance, we define spam \nas unsolicited automated e-mail.\nBecause Internet use is more than 60 percent e-mail, spamming affects a large \nnumber of Internet users. There are several ways we can fight spam including the \nfollowing:\nLimit e-mail addresses posted in a public electronic place.\n• \n Email addresses \nusually posted at the bottom of personal web pages are sure targets of spammers. \nSpammers have almost perfected a method of cruising the Internet hunting for \nand harvesting these addresses. If you must put personal e-mail on a personal \nWeb-page, find a way of disguising it. Also opt out of job, professional, and \nmember directories that place member e-mail addresses online.\nRefrain from filling out online forms that require email addresses.\n• \n Always \navoid, if you can, supplying e-mail addresses when filling any kind of forms, \nincluding online forms that ask for them. 
Supply e-mail addresses to forms only \nwhen replies are to be done online.\nUse email addresses that are NOT easy to guess.\n• \n Yes, passwords can be \nsuccessfully guessed and now spammers are also at it trying to guess e-mail \n" }, { "page_number": 361, "text": "addresses. The easiest way to do this is to start with sending mails to addresses \nwith short stem personal fields on common ISPs such as AOL, Yahoo, and \nHotmail, fields like tim@aol, tim26@aol, joe@hotmail, and so on.\nPractice using multiple email addresses.\n• \n Always use several email addresses \nand use one address for strictly personal business. When filling forms for \nnonserious personal business and pleasure, use a different e-mail address. In \nfact, it is always easy to determine who sells your e-mail address this way. By \nnoting which address was used on which form and to whom, one can also easily \ntrack what sites are causing spam. These days there are also one-time disposable \ne-mail addresses one can easily get and use with little effort.\nSpam filtering.\n• \n  Always using spam filters at either the network level or \napplication level to block unwanted emails. In either case, the spam is prevented \nfrom reaching the user by the filter. We will discuss this more in Section 15.3. \nWhile this approach has its problems, as we will see, it can cut down tremendously \nthe amount of spam a user receives. Many ISPs are now offering spam filters.\nSpam laws.\n• \n The outcry caused by spamming has led many national and local \ngovernments to pass spam laws. In Europe, the European Union’s digital privacy \nrules passed and are in force; these rules require companies to get consent before \nsending email, tracking personal data on the Web, or pin-pointing callers’s location \nvia satellite-linked mobile phones. The same rules also limit companies’ ability to \nuse cookies and other approaches that gather user information [8]. In the United \nStates, efforts are being made to enact spam laws both at federal and state levels.\nFederal Spam law: The Senate approved a do-not-spam list and ban on sending \n• \nunsolicited commercial e-mail using a false return address or misleading \nsubject line [8].\nState spam laws. All states have some form of spam laws on the books.\n• \nThe European Union, leading the pack of anti-spam legislators, has passed a \ndigital privacy law that requires companies to seek users’ consent before sending \ne-mails, tracking personal data on the Web, and pointing callers’ location using \nsatellite-linked cell-phones unless it is done by the police or emergency services [9]. \nOther European countries have enacted spam laws with varying success and these \nlaws can be viewed at: http://www.spamlaws.com/eu.html.\nIn the United States, the recently passed Controlling the Assault of Non-\nSolicited Pornography and Marketing Act of 2003, or the CAN-SPAM Act of \n2003, tries to regulate interstate commerce by imposing limitations and pen­\nalties on the transmission of unsolicited commercial electronic mail via the \nInternet. In addition to the federal law, many states have much stronger anti-\nspam legislations.\nIn general, however good and strong anti-spam legislations are, it is extremely \ndifficult and expensive to enforce.\nBeside the United States, the EU, and European countries, several other coun­\ntries outside Europe, including Australia, Canada, Japan, Russia, Brazil, and India, \nhave or are in the process of enacted spam laws. 
This is an indication that there is a global movement to fight spam.

Exercises

1.	What are the major differences between a boot virus and a macro virus? Which is more dangerous to a computer system?
2.	List and briefly discuss the three most common sources of virus infections.
3.	In this chapter, we did not discuss the likely sources of computer viruses. Discuss the four most likely sources of computer viruses.
4.	Why is anti-virus software always developed after the virus has struck?
5.	Describe the similarities between biological viruses and computer viruses.
6.	What are the difficulties faced by a community that wants to filter Internet content?
7.	Describe how a virus moves on the Internet.
8.	Why is it that viruses are more dangerous on peer-to-peer networks than on client-server networks?
9.	Study and discuss the virus infection rates in peer-to-peer networks, client-server networks, and the Internet.
10.	Why do macros have the highest infection rate in network virus transmission?

Advanced Exercises

1.	Research and develop a comprehensive list of the currently known viruses.
2.	Research, find, and study a virus code. Write an anti-virus for that code.
3.	Look at a popular application such as PowerPoint or Excel. Find and disable the macros. How do you enable them again?
4.	Discuss and develop a policy for dealing with viruses.
5.	What is a virus "in the wild"? Research and draw up an estimate of all viruses in the wild. How do you justify your number?

References

1.	Greenfield, P., McCrea, P., Ran, S. Access Prevention Techniques for Internet Content Filtering. http://www.noie.gov.au/publications/index.html
2.	Forcht, K. Computer Security Management. Danvers, MA: Boyd & Fraser Publishing, 1994.
3.	Battling the Net Security Threat. http://www.news.bbc.co.uk/2/hi/technology/2386113.stm
4.	Kizza, J. M. Computer Network Security and Cyber Ethics. Jefferson, NC: McFarland & Company, 2002.
5.	Jasma, K. Hacker Proof: The Ultimate Guide to Network Security. Second Edition. Albany, NY: OnWord Press, 2002.
6.	Blocking on the Internet: A Technical Perspective. http://www.cmis.csiro.au/Reports/blocking.pdf
7.	Kizza, J. M. Civilizing the Internet: Global Concerns and Efforts Towards Regulation. Jefferson, NC: McFarland & Company, 1998.
8.	The Associated Press. "Anti-Spam Law Goes into Force in Europe." Chattanooga Times Free Press, Saturday, November 1, 2003, C5.
9.	The Associated Press. "Anti-Spam Law Goes into Force in Europe." Chattanooga Times Free Press, Saturday, November 1, 2003, C5.

Chapter 16
Standardization and Security Criteria: Security Evaluation of Computer Products

16.1  Introduction

The rapid growth of information technology (IT), our growing dependence on it, and the corresponding skyrocketing security problems arising from it have all created a high demand for comprehensive security mechanisms and best practices to mitigate these security problems. Solutions are sought on two fronts. First, well-implemented mechanisms and best practices are needed for fundamental security issues like cryptography, authentication, access control, and audit.
Second, comprehensive security mechanisms are also needed for all security products so that consumers are assured of products and systems that meet their business security needs. The response to this high demand for security products has been an avalanche of products of all types and capabilities, with varying price ranges, effectiveness, and quality. You name a product and you get a flood of it from vendors. As the marketplace for security products becomes saturated, competing product vendors and manufacturers make all sorts of claims about their products in order to gain a market niche. In this kind of environment, how can a customer shop for the right secure product, what security measures should be used, and how does one evaluate the security claims made by the vendors? Choosing a good, effective security product for a system or business has thus become a new security problem, and it is the one we want to focus on in this chapter.

Buying computer products, even without the thousands of overzealous vendors and manufacturers fighting to make a buck, has never been easy because of the complexity of computer products to the ordinary person. One cannot always rely on the words of the manufacturers and those of the product vendors to ascertain the suitability and reliability of the products. This is currently the case for both computer hardware and software products. It is a new computer security problem that all computer product buyers must grapple with and that computer network managers must try to mitigate as they acquire new computer products.

There are several approaches to dealing with this new security problem, but we will discuss two here: standardization and security evaluation of products. Since standardization leads into security evaluation – that is, product security evaluation is done based on established standards – we will start with standardization.

16.2  Product Standardization

A standard is a document that establishes uniform engineering or technical specifications, criteria, methods, processes, or practices. Some standards are mandatory, while others are voluntary [1]. Standardization is then a process of agreeing on these standards. The process itself is governed by a Steering Committee that consists of representatives from the different engineering and technical areas with interests in the product whose standard is sought. The committee is responsible for drafting the standard, a legal and technical document, from the product specifications, and for establishing the processes by which the draft standards are reviewed and accepted by the interested community.

Theoretically, the process sounds easy and consists of several stages that the product specifications must go through. First, the specifications undergo a period of development and several iterations of review by the interested engineering or technical community, and revisions are made based on members' experiences. These revisions are then adopted by the Steering Committee as draft standards.
But as Bradner [2] observes, in practice, the process is more complicated, \ndue to (1) the difficulty of creating specifications of high technical quality; (2) the \nneed to consider the interests of all of the affected parties; (3) the importance of \nestablishing widespread community consensus; and (4) the difficulty of evaluating \nthe utility of a particular specification for the community.\nIn any case, the goals of this process are to create standards that [2]\nare technically excellent;\n• \nhave prior implementation and testing;\n• \nare clear, concise, and easily understood documentation; and\n• \nfoster openness a fairness.\n• \n16.2.1  Need for Standardization of (Security) Products\nWhenever a product is designed to be used by or on another product, the interfaces \nof the two products must agree to meet and talk to each other every time these two \nproducts are connected to each other. What this is saying is that interface specifica­\ntion is the protocol language these two products talk, enabling them to understand \neach other. If there are conflicts in the specification language, the two will never \nunderstand each other and they will never communicate.\nProducts and indeed computer products are produced by many different compa­\nnies with varying technical and financial capabilities based on different technical \ndesign philosophies. But however varied the product market place may be, the inter­\nface specifications for products meant to interconnect must be compatible.\nStandardization reduces the conflicts in the interface specifications. In other \nwords, standardization is needed to enforce conformity in the product interface \nspecifications for those products that are meant to interconnect.\n" }, { "page_number": 365, "text": "16.2  Product Standardization\b\n353\nAccording to Rebecca T. Mercuri [3], standards provide a neutral ground in \nwhich methodologies are established that advance the interest of manufacturers as \nwell as consumers while providing assurances of safety and reliability of the prod­\nucts. Currently, the computer industry has a large variety of standards covering \nevery aspect of the industry.\nStandards are used in setting up of product security testing procedures, the pass­\ning of which results in a certification of the product. However, as Mercuri notes, \ncertification alone does not guarantee security. There are cases where it is only a \nsign of compliance. Because of this and other reasons, many of the major product \nsecurity testing bodies and governments have a collection of standards that best test \nthe security of a product. These standards are called criteria. Many of the criteria \nwe are going to look at have several tiers or levels where each level is supposed to \ncertify one or more requirements by the product.\n16.2.2  Common Computer Product Standards\nThe rapid growth of computer technology has resulted into a mushrooming of stan­\ndards organizations that have created thousands of computer-related standards for \nthe certification of the thousands of computer products manufactured by hundreds of \ndifferent manufacturers. Among the many standards organizations that developed the \nmost common standards used by the computer industry today are the following [4]\nStandards organization\nStandards developed\nAmerican National Standards Institute\n(ANSI)\nHas a lot of American and international standards. 
\nSee http://webstore.ansi.org/sdo.aspx\nBritish Standards Institute (BSI)\nBS XXX: Year Title where XXX is the number of \nthe standard (many)\nInstitute of Electrical and Electronic\nEngineers Standards Association\n(IEEE-SA)\nHas thousands of standards. See http://www.ieee.\norg/web/publications/subscriptions/prod/stan­\ndards_overview.html\nInternational Organization\nfor Standardization (ISO)\nHas developed over 17000 International Standards \non a variety of subjects with about 1100 new ISO \nstandards are published every year. http://www.\niso.org/iso/iso_catalogue.htm\nNational Institute of Standards\nand Technology (NIST)\nSupports over 1300 different Standards. See http://\nts.nist.gov/MeasurementServices/ReferenceMa­\nterials/PROGRAM_INFO.cfm\nOrganization for the Advancement\nof Structured Information Standards\n(OASIS)\nHas a long list of standards. See http://www.oasis-\nopen.org/specs/index.php\nUnderwriters Laboratories (UL)\nHas developed more than 1000 Standards for Safety. \nSee http://www.ul.com/info/standard.htm\nWorld Wide Web Consortium (W3C)\nW3C creates primarily Web standards and guidelines \ndesigned to ensure long-term growth for the Web. \nSee http://www.w3.org/\n" }, { "page_number": 366, "text": "354\b\n16  Standardization and Security Criteria\n16.3  Security Evaluations\nSecurity evaluation of computer products by independent and impartial bodies \ncreates and provides security assurance to the customers of the product. The job of \nthe security evaluators is to provide an accurate assessment of the strength of the \nsecurity mechanisms in the product and systems based upon a criterion [5]. Based \non these evaluations, an acceptable level of confidence in the product or system is \nestablished for the customer.\nThe process of product security evaluation for certification consists of two com­\nponents: the criteria against which the evaluations are performed and the schemes or \nmethodologies which govern how and who can perform such security evaluations [5]. \nThere are several criteria and methods used internationally, and we are going to dis­\ncuss some in the following sections. The process of security evaluation, based on cri­\nteria, consists of a series of tests based on a set of levels where each level may test for \na specific set of standards. The process itself starts by establishing the following [1]:\nPurpose\n• \nCriteria\n• \nStructure/elements\n• \nOutcome/benefit\n• \n16.3.1  Purpose of Security Evaluation\nBased on the Orange Book, a security assessment of a computer product is done \nfor [1]\nCertification – To certify that a given product meets the stated security criteria \n• \nand therefore is suitable for a stated application. Currently, there is a variety \nof security certifying bodies of various computer products. This independent \nevaluation provides the buyer of the product added confidence in the product.\nAccreditation – To decide whether a given computer product, usually certified, \n• \nmeets stated criteria for and is suitable to be used in a given application. Again, \nthere are currently several firms that offer accreditations to students after they \nuse and get examined for their proficiency in the use of a certified product.\nEvaluation – To assess whether the product meets the security requirements and \n• \ncriteria for the stated security properties as claimed.\nPotential market benefit, if any for the product. 
If the product passes the \n• \ncertification, it may have a big market potential.\n16.3.2 Security Evaluation Criteria\nAs we have discussed earlier, security evaluation criteria are a collection of secu­\nrity standards that define several degrees of rigor acceptable at each testing level of \n" }, { "page_number": 367, "text": "16.3  Security Evaluations\b\n355\nsecurity in the certification of a computer product. Security evaluation criteria also \nmay define the formal requirements the product needs to meet at each Assurance \nLevel. Each security evaluation criterion consists of several Assurance Levels with \nspecific security categories in each level. See the Orange Book (TCSEC) criteria \nAssurance Levels in Section 16.4.3.\nBefore any product evaluation is done, the product evaluator must state the eval­\nuation criteria to be used in the process in order to produce the desired result. By \nstating the evaluation criteria, the evaluator directly states the Assurance Levels \nand categories in each Assurance Level that the product must meet. The result of \na product evaluation is the statement whether the product under review meets the \nstated Assurance Levels in each evaluation criteria category. The trusted computer \nsystem evaluation criteria widely used today all have their origin in and their Assur­\nance Levels based on the Trusted Computer System Evaluation Criteria (TCSEC) \nin Section 16.4.3.\n16.3.3  Basic Elements of an Evaluation\nThe structure of an effective evaluation process, whether product-oriented or pro­\ncess-oriented, must consider the following basic elements:\nFunctionality:\n• \n Because acceptance of a computer security product depends \non what and how much it can do. If the product has limited utility, and in fact \nif it does not have the needed functionalities, then it is of no value. So the \nnumber of functionalities the product has or can perform enhances the product’s \nacceptability.\nEffectiveness:\n• \n After assuring that the product has enough functionalities to \nmeet the needs of the buyer, the next key question is always whether the product \nmeets the effectiveness threshold set by the buyer in all functionality areas. If \nthe product has all the needed functionalities but these functionalities are not \neffective enough, then the product cannot guarantee the needed security, and \ntherefore, the product is of no value to the buyer.\nAssurance:\n• \n To give the buyer enough confidence in the product, the buyer \nmust be given an assurance, a guarantee, that the product will meet nearly all, \nif not exceed, the minimum stated security requirements. Short of this kind of \nassurance, the product may not be of much value to the buyer.\n16.3.4  Outcome/Benefits\nThe goal of any product producer and security evaluator is to have a product that \ngives the buyer the best outcome and benefits within a chosen standard or criteria. \nThe product outcome may not come within a short time, but it is essential that \neventually the buyers see the security benefits. Although the process to the outcome \n" }, { "page_number": 368, "text": "356\b\n16  Standardization and Security Criteria\nfor both the evaluator and the buyer may be different, the goal must always be the \nsame, a great product. For example, to the product evaluator, it is important to mini­\nmize the expenses on the evaluation process without cutting the stated value of the \nevaluation. That is to say that keeping costs down should not produce mediocre out­\ncomes. 
However, to the buyer, the process of evaluation of a software product for \nsecurity requirements must ultimately result in the best product ever in enhancing \nthe security of the system where the product is going to be deployed. The process \nof evaluation is worth the money if the product resulting from it meets all buyer \nrequirements and better if it exceeds them.\nThe evaluation process itself can be done using either a standard or criteria. The \nchoice of what to use is usually determined by the size of the product. Mostly, small \nproducts are evaluated using standards while big ones are evaluated using criteria. \nFor example, a computer mouse I am using is evaluated and certified by the stan­\ndards developed by the Underwriters Laboratories, Inc. and the mouse has an insig­\nnia UL in a circle. If you check your computer, you may notice that each component \nis probably certified by a different standard.\nLet us now look at the evaluation process itself. The evaluation of a product can \ntake one of the following directions [1]:\nProduct-oriented: This is an investigative process to thoroughly examine and test \n• \nevery state security criteria and determine to what extent the product meets these \nstated criteria in a variety of situations. Because covering all testable configurations \nmay require an exhaustive testing of the product, which is unthinkable in software \ntesting for example, a variety of representative testing must be chosen. This, \nhowever, indicates that the testing of software products, especially in security, \ndepends heavily on the situation the software product is deployed in. One has to \npay special attention to the various topologies in which the product is tested in and \nwhether those topologies are exhaustive enough for the product to be acceptable.\nProcess-oriented: This is an audit process that assesses the developmental \n• \nprocess of the product and the documentation done along the way, looking for \nsecurity loopholes and other security vulnerabilities. The goal is to assess how \na product was developed without any reference to the product itself. Unlike \nproduct-oriented testing which tends to be very expensive and time consuming, \nprocess-oriented testing is cheap and takes less time. However, it may not be the \nbest approach in security testing because its outcomes are not very valuable and \nreliable. One has to evaluate each evaluation scheme on its own merit.\nWhatever direction of evaluation is chosen, the product security evaluation pro­\ncesses can take the following steps [1]:\nProposal review:\n• \n where the product is submitted by the vendor for consideration \nfor a review. The Market analysis of the product is performed by the evaluator [in \nthe United States, it is usually the Trusted Product Evaluation Program (TREP) \nwithin the National Security Agency (NSA)] based on this proposal.\nTechnical assessment:\n• \n After the initial assessment, the product goes into the \ntechnical assessment (TA) stage where the design of the product is put under \nreview. 
Documentation from the vendor is important at this stage.
•  Advice: From the preliminary technical review, advice is provided to the vendor to aid the vendor in producing a product and supporting documentation that is capable of being evaluated against a chosen criterion.
•  Intensive preliminary technical review: An independent assessment by the evaluator to determine whether the product is ready for evaluation. This stage may be carried out at the vendor's site as the evaluators become familiar with the product.
•  Evaluation: A comprehensive technical analysis of every aspect of the product, with rigorous testing of every component. If the product passes all the tests, it is awarded an Evaluated Products List (EPL) entry.
•  Rating maintenance phase: Provides a mechanism for the vendor to maintain the criteria rating of the product. If security changes need to be made, the vendor makes them during this phase. At the end of the phase, full approval of the product is recommended and the rating is assigned to the product.
16.4  Major Security Evaluation Criteria
The best way product manufacturers and vendors can demonstrate to their customers the level of security their products offer is through evaluation against a security evaluation criterion. Through security evaluation, independent but accredited organizations can provide assurance to product customers of the security of a product. These evaluations, based on specified criteria, serve to establish an acceptable level of confidence for product customers. Consequently, there are two important components of product security evaluations: the criteria against which the evaluations are performed, and the schemes or methodologies which govern how and by whom such evaluations can be officially performed [6].
There are now several broadly accepted security evaluation criteria to choose from. However, this is a recent phenomenon. Before that, there were only small national criteria, with no widely used and accepted standard criteria: every European country, and the United States, had its own. By the mid-1980s, the European countries had abandoned their individual national criteria to form the combined Information Technology Security Evaluation Criteria (ITSEC) (see Section 16.4.4), complementing the U.S. TCSEC, which had been in use since the early 1980s. Following the merger, an international criteria board finally introduced a widely accepted Common Criteria (CC) based on an International Organization for Standardization (ISO) standard. Let us look at a number of these criteria over time.
16.4.1  Common Criteria (CC)
The Common Criteria (CC) is a joint effort among nations to develop a single framework of mutually recognized evaluation criteria. It is referred to as the Harmonized Criteria, a multinational successor to the TCSEC and ITSEC that combined the best aspects of ITSEC, TCSEC, CTCPEC (the Canadian criteria), and the U.S. Federal Criteria (FC). It was internationally accepted and finalized as ISO standard 15408 and has been embraced by most countries around the world as the de facto security evaluation criteria.
Common Criteria version 2.3 (CC v2.3) \nconsists of three parts:\nIntroduction and general model\n• \nSecurity functional requirements\n• \nSecurity assurance requirements\n• \nBased on these parts, CC v2.3 awards successfully evaluated products’ one of eight \nevaluation assurance level (EAL) ratings from EAL 0 (lowest) to EAL7 (highest). For \nmore information on CC v2.3 see http://www.commoncriteriaportal.org/thecc.html.\n16.4.2  FIPS\nInformation technology (IT) product manufacturers always claim that their products \noffer the desired security for whatever purpose. This claim is difficult to prove espe­\ncially for smaller businesses. IT customers, including the government, in need of pro­\ntecting sensitive data need to have a minimum level of assurance that a product will \nattain a certain level of required security. In addition to this, legislative restrictions \nmay require certain types of technology, such as cryptography and access control, \nto be in all products used by either government or specific businesses. In this case, \ntherefore, those products need to be tested and validated before they are acquired.\nUnder needs like these, the Information Technology Management Reform Act \n(Public Law 104–106) requires that the Secretary of Commerce approves standards \nand guidelines that are developed by the National Institute of Standards and Tech­\nnology (NIST) for Federal computer systems. NIST’s standards and guidelines are \nissued as Federal Information Processing Standards (FIPS) for government-wide \nuse. NIST develops FIPS when there are compelling Federal government require­\nments such as for security and interoperability and there are no acceptable industry \nstandards or solutions.\nUnder these standards and guidelines, products are validated against FIPS at \nranging security levels from lowest to the highest. The testing and validation of \nproducts against the FIPS criteria may be performed by NIST and CSE-approved \nand accredited certification laboratories. Level 2 is the highest level of validation \npursued by software vendors, while level 4 is generally only attempted by hardware \nvendors. For more information, see http://www.itl.nist.gov/fipspubs/.\n16.4.3  The Orange Book/TCSEC\nMost of the security criteria and standards in product security evaluation have \ntheir basis in The Trusted Computer System Evaluation Criteria (TCSEC), the \n" }, { "page_number": 371, "text": "16.4  Major Security Evaluation Criteria\b\n359\nfirst ­collection of standards used to grade or rate the security of computer system \n­products. The TCSEC has come to be a standard commonly referred to as “the \nOrange Book” because of its orange cover. The criteria were developed with three \nobjectives in mind [7]:\nTo provide users with a yardstick with which to assess the degree of trust that \n• \ncan be placed in computer systems for the secure processing of classified or other \nsensitive information.\nTo provide guidance to manufacturers as to what to build into their new, widely-\n• \navailable trusted commercial products in order to satisfy trust requirements for \nsensitive applications.\nTo provide a basis for specifying security requirements in acquisition \n• \nspecifications.\nThe criteria also address two types of requirements:\nspecific security feature requirements\n• \nassurance requirements.\n• \nThe criteria met these objectives and requirements through four broad hierarchi­\ncal divisions of enhanced Assurance Levels. These divisions, as seen in Fig. 
16.1, labeled D for minimal protection, C for discretionary (need-to-know) protection, B for mandatory protection, and A for verified protection, are detailed below [1, 7].
Fig. 16.1  The TCSEC/Orange Book class levels, from lowest to highest protection (security functionality and assurance): D: Minimal Protection; C1: Discretionary Security Protection (DSP); C2: Controlled Access Protection (CAP); B1: Labeled Security Protection (LSP); B2: Structured Protection (SP); B3: Security Domains (SD); A1: Verified Design
•  Class D: Minimal Protection: a division containing a single class, reserved for systems that have been evaluated but fail to meet the requirements of a higher evaluation class.
•  Class C:
   •  C1: Discretionary Security Protection (DSP): intended for systems in environments where cooperating users process data at the same level of integrity. Discretionary Access Control (DAC), based on individual users or groups of users, enables them to share access to objects securely after user identification and authentication, and prevents other users from accidentally gaining access to unauthorized data.
   •  C2: Controlled Access Protection (CAP): a system that makes users accountable for their actions. DAC is enforced at a finer granularity than in C1, a subject must not be able to obtain access rights to an object that still holds information belonging to another subject, and users are held accountable for their actions through login and auditing procedures.
•  Class B: The security-relevant portion of a system is called the Trusted Computing Base (TCB). A TCB that preserves the integrity of the sensitivity labels and uses them to enforce a set of mandatory access control rules is the major requirement of this division.
   •  B1: Labeled Security Protection (LSP): intended for systems dealing with classified data. Each system must meet all the C2 requirements and, in addition, provide an informal statement of the security policy model, data labels for subjects and objects whose integrity must be strictly guarded, and mandatory access control over all subjects and objects.
   •  B2: Structured Protection (SP): adds security requirements to the design of the system, thus increasing security assurance. The TCB must be based on a security policy, and the TCB interface must be well defined so that it can be subjected to more thorough testing and complete review. In addition, the authentication mechanism is strengthened, trusted facility management is provided, and configuration management is imposed. Overall, systems with B2 certification are expected to be resistant to penetration.
   •  B3: Security Domains (SD): ensures a high resistance to penetration of systems. It requires a security administrator and an auditing mechanism to monitor the occurrence or accumulation of security-relevant events; such events must always trigger an automatic warning. In addition, trusted recovery must be in place.
•  Class A1: Verified Protection: this division is characterized by the use of formal security verification methods to ensure that the mandatory and discretionary security controls employed in the system can effectively protect classified or other sensitive information stored or processed by the system.
Extensive documentation is \nrequired to demonstrate that the TCB meets the security requirements in all aspects \nof design, development, and implementation.\nMost evaluating programs in use today still use or refer to TCSEC criteria. \nAmong these programs are [3]\nThe Trusted Product Evaluation Program (TPEP). TPEP is a program with which \n• \nthe U.S Department of Defense’s National Computer Security Center (NCSC) \nevaluates computer systems.\nThe Trust Technology Assessment Program (TTAP). TTAP is a joint program of \n• \nthe U.S. National Security Agency (NSA) and the National Institute of Standards \nand Technology (NIST). TTAP evaluates off-the-shelf products. It establishes, \n" }, { "page_number": 373, "text": "16.4  Major Security Evaluation Criteria\b\n361\naccredits, and oversees commercial evaluation laboratories focusing on products \nwith features and assurances characterized by TCSEC B1 and lower level of trust \n(see Section 15.3.1 for details).\nThe Rating Maintenance Phase (RAMP) Program was established to provide a \n• \nmechanism to extend the previous TCSEC rating to a new version of a previously \nevaluated computer system product. RAMP seeks to reduce evaluation time \nand effort required to maintain a rating by using the personnel involved in the \nmaintenance of the product to manage the change process and perform security \nanalysis. Thus, the burden of proof for RAMP efforts lies with those responsible for \nsystem maintenance (i.e., the vendor or TEF) other than with an evaluation team.\nThe Trusted Network Interpretation (TNI) of the TCSEC, also referred to as \n• \n“The Red Book,” is a restating of the requirements of the TCSEC in a network \ncontext.\nThe Trusted Database Interpretation (TDI) of the TCSEC is similar to the \n• \nTrusted Network Interpretation (TNI) in that it decomposes a system into smaller \nindependent parts that can be easily evaluated. It differs from the TNI in that the \nparadigm for this decomposition is the evaluation of an application running on \nan already evaluated system. The reader is also referred to for an extensive \ncoverage of the standard criteria.\n16.4.4  Information Technology Security Evaluation Criteria \n(ITSEC)\nWhile the U.S. Orange Book Criteria were developed in 1967, the Europeans did \nnot define a unified valuation criteria well until the 1980s when the United King­\ndom, Germany, France and the Netherlands harmonized their national criteria into a \nEuropean Information Security Evaluation Criteria (ITSEC). Since then, they have \nbeen updated and the current issue is Version 1.2, published in 1991 followed two \nyears later by its user manual, the IT Security Evaluation Manual (ITSEM), which \nspecifies the methodology to be followed when carrying out ITSEC evaluations. \nITSEC was developed because the Europeans thought that the Orange Book was \ntoo rigid. ITSEC was meant to provide a framework for security evaluations that \nwould lead to accommodate new future security requirements. It puts much more \nemphasis on integrity and availability. For more information on ITSEC see: http://\nwww.radium.ncsc.mil/tpep/library/non-US/ITSEC-12.html\n16.4.5  The Trusted Network Interpretation (TNI): The Red Book\nThe Trusted Network Interpretation (TNI) of the TCSEC, also referred to as “The \nRed Book,” is a restating of the requirements of the TCSEC in a network context. It \nattempted to address network security issues. It is seen by many as a link between \nthe Red Book and new criteria that came after. 
Some of the shortfalls of the Orange \n" }, { "page_number": 374, "text": "362\b\n16  Standardization and Security Criteria\nBook that the Red Book tries to address include the distinction between two types \nof computer networks [7]:\nNetworks of independent components with different jurisdictions and \n• \nmanagement policies\nCentralized networks with single accreditation authority and policy.\n• \nWhile the Orange Book addresses only the first type, the second type presents many \nsecurity problems that the Red Book tries to address. Including the evaluations of \nnetwork systems, distributed or homogeneous, often made directly against the \nTCSEC without reference to the TNI. TNI component ratings specify the evaluated \nclass as well as which of the four basic security services the evaluated component \nprovides. NTI security services can be found at : \n16.5 Does Evaluation Mean Security?\nAs we noted in Section 16.4, the security evaluation of a product based on a crite­\nrion does not mean that the product is assured of security. No security evaluation of \nany product can guarantee such security. However, an evaluated product can dem­\nonstrate certain security mechanisms and features based on the security criteria used \nand demonstrate assurances that the product does have certain security parameters \nto counter many of the threats listed under the criteria.\nThe development of new security standards and criteria will no doubt continue to \nresult in better ways of security evaluations and certification of computer products \nand will therefore enhance the computer systems’ security. However, as Mercuri \nobserves, product certification should not create a false sense of security.\nExercises\n\t 1.\t The U.S. Federal Criteria drafted in the early 1990s were never approved. Study \nthe criteria and give reasons why they were not developed.\n\t 2.\t One advantage of process-oriented security evaluation is that it is cheap. Find \nother reasons why it is popular. Why, despite its popularity, it is not reliable?\n\t 3.\t For small computer product buyers, it is not easy to apply and use these stan­\ndard criteria. Study the criteria and suggest reasons why this is so.\n\t 4.\t Nearly all criteria were publicly developed; suggest reasons why? Is it possible \nfor individuals to develop commercially accepted criteria?\n\t 5.\t There are evaluated computer products on the market. Find out how one finds \nout whether a computer product has a security evaluation.\n\t 6.\t If you have a computer product, how do you get it evaluated? Does the evalua­\ntion help a product in the market place? Why or why not?\n" }, { "page_number": 375, "text": "References\b\n363\n\t 7.\t Every country participating in the computer products security evaluation has a \nlist of evaluated products. Find out how to find this list. 
Does the ISO keep a \nglobal list of evaluated products?\n\t 8.\t Why is the product rated as B2/B3/A1 better than that rated C2/B1, or is it?\n\t 9.\t Study the rating divisions of TCSEC and show how product ratings can be \ninterpreted.\n10.\t What does it mean to say that a product is CC or TCSEC compliant?\nAdvanced Exercises\n1.\t Research and find out if there are any widely used computer product security \nevaluation criteria.\n2.\t Using the product evaluation list for computer products, determine the ratings \nfor the following products: DOS, Windows NT, 98, XP, Unix, and Linux.\n3.\t Study the history of the development of computer products security evaluation \nand suggest the reasons that led to the development of ISO-based CC.\n4.\t Study and give the effects of ISO on a criterion. Does ISO affiliation have any \nimpact on the success of a criterion?\n5.\t Does the rapid development of computer technology put any strain on the exist­\ning criteria for updates?\n6.\t Study and compare TCSEC, ITSEC, and CC assurance levels.\n7.\t Trace the evolution of the security evaluation criteria.\n8.\t Discuss how standards influence the security evaluation criteria.\nReferences\n1.\t Wikipedia, http://en.wikipedia.org/wiki/Open_standard\n2.\t Bradner, S. FRC 2026: The Internet Standards Process—Revision 3. Network Working Group. \nhttp://www.ietf.org/rfc/rfc2026.txt\n3.\t Mercuri, R. “Standards Insecurity.” Communications of the ACM, December 2003, 46(12).\n4.\t Computer Security Evaluation FAQ, Version 2.1. http://www.faqs.org/faqs/computer-security/\nevaluations/\n5.\t An Oracle White Paper. Computer Security Criteria: Security Evaluations and Assessment, \nJuly 2001. http://otndnld.oracle.co.jp/deploy/security/pdf/en/seceval_wp.pdf\n6.\t Oracle Technology Network. Security Evaluations. http://www.oracle.com/technology/deploy/\nsecurity/seceval/index.html\n7.\t Department of Defense Standards. Trusted Computer System Evaluation Criteria. http://www.\nradium.ncsc.mil/tpep/library/rainbow/5200.28-STD.html\n" }, { "page_number": 376, "text": "J.M. Kizza, A Guide to Computer Network Security, Computer Communications and \n­Networks, DOI 10.1007/978-1-84800-917-2_17, © Springer-Verlag London Limited 2009\n\b\n365\nChapter 17\nComputer Network Security Protocols\n17.1  Introduction\nThe rapid growth of the Internet and corresponding Internet community has fueled a \nrapid growth of both individual and business communications leading to the growth \nof e-mail and e-commerce. In fact, studies now show that the majority of the Inter­\nnet communication content is e-mail content. The direct result of this has been the \ngrowing concern and sometimes demand for security and privacy in electronic com­\nmunication and e-commerce. Security and privacy are essential if individual com­\nmunication is to continue and e-commerce is to thrive in cyberspace. The call for and \ndesire for security and privacy has led to the advent of several proposals for security \nprotocols and standards. Among these are Secure Socket Layer (SSL) and Transport \nLayer Security (TLS) Protocols; secure IP (IPSec); Secure HTTP (S-HTTP), secure \nE-mail (PGP and S/MIME), DNDSEC, SSH, and others. Before we proceed with \nthe discussion of these and others, we want to warn the reader of the need for a firm \nunderstanding of the network protocol stack; otherwise go back and look over the \nmaterial in Chapter 1 before continuing. 
We will discuss these protocols and stan­\ndards within the framework of the network protocol stack as follows:\nApplication level security – PGP, S/MIME, S-HTTP, HTTPS, SET, and \n• \nKERBEROS\nTransport level security – SSL and TLS\n• \nNetwork level security – IPSec and VPNs\n• \nLink level security – PPP and RADIUS\n• \n17.2  Application Level Security\nAll the protocols in this section are application layer protocols, which means that \nthey reside on both ends of the communication link. They are all communication \nprotocols ranging from simple text to multimedia including graphics, video, audio, \nand so on. In the last ten years, there has been almost an explosion in the use of \nelectronic communication, both mail and multimedia content, that has resulted \n" }, { "page_number": 377, "text": "366\b\n17  Computer Network Security Protocols\nin booming e-commerce and almost unmanageable personal e-mails, much of \nit private or intended to be private anyway, especially e-mails. Along with this \nexplosion, there has been a growing demand for confidentiality and authenticity \nof private communications. To meet these demands, several schemes have been \ndeveloped to offer both confidentiality and authentication of these communica­\ntions. We will look at four of them here, all in the application layer of the network \nstack. There are PGP and Secure/Multipurpose Internet Mail Extension (S/MIME), \nS-HTTP, HTTPS, and Secure Electronic Transaction (SET) standard. These four \nprotocols and the standards are shown in the application layer of the network stack \nin Fig. 17.1.\n17.2.1  Pretty Good Privacy (PGP)\nThe importance of sensitive communication cannot be underestimated. Sensitive \ninformation, whether in motion in communication channels or in storage, must be \nprotected as much as possible. The best way, so far, to protect such information is to \nencrypt it. In fact, the security that the old snail mail offered was based on a seem­\ningly protective mechanism similar to encryption when messages were wrapped \nand enclosed in envelopes. There was, therefore, more security during the days of \nsnail mail because it took more time and effort for someone to open somebody’s \nmail. First, one had to get access to it, which was no small task. Then one had to \nsteam the envelope in order to open it and seal it later so that it looks unopened after. \nThere were more chances of being caught doing so. Well, electronic communication \nhas made it easy to intercept and read messages in the clear.\nSo encryption of e-mails and any other forms of communication is vital for the \nsecurity, confidentiality, and privacy of everyone. This is where PGP comes in and \nthis is why PGP is so popular today. In fact, currently PGP is one of the popular \nencryption and digital signatures schemes in personal communication.\nPretty Good Privacy (PGP), developed by Phil Zimmermann, is a public-key \ncryptosystem. As we saw in Chapter 9, in public key encryption, one key is kept \nsecret and the other key is made public. Secure communication with the receiving \nparty (with a secret key) is achieved by encrypting the message to be sent using the \nrecipient’s public key. This message then can be decrypted only using the recipi­\nent’s secret key.\nPGP works by creating a circle of trust among its users. In the circle of trust, \nusers, starting with two, form a key ring of public key/name pairs kept by each \nFig. 
17.1  Application Layer Security Protocols and Standard\nPGP\nS/MIME\nS-HTTP\nHTTPS\nSET\nKERBEROS\nTransport Layer\n  Network Layer\n" }, { "page_number": 378, "text": "17.2  Application Level Security\b\n367\nuser. Joining this “trust club” means trusting and using the keys on somebody’s key \nring. Unlike the standard PKI infrastructure, this circle of trust has a built-in weak­\nness that can be penetrated by an intruder. However, since PGP can be used to sign \nmessages, the presence of its digital signature is used to verify the authenticity of \na document or file. This goes a long way in ensuring that an e-mail message or file \njust downloaded from the Internet is both secure and untampered with.\nPGP is regarded as hard encryption, that which is impossible to crack in the \nforeseeable future. Its strength is based on algorithms that have survived exten­\nsive public review and are already considered by many to be secure. Among these \nalgorithms are RSA which PGP uses for encryption, DSS, and Diffie-Hellman \nfor public key encryption; CAST-128, IDEA, and 3DES for conventional encryp­\ntion; and SHA-1 for hashing. The actual operation of PGP is based on five \nservices: authentication, confidentiality, compression, e-mail compatibility, and \nsegmentation [1].\n17.2.1.1  Authentication\nPGP provides authentication via a digital signature scheme. The hash code (MAC) \nis created using a combination of SHA-1 and RSA to provide an effective digital \nsignature. It can also create an alternative signature using DSS and SHA-1. The \nsignatures are then attached to the message or file before sending. PGP, in addition, \nsupports unattached digital signatures. In this case, the signature may be sent sepa­\nrately from the message.\n17.2.1.2  Confidentiality\nPGP provides confidentiality by encrypting messages before transmission. PGP \nencrypts messages for transmission and storage using conventional encryption \nschemes such as CAST-128, IDEA, and 3DES. In each case, a 64-bit cipher feed­\nback mode is used. As in all cases of encryption, there is always a problem of key \ndistribution; so PGP uses a conventional key once. This means for each message \nto be sent, the sender mints a brand new 128-bit session key for the message. The \nsession key is encrypted with RSA or Diffie-Hallman using the recipient’s public \nkey; the message is encrypted using CAST-128 or IDEA or 3DES together with the \nsession key. The combo is transmitted to the recipient. Upon receipt, the receiver \nuses RSA with his or her private key to encrypt and recover the session key which \nis used to recover the message. See Fig. 17.8.\n17.2.1.3  Compression\nPGP compresses the message after applying the signature and before encryption. \nThe idea is to save space.\n" }, { "page_number": 379, "text": "368\b\n17  Computer Network Security Protocols\n17.2.1.4  E-mail Compatibility\nAs we have seen above, PGP encrypts a message together with the signature (if \nnot sent separately) resulting into a stream of arbitrary 8-bit octets. But since many \ne-mail systems permit only use of blocks consisting of ASCII text, PGP accommo­\ndates this by converting the raw 8-bit binary streams into streams of printable ASCII \ncharacters using a radix-64 conversion scheme. On receipt, the block is converted \nback from radix-64 format to binary. If the message is encrypted, then a session key \nis recovered and used to decrypt the message. The result is then decompressed. 
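To make the interplay of these services concrete, here is a minimal Python sketch, under stated assumptions, of the confidentiality, compression, and e-mail compatibility steps just described. It uses RSA-OAEP and AES-GCM from the third-party cryptography package as stand-ins for the RSA/Diffie-Hellman session-key wrapping and the CAST-128/IDEA/3DES message ciphers PGP actually specifies, and Python's base64 in place of PGP's radix-64 armor format; it is an illustration, not PGP itself.

import base64
import os
import zlib

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's key pair (in PGP the public key would come off the recipient's key ring).
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def pgp_style_encrypt(message: bytes) -> bytes:
    compressed = zlib.compress(message)                    # compression service
    session_key = AESGCM.generate_key(bit_length=128)      # one-time session key
    nonce = os.urandom(12)
    body = AESGCM(session_key).encrypt(nonce, compressed, None)   # confidentiality
    wrapped_key = recipient_public.encrypt(session_key, OAEP)     # key wrapped for the recipient
    # E-mail compatibility: armor the 8-bit binary as printable ASCII.
    return base64.b64encode(wrapped_key + nonce + body)

def pgp_style_decrypt(armored: bytes) -> bytes:
    raw = base64.b64decode(armored)                        # strip the ASCII armor
    wrapped_key, nonce, body = raw[:256], raw[256:268], raw[268:]
    session_key = recipient_private.decrypt(wrapped_key, OAEP)    # recover the session key
    compressed = AESGCM(session_key).decrypt(nonce, body, None)
    return zlib.decompress(compressed)                     # undo the compression

armored = pgp_style_encrypt(b"Meet at noon.")
assert pgp_style_decrypt(armored) == b"Meet at noon."

The slicing in pgp_style_decrypt assumes a 2048-bit recipient key, so the wrapped session key occupies the first 256 bytes of the decoded block; a real implementation carries explicit packet lengths, as the OpenPGP message format does.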
If \nthere is a signature, it has to be recovered by recovering the transmitted hash code \nand comparing it to the receiver’s calculated hash before acceptance.\n17.2.1.5  Segmentation\nTo accommodate e-mail size restrictions, PGP automatically segments email mes­\nsages that are too long. However, the segmentation is done after all the housekeeping \nis done on the message, just before transmitting it. So the session key and signature \nappear only once at the beginning of the first segment transmitted. At receipt, the \nreceiving PGP strips off all e-mail headers and re-assembles the original mail.\nPGP’s popularity and use has so far turned out to be less than anticipated because \nof two reasons: first, its development and commercial distribution after Zimmer­\nmann sold it to Network Associates, which later sold it to another company did not \ndo well; second, its open source cousin, the OpenPGP, encountered market prob­\nlems including the problem of ease of use. Both OpenPGP and commercial PGP \nare difficult to use because it is not built into many e-mail clients. This implies that \nany two communicating users who want to encrypt their e-mail using PGP have \nto manually download and install PGP, a challenge and an inconvenience to many \nusers.\n17.2.2  Secure/Multipurpose Internet Mail Extension (S/MIME)\nSecure/Multipurpose Internet Mail Extension (S/MIME) extends the protocols of \nMultipurpose Internet Mail Extensions (MIME) by adding digital signatures and \nencryption to them. To understand S/MIME, let us first make a brief digression and \nlook at MIME. MIME is a technical specification of communication protocols that \ndescribes the transfer of multimedia data including pictures, audio, and video. The \nMIME protocol messages are described in RFC 1521; a reader with further interest \nin MIME should consult RFC 1521. Because Web contents such as files consist \nof hyperlinks that are themselves linked onto other hyperlinks, any e-mail must \ndescribe this kind of inter-linkage. That is what a MIME server does whenever a \nclient requests for a Web document. When the Web server sends the requested file \nto the client’s browser, it adds a MIME header to the document and transmits it [2]. \n" }, { "page_number": 380, "text": "17.2  Application level security\b\n369\nThis means, therefore, that such Internet e-mail messages consist of two parts: the \nheader and the body.\nWithin the header, two types of information are included: MIME type and ­subtype. \nThe MIME type describes the general file type of the transmitted content type such \nas image, text, audio, application, and others. The subtype carries the specific file \ntype such as jpeg or gif, tiff, and so on. For further information on the structure of \na MIME header, please refer to RFC 822. The body may be unstructured or it may \nbe in MIME format which defines how the body of an e-mail message is structured. \nWhat is notable here is that MIME does not provide any security services.\nS/MIME was then developed to add security services that have been missing. It \nadds two cryptographic elements: encryption and digital signatures [1].\n17.2.2.1  Encryption\nS/MIME supports three public key algorithms to encrypt sessions keys for transmis­\nsion with the message. 
These include Diffie-Hallman as the preferred algorithm, \nRSA for both signature and session keys, and triple DES.\n17.2.2.2  Digital Signatures\nTo create a digital signature, S/MIME uses a hash function of either 160-bit SHA-1 \nor MD5 to create message digests. To encrypt the message digests to form a digital \nsignature, it uses either DSS or RSA.\n17.2.3  Secure-HTTP (S-HTTP)\nSecure HTTP (S-HTTP) extends the Hypertext Transfer Protocol (HTTP). When \nHTTP was developed, it was developed for a Web that was simple, that did not \nhave dynamic graphics, and that did not require, at that time, hard encryption for \nend-to-end transactions that have since developed. As the Web became popular for \nbusinesses, users realized that current HTTP protocols needed more cryptographic \nand graphic improvements if it were to remain the e-commerce backbone it had \nbecome.\nResponding to this growing need for security, the Internet Engineering Task \nForce called for proposals that will develop Web protocols, probably based on \ncurrent HTTP, to address these needs. In 1994, such protocol was developed by \nEnterprise Integration Technologies (EIT). IET’s protocols were, indeed, exten­\nsions of the HTTP protocols. S-HHTP extended HTTP protocols by extending \nHTTP’s instructions and added security facilities using encryptions and sup­\nport for digital signatures. Each S-HTTP file is either encrypted, contains a \ndigital certificate, or both. S-HTTP design provides for secure communications, \n" }, { "page_number": 381, "text": "370\b\n17  Computer Network Security Protocols\nprimarily commercial transactions, between a HTTP client and a server. It does \nthis through a wide variety of mechanisms to provide for confidentiality, authen­\ntication, and integrity while separating policy from mechanism. The system is \nnot tied to any particular cryptographic system, key infrastructure, or crypto­\ngraphic format [3].\nHTTP messages contain two parts: the header and the body of the message. The \nheader contains instructions to the recipients (browser and server) on how to process \nthe message’s body. For example, if the message body is of the type like MIME, \nText, or HTML, instructions must be given to display this message accordingly. In \nthe normal HTTP protocol, for a client to retrieve information (text-based message) \nfrom a server, a client-based browser uses HTTP to send a request message to the \nserver that specifies the desired resource. The server, in response, sends a message \nto the client that contains the requested message. During the transfer transaction, \nboth the client browser and the server use the information contained in the HTTP \nheader to negotiate formats they will use to transfer the requested information. Both \nthe server and client browser may retain the connection as long as it is needed, oth­\nerwise the browser may send message to the server to close it.\nThe S-HTTP protocol extends this negotiation between the client browser and \nthe server to include the negotiation for security matters. Hence, S-HTTP uses addi­\ntional headers for message encryption, digital certificates, and authentication in the \nHTTP format which contains additional instructions on how to decrypt the message \nbody. Tables 17.1 and 17.2 show header instructions for both HTTP and S-HTTP. \nThe HTTP headers are encapsulated into the S-HTTP headers. The headers give a \nvariety of options that can be chosen from as a client browser and the server negoti­\nate for information exchange. 
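Stripped of the header syntax, the negotiation described above boils down to each side advertising, for every cryptographic property, the options it supports, and the two sides settling on values they hold in common. The short Python sketch below illustrates only that idea; the property names and option lists are made-up examples, not values taken from the S-HTTP specification.

def negotiate(client_offers: dict, server_offers: dict) -> dict:
    """For each property, pick the first client-preferred value the server also supports."""
    agreed = {}
    for prop, client_values in client_offers.items():
        server_values = server_offers.get(prop, [])
        common = [v for v in client_values if v in server_values]
        if not common:
            raise ValueError(f"no common option for {prop}")
        agreed[prop] = common[0]      # client preference order decides ties
    return agreed

client = {"Key-Exchange-Algorithms": ["RSA", "Kerb-5"],
          "Symmetric-Content-Algorithms": ["DES-CBC", "CAST"]}
server = {"Key-Exchange-Algorithms": ["Kerb-5", "RSA"],
          "Symmetric-Content-Algorithms": ["CAST"]}

print(negotiate(client, server))
# {'Key-Exchange-Algorithms': 'RSA', 'Symmetric-Content-Algorithms': 'CAST'}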
All headers in S-HTTP are optional, except “Content \nType” and “Content-Privacy-Domain.”\nTo offer flexibility, during the negotiation between the client browser and the \nserver, for the cryptographic enhancements to be used, the client and server must \nagree on four parts: property, value, direction, and strength. If agents are unable to \ndiscover a common set of algorithms, appropriate actions are then be taken. Adam \nShastack [2] gives the following example as a negotiation line:\nTable 17.1  S-HTTP protocol headers\nS-HTTP header\nPurpose\nOptions\nContent-Privacy-Domain\nFor compatibility with PEM\n  based secure HTTP\nRSA’s PKCS-7 (Public Key\n  \u0007Cryptography Standard 7, \n“Cryptographic Message\nSyntax Standard,”, RFC-1421 \nstyle PEM, and PGP 2.6 \nformat.\nContent-transfer-encoding\nExplains how the content of\n  the message is encoded\n7, 8 bit\nContent-type\nStandard header\nHTTP\nPrearranged-Key-Info\nInformation about the keys\n  used in the encapsulation\nDEK (data exchange key) used\n  to encrypt this message\n" }, { "page_number": 382, "text": "17.2  Application level security\b\n371\nSHTTP-Key-Exchange-Algorithms: recv-required = RSA, Kerb-5\nThis means that messages received by this machine are required to use Kerberos \n5 or RSA encryption to exchange keys. The choices for the (recv-required) modes \nare (recv  orig)-(optional  required  refused). Where key lengths specifications are \nnecessary in case of variable key length ciphers, this is then specifically referred to \nas cipher[length], or cipher[L1-L2], where length of key is length, or in the case of \nL1-L2, is between L1 and L2, inclusive [2].\nOther headers in the S-HTTP negotiations could be [3, 2]:\nSHTTP-Privacy-Domains\n• \nSHTTP-Certificate-Types\n• \nSHTTP-Key-Exchange-Algorithms\n• \nSHTTP-Signature-Algorithms\n• \nSHTTP-Message-Digest-Algorithms\n• \nSHTTP-Symmetric-Content-Algorithms\n• \nSHTTP-Symmetric-Header-Algorithms\n• \nSHTTP-Privacy-Enhancements\n• \nYour-Key-Pattern\n• \nWe refer a reader interested in more details of these negotiations to Adam \nShastack’s paper.\nWe had pointed out earlier that S-HTTP extends HTTP by adding message \nencryption, digital signature, and message and sender authentication. Let us see \nhow these are incorporated into HTTP to get S-HTTP.\n17.2.3.1  Cryptographic Algorithm for S-HTTP\nS-HTTP uses a symmetric key cryptosystem in the negotiations that prear­\nranges symmetric session keys and a challenge – response mechanism between \nTable 17.2  HTTP headers\nHTTP header\nPurpose\nOptions\nSecurity scheme\nMandatory, specifies protocol name\n  and version\nS-HTTP/1.1\nEncryption identity\nIdentity names the entity for which a\n  \u0007message is encrypted. Permits return \nencryption under public key without \nothers ­signing first.\nDN-1485 and Kerberos\nCertificate info\nAllows a sender to send a public key\n  certificate in a message.\nPKCS-7, PEM\nKey assign (exchange)\nThe message used for actual key\n  exchanges\nKrb-4, Krb-5 (Kerberos)\nNonces\nSession identifiers, used to ­indicate\n  the freshness of a session\n" }, { "page_number": 383, "text": "372\b\n17  Computer Network Security Protocols\n­communicating parties. Before the server can communicate with the client \nbrowser, both must agree upon an encryption key. Normally the process would go \nas follows: The client’s browser would request the server for a page. Along with \nthis request, the browser lists encryption schemes it supports and also includes its \npublic key. 
Upon receipt of the request, the server responds to the client browser \nby sending a list of encryption schemes it also supports. The server may, in addi­\ntion, send the browser a session key encrypted by the client’s browser’s public \nkey, now that it has it. If the client’s browser does not get a session key from the \nserver, it then sends a message to the server encrypted with the server’s public \nkey. The message may contain a session key or a value the server can use to gener­\nate a session key for the communication.\nUpon the receipt of the page/message from the server, the client’s browser, if \npossible, matches the decryption schemes in the S-HTTP headers (recall this was \npart of the negotiations before the transfer), that include session keys, then decrypts \nthe message [3]. In HTTP transactions, once the page has been delivered to the cli­\nent’s browser, the server would disconnect. However, with S-HTTP, the connection \nremains until the client browser requests the server to do so. This is helpful because \nthe client’s browser encrypts each transmission with this session key.\nCryptographic technologies used by S-HTTP include Privacy Enhanced Mail \n(PEM), Pretty Good Privacy (PGP), and Public Key Cryptography Standard 7 \n(PKGS-7). Although S-HTTP uses encryption facilities, non-S-HTTP browsers can \nstill communicate with an S-HTTP server. This is made possible because S-HTTP \ndoes not require that the user pre-establishes public keys in order to participate in \na secure transaction. A request for secure transactions with an S-HTTP server must \noriginate from the client browser. See Fig. 17.1.\nBecause a server can deal with multiple requests from clients browsers, S-HTTP \nsupports multiple encryptions by supporting two transfer mechanisms: one that uses \npublic key exchange, usually referred to as in-band, and one that uses a third party \nPublic Key Authority (PKA) that provides session keys using public keys for both \nclients and servers.\n17.2.3.2  Digital Signatures for S-HTTP\nS-HTTP uses SignedData or SignedAndEnvelopedData signature enhancement of \nPKCS-7 [3]. S-HTTP allows both certificates from a Certificate Authority (CA) and \na self-signed certificate (usually not verified by a third party). If the server requires \na digital certificate from the client’s browser, the browser must attach a certificate \nthen.\n17.2.3.3  Message and Sender Authentication in S-HTTP\nS-HHTP uses an authentication scheme that produces a MAC. The MAC is actually \na digital signature computed from a hash function on the document using a shared \nsecret code.\n" }, { "page_number": 384, "text": "17.2  Application level security\b\n373\n17.2.4  \u0007Hypertext Transfer Protocol over Secure Socket Layer \n(HTTPS)\nHTTPS is the use of Secure Sockets Layer (SSL) as a sub-layer under the regular \nHTTP in the application layer. It is also referred to as Hypertext Transfer Proto­\ncol over Secure Socket Layer (HTTPS) or HTTP over SSL, in short. HTTPS is a \nWeb protocol developed by Netscape, and it is built into its browser to encrypt and \ndecrypt user page requests as well as the pages that are returned by the Web server. \nHTTPS uses port 443 instead of HTTP port 80 in its interactions with the lower \nlayer, TCP/IP. 
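As a quick illustration of what this looks like from the application programmer's side, the short Python sketch below opens a connection to port 443, lets the SSL/TLS layer perform the handshake and certificate verification, and then issues an ordinary HTTP request over the protected channel. It uses only the Python standard library; the host name is a placeholder, not an endpoint discussed in this chapter.

import http.client
import ssl

# The default context verifies the server's certificate chain and host name
# before any HTTP data is exchanged.
context = ssl.create_default_context()

conn = http.client.HTTPSConnection("www.example.com", 443, context=context)
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)      # e.g., 200 OK
print(response.getheader("Content-Type"))
conn.close()

On the wire, the request and the returned page travel inside the TLS record layer, so the exchange is encrypted end to end even though the HTTP conversation itself looks exactly like the plaintext case.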
Probably to understand well how this works, the reader should first \ngo over Section 17.3.1, where SSL is discussed.\n17.2.5  Secure Electronic Transactions (SET)\nSET is a cryptographic protocol developed by a group of companies that included \nVisa, Microsoft, IBM, RSA, Netscape, MasterCard, and others. It is a highly spe­\ncialized system with complex specifications contained in three books with book \none dealing with the business description, book two a programmer’s guide, and \nbook three giving the formal protocol description. Book one spells out the business \nrequirements that include the following [1]:\nConfidentiality of payment and ordering information\n• \nIntegrity of all transmitted data\n• \nAuthentication of all card holders\n• \nAuthenticating that a merchant can accept card transactions based on relationship \n• \nwith financial institution\nEnsuring the best security practices and protection of all legitimate parties in the \n• \ntransaction\nCreating protocols that neither depend on transport security mechanism nor \n• \nprevent their use\nFacilitating and encouraging interoperability among software and network \n• \nproviders.\nOnline credit and debit card activities that must meet those requirements may \ninclude one or more of the following: cardholder registration, merchant registration, \npurchase request, payment authorization, funds transfer, credits reversals, and debit \ncards. For each transaction, SET provides the following services: authentication, \nconfidentiality, message integrity, and linkage [1, 4].\n17.2.5.1  Authentication\nAuthentication is a service used to authenticate every one in the transacting party \nthat includes the customer, the merchant, the bank that issued the customer’s card, \nand the merchant’s bank, using X.509v3 digital signatures.\n" }, { "page_number": 385, "text": "374\b\n17  Computer Network Security Protocols\n17.2.5.2  Confidentiality\nConfidentiality is a result of encrypting all aspects of the transaction to prevent \nintruders from gaining access to any component of the transaction. SET uses DES \nfor this.\n17.2.5.3  Message Integrity\nAgain this is a result of encryption to prevent any kind of modification to the data \nsuch as personal data and payment instructions involved in the transaction. SET \nuses SHA-1 hash codes used in the digital signatures.\n17.2.5.4  Linkage\nLinkage allows the first party in the transaction to verify that the attachment is cor­\nrect without reading the contents of the attachment. This helps a great deal in keep­\ning the confidentiality of the contents.\nSET uses public key encryption and signed certificates to establish the identity \nof every one involved in the transaction and to allow every correspondence between \nthem to be private.\nThe SET protocols involved in a transaction have several representations, but \nevery one of those representations has the following basic facts: the actors and the \npurchase-authorization-payment control flow.\nThe actors involved in every transactions are as follows [1]:\nThe buyer – usually the cardholder\n• \nThe merchant – fellow with the merchandise the buyer is interested in\n• \nThe merchant bank – the financial institution that handles the merchant’s financial \n• \ntransactions\nThe customer bank – usually the bank that issues the card to the customer. \n• \nThis bank also authorizes electronic payments to the merchant account upon \nauthorization of payment request from the customer. 
This bank may sometimes \nset up another entity and charge it with payment authorizations.\nCertificate authority (CA) – that issues X.509v3 certificates to the customer, and \n• \nmerchant.\nPurchase-authorization-payment control flow. This flow is initiated by the cus­\ntomer placing a purchase order to the merchant and is concluded by the customer \nbank sending a payment statement to the customer. The key cryptographic authenti­\ncation element in SET is the dual signature. The dual signature links two messages \n(payment information and order information) intended for two different recipients, \nthe merchant getting merchandise information and the customer bank getting pay­\nment information. The dual signature keeps the two bits of information separate \nletting the intended party see only the part they are authorized to see. The customer \n" }, { "page_number": 386, "text": "17.2  Application level security\b\n375\ncreates a dual signature by hashing the merchandise information and also payment \ninformation using SHA-1, concatenates the two, hashes them again, and encrypts \nthe result using his or her private key before sending them to the merchant. For \nmore details on dual signatures, the reader is referred to Cryptography and Network \nSecurity: Principles and Practice, Second Edition, by William Stallings. Let us now \nlook at the purchase-authorization-payment control flow [1, 4].\nCustomer initiates the transaction by sending to the merchant a purchase order \n• \nand payment information together with a dual signature.\nThe merchant, happy to receive an order from the customer, strips off the merchant \n• \ninformation, verifies customer purchase order using his or her certificate key, and \nforwards the payment information to his or her bank.\nThe merchant bank forwards the payment information from the customer to the \n• \ncustomer bank\nThe customer bank, using the customer’s certificate key, checks and authorizes \n• \nthe payments and informs the merchant’s bank.\nThe merchant’s bank passes the authorization to the merchant, who releases the \n• \nmerchandise to the customer.\nThe customer bank bills the customer.\n• \n17.2.6  Kerberos\nKerberos is a network authentication protocol. It is designed to allow users, cli­\nents and servers, authenticate themselves to each other. This mutual authentication \nis done using secret-key cryptography. Using secret-key encryption, or as it is com­\nmonly known, conventional encryption, a client can prove its identity to a server \nacross an insecure network connection. Similarly, a server can also identify itself \nacross the same insecure network connection. Communication between the client \nand the server can be secure after the client and server have used Kerberos to prove \ntheir identities. From this point on, subsequent communication between the two can \nbe encrypted to ensure privacy and data integrity.\nIn his paper “The Moron’s Guide to Kerberos, Version 1.2.2,” Brian Tung [5], in \na simple but interesting example, likens the real-life self-authentication we always \ndo with the presentation of driver licenses on demand, to that of Kerberos.\nKerberos client/server authentication requirements are as follows [2]:\nSecurity – that Kerberos is strong enough to stop potential eavesdroppers from \n• \nfinding it to be a weak link.\nReliability – that Kerberos is highly reliable, employing a distributed server \n• \narchitecture where one server is able to back up another. 
17.2.6  Kerberos
Kerberos is a network authentication protocol. It is designed to allow users, clients, and servers to authenticate themselves to each other. This mutual authentication is done using secret-key cryptography. Using secret-key encryption, or conventional encryption as it is commonly known, a client can prove its identity to a server across an insecure network connection. Similarly, a server can also identify itself across the same insecure network connection. Communication between the client and the server can be secure after the client and server have used Kerberos to prove their identities. From this point on, subsequent communication between the two can be encrypted to ensure privacy and data integrity.
In his paper "The Moron's Guide to Kerberos, Version 1.2.2," Brian Tung [5] gives a simple but interesting example that likens Kerberos to the real-life self-authentication we routinely perform by presenting a driver's license on demand.
Kerberos client/server authentication requirements are as follows [2]:
• Security – that Kerberos is strong enough to stop potential eavesdroppers from finding it to be a weak link.
• Reliability – that Kerberos is highly reliable, employing a distributed server architecture where one server is able to back up another. This means that the Kerberos system is fail-safe, degrading gracefully when a server fails.
• Transparency – that users are not aware that authentication is taking place beyond providing passwords.
• Scalability – that Kerberos systems accept and support new clients and servers.
To meet these requirements, Kerberos designers proposed a trusted third-party authentication service to arbitrate between the client and server in their mutual authentication. Figure 17.2 shows the interaction between the three parties.
Fig. 17.2  Kerberos Authentication System (the user and the Kerberos authenticating server exchange a request, a ticket-granting ticket containing a master session key, and an access ticket containing a session key)
The actual Kerberos authentication process is rather complex, probably more complex than all the protocols we have seen so far in this chapter. So to help the reader grasp the concept, we are going to follow what many writers on Kerberos have done and go via an example and Fig. 17.2. And here we go [2]:
On a Kerberos network, suppose user A wants to access a document on server B. The two principals in the transaction do not trust each other, so the server must demand assurances that A is who he or she says he or she is. Just as in real life, when somebody you are seeking a service from demands proof of what you claim to be by asking you to pull out a driver's license with a picture of you on it, Kerberos also demands proof. In Kerberos, however, A must present a ticket to B. The ticket is issued by a Kerberos authentication server (AS). Both A and B trust the AS. So A, anticipating that B will demand proof, prepares for it by digitally signing the request to access the document held by B with A's private key and encrypting the request with B's public key. A then sends the encrypted request to AS, the trusted server. Upon receipt of the request, AS verifies that it is A who sent the request by analyzing A's digital signature. It also checks A's access rights to the requested document; AS has those lists for all the servers in the Kerberos system. AS then mints a ticket that contains a session key and B's access information, uses A's public key to encrypt it, and sends it to A. In addition, AS mints a similar ticket for B which contains the same information as that of A. This ticket is transmitted to B. Now AS's job is almost done after connecting both A and B; they are now on their own. After the connection, both A and B compare their tickets for a match. If the tickets match, the AS has done its job, and A and B start communicating as A accesses the requested document on B. At the end of the session, B informs AS to rescind the ticket for this session. Now if A wants to communicate with B again for whatever request, a new ticket for the session is needed.
17.2.6.1  Ticket-Granting Ticket
The Kerberos system may have more than one AS. While the method just described works well, it is not secure: the ticket stays in use for some time and is, therefore, susceptible to hacking. To tackle this problem, Kerberos systems use another approach that is more secure.
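Before turning to that second approach, the following toy model sketches the basic, single-ticket exchange just described. The class names, message format, and the seal/open helpers are hypothetical stand-ins (an HMAC-sealed blob takes the place of the signatures and public-key encryption mentioned above); real Kerberos, defined in RFC 4120, is a symmetric-key protocol with a far richer message format.

import os, hmac, hashlib, json

def seal(key: bytes, payload: dict) -> bytes:
    # Bind a payload to a principal's key: payload plus an HMAC tag
    # (a simplified stand-in for encrypting the ticket for that principal).
    blob = json.dumps(payload, sort_keys=True).encode()
    return blob + hmac.new(key, blob, hashlib.sha256).digest()

def open_sealed(key: bytes, ticket: bytes) -> dict:
    blob, tag = ticket[:-32], ticket[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, blob, hashlib.sha256).digest()):
        raise ValueError("ticket rejected")
    return json.loads(blob)

class AuthServer:
    """The trusted AS: it knows every principal's key and B's access list."""
    def __init__(self):
        self.keys = {"A": os.urandom(32), "B": os.urandom(32)}
        self.access = {"B": {"A"}}                      # A may read documents on B

    def mint_tickets(self, client, server):
        if client not in self.access.get(server, ()):
            raise PermissionError("access denied")
        info = {"client": client, "server": server,
                "session_key": os.urandom(16).hex()}    # minted per session
        return seal(self.keys[client], info), seal(self.keys[server], info)

AS = AuthServer()
ticket_a, ticket_b = AS.mint_tickets("A", "B")
# A and B each open their own ticket and compare the enclosed session keys;
# a match means the AS has vouched for both ends and the session can proceed.
assert open_sealed(AS.keys["A"], ticket_a)["session_key"] == \
       open_sealed(AS.keys["B"], ticket_b)["session_key"]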
This second approach uses the first approach as the \nbasis. An authentication process goes as follows:\nA, in need of accessing a document on server B, sends an access request text \nin the clear to AS for a ticket-granting ticket, which is a master ticket for the login \nprocess with B. AS, using a shared secret such as a password, verifies A and sends \nA a ticket-granting ticket. A then uses this ticket-granting ticket instead of A’s public \nkey to send to AS for a ticket for any resource from any server A wants. To send \nthe ticket to A, the AS encrypts it with a master session key contained in the ticket-\ngranting ticket.\nWhen a company has distributed its computing by having Kerberos networks \nin two separate areas, there are ways to handle situations like this. One Kerberos \nsystem in one network can authenticate a user in another Kerberos network. Each \nKerberos AS operates in a defined area of the network called a realm. To extend the \nKerberos’ authentication across realms, an inter-realm key is needed that allows cli­\nents authenticated by an AS in one realm to use that authentication in another realm. \nRealms sharing inter-realm keys can communicate.\nKerberos can run on top of an operating system. In fact it is a default protocol \nin Windows 2000 and later versions of Windows. In a non-native environment, \n" }, { "page_number": 389, "text": "378\b\n17  Computer Network Security Protocols\nKerberos must be “kerberized” by making server and client software make calls to \nKerbros library.\n17.3  Security in the Transport Layer\nUnlike the last five protocols we have been discussing in the previous section, in \nthis section we look at protocols that are a little deeper in the network infrastruc­\nture. They are at the level below the application layer. In fact they are at the trans­\nport layer. Although several protocols are found in this layer, we are only going \nto discuss two: Secure Socket Layer (SSL) and Transport Layer Security (TLS). \nCurrently, however, these two are no longer considered as two separate protocols \nbut one under the name SSL/TLS, after the SSL standardization was passed over \nto IETF, by the Netscape consortium, and Internet Engineering Task Force (IETF) \nrenamed it TLS. Figure 17.3 shows the position of these protocols in the network \nprotocol stack.\n17.3.1  Secure Socket Layer (SSL)\nSSL is a widely used general purpose cryptographic system used in the two major \nInternet browsers: Netscape and Explorer. It was designed to provide an encrypted \nend-to-end data path between a client and a server regardless of platform or OS. \nSecure and authenticated services are provided through data encryption, server \nauthentication, message integrity, and client authentication for a TCP connection \nthrough HTTP, LDAP, or POP3 application layers. It was originally developed by \nNetscape Communications and it first appeared in a Netscape Navigator browser in \n1994. The year 1994 was an interesting year for Internet security because during the \nsame year, a rival security scheme to SSL, the S-HTTP, was launched. Both systems \nwere designed for Web-based commerce. Both allow for the exchange of multiple \nmessages between two processes and use similar cryptographic schemes such as \ndigital envelopes, signed certificates, and message digest.\nAlthough these two Web giants had much in common, there are some differences \nin design goals, implementation, and acceptance. First, S-HTTP was designed to \nwork with only Web protocols. 
Because SSL is at a lower level in the network stack \nthan S-HTTP, it can work in many other network protocols. Second, in terms of \nimplementation, since SSL is again at a lower level than S-HTTP, it is implemented \nas a replacement for the sockets API to be used by applications requiring secure \nApplication Layer\nSSL\nTLS\nNetwork Layer\nFig. 17.3  Transport Layer \nSecurity Protocols and \nStandards\n" }, { "page_number": 390, "text": "17.3  Security in the Transport Layer\b\n379\ncommunications. On the other hand, S-HTTP has its data passed in named text \nfields in the HTTP header. Finally in terms of distribution and acceptance, history \nhas not been so good to S-HTTP. While SSL was released in a free mass circulat­\ning browser, the Netscape Navigator, S-HTTP was released in a much smaller and \nrestricted NCSA Mosaic. This unfortunate choice doomed the fortunes of S-HTTP.\n17.3.1.1  SSL Objectives and Architecture\nThe stated SSL objectives were to secure and authenticate data paths between serv­\ners and clients. These objectives were to be achieved through several services that \nincluded data encryption, server and client authentication, and message integrity [6]:\nData encryption – to protect data in transport between the client and the server \n• \nfrom interception and could be read only by the intended recipient.\nServer and client authentication – the SSL protocol uses standard public key \n• \nencryption to authenticate the communicating parties to each other.\nMessage integrity – achieved through the use of session keys so that data cannot \n• \nbe either intentionally or unintentionally tampered with.\nThese services offer reliable end-to-end secure services to Internet TCP con­\nnections and are based on an SSL architecture consisting of two layers: the top \nlayer, just below the application layer, that consists of three protocols, namely the \nSSL Handshake protocol, the SS Change Cipher Specs Protocol, and the SSL Alert \nprotocol. Below these protocols is the second SSL layer, the SSL Record Protocol \nlayer, just above the TCP layer. See Fig. 17.4.\nFig. 17.4  The SSL Protocol \nStack\nIP Protocol\nTCP Protocol\nSSL Record Protocol\nChange\nCipher\nSpecification\nSSL Alert\nProtocol\nSSL\nHandshake\nProtocol\nApplication Layer\nSSL\n" }, { "page_number": 391, "text": "380\b\n17  Computer Network Security Protocols\n17.3.1.2  The SSL Handshake\nBefore any TCP connection between a client and a server, both running under SSL, is \nestablished, there must be almost a process similar to a three-way handshake we dis­\ncussed in Section 3.2.2. This get-to-know-you process is similarly called the SSL hand­\nshake. During the handshake, the client and server perform the following tasks [7]:\nEstablish a cipher suite to use between them.\n• \nProvide mandatory server authentication through the server sending its certificate \n• \nto the client to verify that the server’s certificate was signed by a trusted CA.\nProvide optional client authentication, if required, through the client sending its \n• \nown certificate to the server to verify that the client’s certificate was signed by a \ntrusted CA. The CA may not be the same CA who signed the client’s certificate. \nCAs may come from a list of trusted CAs. 
The reason for making this step \noptional was a result of realization that since few customers are willing, know \nhow, or care to get digital certificates, requiring them to do this would amount \nto locking a huge number of customers out of the system which would not make \nbusiness sense. This, however, presents some weaknesses to the system.\nExchange key information using public key cryptography, after mutual \n• \nauthentication, that leads to the client generating a session key (usually a random \nnumber) which, with the negotiated cipher, is used in all subsequent encryption \nor decryption. The customer encrypts the session key using the public key of \nthe merchant server (from the merchant’s certificate). The server recovers the \nsession key by decrypting it using its private key. This symmetric key, which \nnow both parties have, is used in all subsequent communication.\n17.3.1.3 SSL  Cipher Specs Protocol\nThe SSL Cipher Specs protocol consists of an exchange of a single message in a \nbyte with a value of 1 being exchanged, using the SSL record protocol (see Section \n17.3.1.4), between the server and client. The bit is exchanged to establish a pending \nsession state to be copied into the current state, thus defining a new set of protocols \nas the new agreed on session state.\n17.3.1.4  SSL Alert Protocol\nThe SSL Alert protocol, which also runs over the SSL Record protocol, is used by \nthe two parties to convey session warning messages associated with data exchange \nand functioning of the protocol. The warnings are used to indicate session problems \nranging from unknown certificate, revoked certificate, and expired certificate to \nfatal error messages that can cause immediate termination of the SSL connection. \nEach message in the alert protocol sits within two bytes, with the first byte taking a \nvalue of (1) for a warning and (2) for a fatal error. The second byte of the message \ncontains one of the defined error codes that may occur during an SSL communica­\ntion session [6]. For further working of these error codes, see [6].\n" }, { "page_number": 392, "text": "17.3  Security in the Transport Layer\b\n381\n17.3.1.5  SSL Record Protocol\nThe SSL record protocol provides SSL connections two services: confidentiality \nand message integrity [2]:\nConfidentiality\n• \n is attained when the handshake protocol provides a shared secret \nkey used in the conventional encryption of SSL messages.\nMessage integrity\n• \n is attained when the handshake defines a secret shared key \nused to form a message authentication code (MAC).\nIn providing these services, the SSL Record Protocol takes an application mes­\nsage to be transmitted and fragments the data that needs to be sent, compresses it, \nadds a MAC, encrypts it together with the MAC, adds an SSL header, and transmits \nit under the TCP protocol. The return trip undoes these steps. The received data is \ndecrypted, verified, and decompressed before it is forwarded to higher layers. The \nrecord header that is added to each data portion contains two elementary pieces of \ninformation, namely, the length of the record and the length of the data block added \nto the original data. See Fig. 17.5.\nThe MAC, computed from a hash algorithm such as MD5 or SHA-1 as \nMAC = Hash function [secret key, primary data, padding, sequence number], is \nFig. 
17.5  SSL Record \nProtocol Operation Process\nApplication Data\nData Unit 1\nData Unit 2\nCompressed Data\nEncrypted Data with a MAC\nText\nEncryption\nMAC\n" }, { "page_number": 393, "text": "382\b\n17  Computer Network Security Protocols\nused to verify the integrity of the message included in the transmitted record. The \nverification is done by the receiving party computing its own value of the MAC and \ncomparing it with that received. If the two values match, this means that data has not \nbeen modified during the transmission over the network.\nSSL protocols are widely used in all Web applications and any other TCP con­\nnections. Although they are mostly used for Web applications, they are gaining \nground in e-mail applications also.\n17.3.2  Transport Layer Security (TLS)\nTransport Layer Security (TLS) is the result of the 1996 Internet Engineering Task \nForce (IETF) attempt at standardization of a secure method to communicate over \nthe Web. The 1999 outcome of that attempt was released as RFC 2246 spelling out \na new protocol – the Transport Layer Security or TLS. TLS was charged with pro­\nviding security and data integrity at the transport layer between two applications. \nTLS version 1.0 was an evolved SSL 3.0. So, as we pointed out earlier, TLS is the \nsuccessor to SSL 3.0. Frequently, the new standard is referred to as SSL/TLS.\nSince then, however, the following additional features have been added [6]:\nInteroperability\n• \n – ability to exchange TLS parameters by either party, with no \nneed for one party to know the other’s TLS implementation details.\nExpandability\n• \n – to plan for future expansions and accommodation of new \nprotocols.\n17.4  Security in the Network Layer\nIn the previous section, we discussed protocols in the transport part of the stack that \nare being used to address Internet communication security. In this section, we are \ngoing one layer down, to the Network layer and also look at the protocols and prob­\nably standards that address Internet communication security. In this layer, we will \naddress IPSec and VPN technologies shown in Fig. 17.6.\n17.4.1  Internet Protocol Security (IPSec)\nIPSec is a suite of authentication and encryption protocols developed by the Inter­\nnet Engineering Task Force (IETF) and designed to address the inherent lack of \nApplication Layer\nTransport Layer\nIPSec\nVPN\nFig. 17.6  Network Layer \nSecurity Protocols and \nStandards\n" }, { "page_number": 394, "text": "17.4  Security in the Network Layer\b\n383\nsecurity for IP-based networks. IPSec, unlike other protocols we have discussed \nso far, is a very complex set of protocols described in a number of RFCs including \nRFC 2401 and 2411. It runs transparently to transport layer and application layer \nprotocols which do not see it. Although it was designed to run in the new version of \nthe Internet Protocol, IP Version 6 (IPv6), it has also successfully run in the older \nIPv4 as well. IPSec sets out to offer protection by providing the following services \nat the network layer:\nAccess control – to prevent an unauthorized access to the resource.\n• \nConnectionless integrity – to give an assurance that the traffic received has not \n• \nbeen modified in any way.\nConfidentiality – to ensure that Internet traffic is not examined by nonauthorized \n• \nparties. 
This requires all IP datagrams to have their data field, TCP, UDP, ICMP, or any other datagram data field segment, encrypted.
• Authentication – particularly source authentication, so that when a destination host receives an IP datagram with a particular IP source address, it is possible to be sure that the IP datagram was indeed generated by the host with that source IP address. This prevents spoofed IP addresses.
• Replay protection – to guarantee that each packet exchanged between two parties is different.
IPSec achieves these objectives by dividing the protocol suite into two main protocols: the Authentication Header (AH) protocol and the Encapsulating Security Payload (ESP) protocol [8]. The AH protocol provides source authentication and data integrity but no confidentiality. The ESP protocol provides authentication, data integrity, and confidentiality. Any datagram from a source must be secured with either AH or ESP. Figures 17.7 and 17.8 show IPSec's AH and ESP protections, respectively.
17.4.1.1  Authentication Header (AH)
The AH protocol provides source authentication and data integrity but not confidentiality. This is done by a source that wants to send a datagram first establishing an SA through which the source can send the datagram. A source datagram includes an AH inserted between the original IP datagram data and the IP header to shield the data field, which is now encapsulated as a standard IP datagram. See Fig. 17.7. Upon receipt of the IP datagram, the destination host notices the AH and processes it using the AH protocol. Intermediate hosts such as routers, however, do their usual job of examining every datagram for the destination IP address and then forwarding it on.
Fig. 17.7  IPSec's AH Protocol Protection (an Authentication Header between the IP header and the protected data provides authentication and message integrity)
17.4.1.2  Encapsulating Security Payload (ESP)
Unlike the AH protocol, the ESP protocol provides source authentication, data integrity, and confidentiality. This has made ESP the most commonly used IPSec header. Similar to AH, ESP begins with the source host establishing an SA which it uses to send secure datagrams to the destination. Datagrams are secured by ESP by surrounding their original IP datagrams with new header and trailer fields, all encapsulated into a new IP datagram. See Fig. 17.8. Confidentiality is provided by DES_CBC encryption. Next to the ESP trailer field on the datagram is the ESP Authentication Data field.
Fig. 17.8  IPSec's ESP Protocol Protection (an ESP header and ESP trailer surround the protected data, providing confidentiality in addition to authentication and message integrity)
17.4.1.3  Security Associations
In order to perform the security services that IPSec provides, IPSec must first get as much information as possible on the security arrangement of the two communicating hosts. Such security arrangements are called security associations (SAs). A security association is a unidirectional security arrangement defining a set of items and procedures that must be shared between the two communicating entities in order to protect the communication process.
Recall from Chapter 1 that in the usual network IP connections, the network layer IP is connectionless. However, with security associations, IPSec creates logical connection-oriented channels at the network layer. This logical connection-oriented channel is created by a security agreement established between the two hosts stating specific algorithms to be used by the sending party to ensure confidentiality (with ESP), authentication, message integrity, and anti-replay protection.
Since each SA establishes a unidirectional channel, for a full-duplex communication between two parties, two SAs must be established. An SA is defined by the following parameters [2, 9]:
• Security Parameter Index (SPI) – a 32-bit connection identifier of the SA. For each association between a source and destination host, there is one SPI that is used by all datagrams in the connection to provide information to the receiving device on how to process the incoming traffic.
• IP Destination Address – the address of the destination host.
• A Security Protocol (AH or ESP) to be used, specifying whether traffic is to be provided with integrity and secrecy. The protocol also defines the key size, key lifetime, and the cryptographic algorithms.
• Secret key – which defines the keys to be used.
• Encapsulation mode – defining how encapsulation headers are created and which parts of the header and user traffic are protected during the communication process.
Figure 17.9 shows the general concept of a security association.
Fig. 17.9  A General Concept of IPSec's Security Association (an IP secure tunnel between Computer A and Computer B, defined by the encapsulation mode, security protocol, secret key, destination address, and security parameter index)
17.4.1.4  Transport and Tunnel Modes
The security associations discussed above are implemented in two modes: transport and tunnel. This means that IPSec is operating in two modes. Let us look at these [2].
Transport Mode
Transport mode provides host-to-host protection to higher layer protocols in the communication between two hosts in both IPv4 and IPv6. In IPv4, the protected area is the area beyond the IP address, as shown in Fig. 17.10. In IPv6, the new extension of IPv4, the protection includes the upper protocols, the IP address, and any IPv6 header extensions, also as shown in Fig. 17.10. The IP addresses of the two IPSec hosts are in the clear because they are needed in routing the datagram through the network.
Tunnel Mode
Tunnel mode offers protection to the entire IP datagram, in both AH and ESP, between two IPSec gateways. This is possible because of the added new IP header in both IPv4 and IPv6, as shown in Fig. 17.11. Between the two gateways, the datagram is secure and the original IP address is also secure. However, beyond the gateways, the datagram may not be secure. Such protection is created when the first IPSec gateway encapsulates the datagram, including its IP address, into a new shielding datagram with a new IP address, that of the receiving IPSec gateway. At the receiving gateway, the new datagram is unwrapped and brought back to the original datagram.
This \ndatagram, based on its original IP address, can be passed on further by the receiving \ngateway, but from this point on unprotected.\n17.4.1.5  Other IPsec Issues\nAny IPSec compliant system must support single-DES, MD5, and SHA-1 as an \nabsolute minimum; this ensures that a basic level of inter-working is possible with \ntwo IPSec compliant units at each end of the link. Since IPSec sits between the \nNetwork and Transport layers, the best place for its implementation is mainly in \nhardware.\nFig. 17.10  IPSec’s Transport Mode\nIP Header\nIPSec\nTCP/UDP\nHeader\nProtected \nData\nESP\nTrailer\nFig. 17.11  IPSec’s Tunnel Mode\nIP Header\nIPSec\nIP Header\nTCP/UDP\nHeader\nProtected \nData\nESP\nTrailer\n" }, { "page_number": 398, "text": "17.4  Security in the Network Layer\b\n387\n17.4.2  Virtual Private Networks (VPN)\nA VPN is a private data network that makes use of the public telecommunication \ninfrastructure, such as the Internet, by adding security procedures over the unse­\ncure communication channels. The security procedures that involve encryption are \nachieved through the use of a tunneling protocol. There are two types of VPNs: \nremote access which lets single users connect to the protected company network \nand site-to-site which supports connections between two protected company net­\nworks. In either mode, VPN technology gives a company the facilities of expensive \nprivate leased lines at much lower cost by using the shared public infrastructure like \nthe Internet. See Fig. 17.12.\nFigure 17.8 shows two components of a VPN [10]:\nTwo terminators which are either software or hardware. These perform encryption, \n• \ndecryption and authentication services. They also encapsulate the information.\nA tunnel – connecting the end-points. The tunnel is a secure communication link \n• \nbetween the end-points and networks such as the Internet. In fact this tunnel is \nvirtually created by the end-points.\nVPN technology must do the following activities:\nIP encapsulation – this involves enclosing TCP/IP data packets within another \n• \npacket with an IP-address of either a firewall or a server that acts as a VPN end-\npoint. This encapsulation of host IP-address helps in hiding the host.\nEncryption – is done on the data part of the packet. Just like in SSL, the encryption \n• \ncan be done either in transport mode which encrypts its data at the time of \ngeneration or tunnel mode which encrypts and decrypts data during transmission \nencrypting both data and header.\nAuthentication – involves creating an encryption domain which includes \n• \nauthenticating computers and data packets by use for public encryption.\nVPN technology is not new; phone companies have provided private shared \nresources for voice messages for over a decade. However, its extension to making \nFig. 17.12  Virtual Private Network (VPN) Model\nLaptop\nLaptop\nRouter\nLaptop\nLaptop\nRouter\nFirewall with IPSec\nFirewall with IPSec\nTunnel\nProtected Network\nProtected Network\n" }, { "page_number": 399, "text": "388\b\n17  Computer Network Security Protocols\nit possible to have the same protected sharing of public resources for data is new. \nToday, VPNs are being used for both extranets and wide-area intranets. 
Probably \nowing to cost savings, the popularity of VPNs by companies has been phenomenal.\n17.4.2.1  Types of VPNs\nThe security of VPN technologies falls into three types: trusted VPNs, secure VPNs, \nand hybrid VPNs.\nTrusted VPNs\nAs we have noted above, before the Internet, VPN technology consisted of one or \nmore circuits leased from a communications provider. Each leased circuit acted like \na single wire in a network that was controlled by a customer who could use these \nleased circuits in the same way that he or she used physical cables in his or her \nlocal network. So this legacy VPN provided customer privacy to the extent that the \ncommunications provider assured the customer that no one else would use the same \ncircuit. Although leased circuits ran through one or more communications switches, \nmaking them susceptible to security compromises, a customer trusted the VPN pro­\nvider to safeguard his or her privacy and security by maintaining the integrity of the \ncircuits. This security based on trust resulted into what is now called trusted VPNs.\nTrusted VPN technology comes in many types. The most common of these types \nare what is referred to as layer 2 and layer 3 VPNs. Layer 2 VPNs include: ATM cir­\ncuits, frame relay circuits, and transport of layer 2 frames over Multiprotocol Layer \nSwitching (MPLS). Layer 3 VPNs include MPLS with constrained distribution of \nrouting information through Border Gateway Protocol (BGP) [11].\nBecause the security of trusted VPNs depends only on the goodwill of the pro­\nvider, the provider must go an extra mile to assure the customers of the security \nresponsibility requirements they must expect. Among these security requirements \nare the following [11]:\nNo one other than the trusted VPN provider can affect the creation or modification \n• \nof a path in the VPN. Since the whole trust and value of trusted VPN security \nrides on the sincerity of the provider, no one other than the provider can change \nany part of the VPN.\nNo one other than the trusted VPN provider can change data, inject data, or \n• \ndelete data on a path in the VPN. To enhance the trust of the customer, a trusted \nVPN should secure not only a path, but also the data that flows along that path. \nSince this path can be one of the shared paths by the customers of the provider, \neach customer’s path itself must be specific to the VPN and no one other than the \ntrusted provider can affect the data on that path.\nThe routing and addressing used in a trusted VPN must be established before the \n• \nVPN is created. The customer must know what is expected of the customer and \n" }, { "page_number": 400, "text": "17.4  Security in the Network Layer\b\n389\nwhat is expected of the service provider so that they can plan for maintaining the \nnetwork that they are purchasing.\nSecure VPNs\nSince the Internet is a popular public communication medium for almost everything \nfrom private communication to businesses and the trusted VPN actually offers only \nvirtual security, security concerns in VPN have become urgent. To address these \nconcerns, vendors have started creating protocols that would allow traffic to be \nencrypted at the edge of one network or at the originating computer, moved over the \nInternet like any other data, and then decrypted when it reaches the corporate net­\nwork or a receiving computer. This way it looks like encrypted traffic has traveled \nthrough a tunnel between the two networks. 
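A minimal sketch of this encrypt-at-the-edge behavior is given below. It uses the Fernet authenticated-encryption recipe from the third-party cryptography package as a stand-in for a real VPN cipher suite; the gateway objects and the sample packet are invented for illustration.

from cryptography.fernet import Fernet, InvalidToken

site_key = Fernet.generate_key()          # provisioned at both edge gateways
gateway_a = Fernet(site_key)              # edge of the sending network
gateway_b = Fernet(site_key)              # edge of the receiving network

packet = b"internal traffic: payroll record 1042"
ciphertext = gateway_a.encrypt(packet)    # what actually crosses the Internet

assert gateway_b.decrypt(ciphertext) == packet   # readable only at the far edge

tampered = bytearray(ciphertext)
tampered[10] ^= 0x01                      # an attacker flips one bit in transit
try:
    gateway_b.decrypt(bytes(tampered))
except InvalidToken:
    print("modified traffic detected and rejected")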
Between the source and the destination \npoints, although the data is in the clear and even an attacker can see the traffic, still \none cannot read it and one cannot change the traffic without the changes being seen \nby the receiving party and therefore rejected. Networks that are constructed using \nencryption are called secure VPNs.\nAlthough secure VPNs are more secure than trusted VPNs, they too require \nassurance to the customer just like trusted VPNs. These requirements are as fol­\nlows [11]:\nAll traffic on the secure VPN must be encrypted and authenticated.\n• \n In order for \nVPNs to be secure, there must be authentication and encryption. The data is \nencrypted at the sending network and decrypted at the receiving network.\nThe security properties of the VPN must be agreed to by all parties in the VPN.\n• \n \nEvery tunnel in a secure VPN connects two endpoints who must agree on the \nsecurity properties before the start of data transmission.\nNo one outside the VPN can affect the security properties of the VPN.\n• \n To make it \ndifficult for an attacker, VPN security properties must not be changed by anyone \noutside the VPN.\nHybrid VPNs\nHybrid VPN is the newest type of VPN technologies that substitutes the Internet \nfor the telephone system as the underlying structure for communications. The \ntrusted VPN components of the new VPN still do not offer security but they \ngive customers a way to easily create network segments for wide area networks \n(WANs). On the other hand, the secure VPN components can be controlled from \na single place and often come with guaranteed quality-of-service (QoS) from the \nprovider.\nBecause of the inherited weaknesses from both components that make up this \nnew hybrid VPN, a number of security requirements must be adhered to. Among the \nrequirements is to clearly mark the address boundaries of the secure VPN within the \n" }, { "page_number": 401, "text": "390\b\n17  Computer Network Security Protocols\ntrusted VPN because in hybrid VPNs, the secure VPNs segments can run as subsets \nof the trusted VPN and vice versa. Under these circumstances, the hybrid VPN is \nsecure only in the parts that are based on secure VPNs.\nVPN Tunneling Technology\nOld VPN firewalls used to come loaded with software that constructed the tunnel. \nHowever, with new developments in tunneling technology, this is no longer the \ncase. Let us now look at some different technologies that can be used to construct \nVPN tunnels:\nIPsec with encryption\n• \n used in either tunnel or transport modes. Since IPSec \ncannot authenticate users, IPSec alone is not good in some host-to-network VPN \n[10]. However, this is usually overcome by supplementing IPSec with other \nauthentication methods such as Kerberos. In combination with Internet Key \nExchange (IKE) which provides a trusted public key exchange, IPSec is used to \nencrypt data between networks and between hosts and networks. 
According to \nHoden, the sequence of events in the process of establishing an IPSec/IKE VPN \nconnection goes as follows:\nThe host/gateway at one end of a VPN sends a request to the host/gateway at \n• \nthe other end to establish a VPN connection.\nThe remote host/gateway generates a random number and sends a copy of it \n• \nto the requesting host/gateway\nThe requesting host/gateway, using this random number, encrypts its pre-\n• \nshared key it got from the IKE (shared with the remote host/gateway) and \nsends it back to the remote host/gateway.\nThe remote host/gateway also uses its random number and decrypts its pre-\n• \nshared key and compares the two keys for a match. If there is a match with \nanyone of its keys on the keyring, then it decrypts the public key using this \npre-shared key and sends the public key to the requesting host/gateway.\nFinally, the requesting host/gateway uses the public key to establish the IPSec \n• \nsecurity association (SA) between the remote host/gateway and itself. This \nexchange establishes the VPN connection. See Fig. 17.13.\nPoint-to-Point Tunneling protocol (PPTP). This is a Microsoft-based dial-up \n• \nprotocol used by remote users seeking a VPN connection with a network. It is an \nolder technology with limited use.\nLayer 2 Tunneling Protocol [\n• \nL2TP inside IPsec (see RFC 3193)]. This is an \nextension of PPP, a dial-up technology. Unlike PPTP which uses Microsoft dial-\nup encryption, L2TP uses IPSec in providing secure authentication of remote \naccess. L2TP protocol makes a host connect to a modem and then it makes a \nPPP to the data packets to a host/gateway where it is unpacked and forwarded to \nthe intended host.\nPPP over SSL and PPP over SSH. These are Unix-based protocols for constructing \n• \nVPNs.\n" }, { "page_number": 402, "text": "17.5  Security in the Link Layer and over LANS\b\n391\n17.5 Security in the Link Layer and over LANS\nFinally, our progressive survey of security protocols and standards in the network \nsecurity stack ends with a look at the security protocols and standards in the Data \nLink Layer. In this layer, although there are several protocols including those applied \nin the LAN technology, we will look at only two: PPP and RADIUS. Figure 17.14 \nshows the position of these protocols in the stack.\n17.5.1  Point-to-Point Protocol (PPP)\nThis is an old protocol because early Internet users used to dial into the Internet \nusing a modem and PPP. It is a protocol limited to a single data link. Each call in \nwent directly to the remote access serve (RAS) whose job was to authenticate the \ncalls as they came in.\nA PPP communication begins with a handshake which involves a negotiation \nbetween the client and the RAS to settle the transmission and security issues before \nthe transfer of data could begin. This negotiation is done using the Link Control \nProtocol (LCP). Since PPP does not require authentication, the negotiation may \nresult in an agreement to authenticate or not to authenticate.\nFig. 17.13  Establishing a VPN Using IPSec and IKE\nLaptop\nRouter\nLaptop\nLaptop\nRouter\nRemote Firewall\nRequesting Firewall\nTunnel\nProtected Network A\nProtected Network B\nApplication Layer\nTransport Layer\nNetwork Layer\nPPP\nRADIUS\nTACACS+\nFig. 
17.14  Data Link Layer \nSecurity Protocols and \nStandards\n" }, { "page_number": 403, "text": "392\b\n17  Computer Network Security Protocols\n17.5.1.1  PPP Authentication\nIf authentication is the choice, then several authentication protocols can be used. \nAmong these are Password Authentication Protocol (PAP), Challenge-Handshake \nAuthentication Protocol (CHAP), and Extensible Authentication Protocol (EAP) \namong others [12].\nPassword Authentication Protocol (PAP)\n• \n requires the applicant to repeatedly \nsend to the server authentication request messages, consisting of a user name and \npassword, until a response is received or the link is terminated. However, this \ninformation is sent in the clear.\nChallenge-Handshake Authentication Protocol (CHAP\n• \n) works on a “shared \nsecret” basis where the server sends to the client a challenge message and waits \nfor a response from the client. Upon receipt of the challenge, the client adds \non a secret message, hashes both, and sends the result back to the server. The \nserver also adds a secret message to the challenge, hashes with an agreed-upon \nalgorithm and then compares the results. Authentication of the client results if \nthere is a match. To harden the authentication process, the server periodically \nauthenticates the client.\nExtensible Authentication Protocol (EAP)\n• \n is open-ended, allowing users to \nselect from among a list of authentication options.\n17.5.1.2  PPP Confidentiality\nDuring the negotiations, the client and server must agree on the encryption that \nmust be used. IETF has recommended two such encryptions that include DES and \n3DES.\n17.5.2  Remote Authentication Dial-In User Service (RADIUS)\nRADIUS is a server for remote user authentication and accounting. It is one of a \nclass of Internet dial-in security protocols that include Password Authentication Pro­\ntocol (PAP) and Challenge-Handshake Authentication Protocol (CHAP). Although \nit is mainly used by Internet Service Providers (ISPs) to provide authentication and \naccounting for remote users, it can be used also in private networks to centralize \nauthentication and accounting services on the network for all dial-in connections \nfor service. A number of organizations are having their employees work off-site and \nmany of these employees may require to dial-in for services. Vendors and contrac­\ntors may need to dial-in for information and specifications. RADIUS is a good tool \nfor all these types of off-site authentication and accounting.\nLet us look at RADIUS’s main components: authentication and accounting pro­\ntocols.\n" }, { "page_number": 404, "text": "17.5  Security in the Link Layer and over LANS\b\n393\n17.5.3.1  Authentication Protocols\nUpon contact with the RADIUS server, a user is authenticated from the data sup­\nplied by the user to the server either directly by answering the terminal server’s \nlogin/password prompts, or using PAP or CHAP protocols. 
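Since CHAP has just been mentioned again, a minimal sketch of a CHAP-style challenge/response may be useful here. RFC 1994 computes the response as an MD5 hash over the message identifier, the shared secret, and the challenge; the secret and framing below are invented for illustration and this is not a PPP implementation.

import hashlib, os, hmac

shared_secret = b"correct horse battery staple"    # known to client and server

# Server side: issue a fresh random challenge with a message identifier.
challenge = os.urandom(16)
identifier = bytes([1])

# Client side: hash identifier + secret + challenge and return the digest.
response = hashlib.md5(identifier + shared_secret + challenge).digest()

# Server side: recompute the expected digest and compare in constant time.
expected = hashlib.md5(identifier + shared_secret + challenge).digest()
authenticated = hmac.compare_digest(response, expected)
print("authenticated" if authenticated else "rejected")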
The user’s personal data \ncan also be obtained from one of the following places [13]:\nSystem Database:\n• \n The user’s login ID and password are stored in the password \ndatabase on the server.\nInternal Database:\n• \n The user’s login ID and password can also be encrypted \nby either MD5 or DES hash, and then stored in the internal RADIUS \ndatabase.\nSQL authentication:\n• \n The user’s details can also be stored in an SQL database.\n17.5.3.2  Accounting Protocols\nRADIUS has three built-in accounting schemes: Unix accounting, detailed account­\ning, and SQL accounting.\n17.5.3.3  Key Features of RADIUS\nRADIUS has several features including [14]:\nClient/Server Model:\n• \n In client/server model, the client is responsible for \npassing user information to designated RADIUS servers, and then acting on the \nresponse which is returned. RADIUS servers, on their part, are responsible for \nreceiving user connection requests, authenticating the user, and then returning \nall configuration information necessary for the client to deliver service to the \nuser.\nNetwork Security:\n• \n All correspondence between the client and RADIUS server \nis authenticated through the use of a shared secret, which is never sent over the \nnetwork. User passwords are sent encrypted between the client and RADIUS \nserver.\nFlexible Authentication Mechanisms:\n• \n The RADIUS server can support a \nvariety of methods to authenticate a user. When it is provided with the user name \nand the original password given by the user, it can support PPP PAP or CHAP, \nUnix login, and other authentication mechanisms.\nExtensible Protocol:\n• \n RADIUS is a fully extensible system. It supports two \nextension languages: the built-in Rewrite language and Scheme. Based on \nRFC 2138, all transactions are comprised of variable length Attribute-Length-\nValue 3-tuples. New attribute values can be added without disturbing existing \nimplementations of the protocol.\n" }, { "page_number": 405, "text": "394\b\n17  Computer Network Security Protocols\n17.5.3  Terminal Access Controller Access Control System \n(TACACS+)\nThis protocol is commonly referred to as “tac-plus” is a commonly used method of \nauthentication protocol. Developed by Cisco Systems, TACACS+ is a strong proto­\ncol for dial-up and it offers the following [10]:\nAuthentication – arbitrary length and content authentication exchange which \n• \nallows many authentication mechanisms to be used with it.\nAuthorization\n• \nAuditing – a recording of what a user has been doing and in TACASCS+, it \n• \nserves two purposes:\nTo account for services used\n• \nTo audit for security services.\n• \nTACACS+ has a “+” sign because Cisco has extended it several times and has \nderivatives that include the following:\nTACACS – the original TACACS, which offers combined authentication and \n• \nauthorization\nXTACACS, which separated authentication, authorization, and accounting.\n• \nTACACS+, which is XTACACS plus extensions of control and accounting \n• \nattributes.\nExercises\n\t 1.\t PGP has been a very successful communication protocol. Why is this so? What \nfeatures brought it that success?\n\t 2.\t Discuss five benefits of IPSec as a security protocol.\n\t 3.\t Discuss the differences between the transport mode and the tunnel mode of \nIPsec. Is one mode better than the other? Under what conditions would you use \none over the other?\n\t 4.\t Study the IPv4 and IPv6 and discuss the differences between them? 
By the mid-\n1990s, it was thought that IPv6 was destined to be a replacement of IPv4 in less \nthan five years. What happened? Is there a future for IPv6?\n\t 5.\t Discuss the differences between RADIUS, as a remote authentication protocol, \nand Kerberos when using realms.\n\t 6.\t What are Kerberos authentication path? How do they solve the problem of \nremote authentication?\n\t 7.\t The Kerberos system has several bugs that pose potential security risks. Study \nthe Kerberos ticketing service and discuss these bugs.\n\t 8.\t The Kerberos system is built on the principle that only a limited number of \nmachines on any network can possibly be secure. Discuss the validity of this \nstatement.\n" }, { "page_number": 406, "text": "References\b\n395\n\t 9.\t Investigate how Netscape Navigator and Internet Explorer implemented SSL \ntechnology.\n10.\t Study both SSL and S-HTTP. Show that these two protocols have a lot in com­\nmon. However, the two protocols have some differences. Discuss these differ­\nences. Discuss what caused the decline in the use of S-HTTP.\nAdvanced Exercises\n1.\t X.509 is a good security protocol. Study X.509 and discuss how it differs from \nS-HTTP and IPSec.\n2.\t SSL3.0 has been transformed into TLS 1.0. Study the TLS protocol specifica­\ntions and show how all are met by SSL. There are some differences in the proto­\ncol specifications of the two. Describe these differences.\n3.\t S/MIME and PGP are sister protocols; they share a lot. Study the two protocols \nand discuss the qualities they share. Also look at the differences between then. \nWhy is PGP more successful? Or is it?\n4.\t Study the SET protocols and one other payment system protocol such as Dig­\nCash. What sets SET above the others? Is SET hacker proof? What problems does \nSET face that may prevent its becoming a standard as desired by Netscape?\n5.\t Both S-MIME and PGP are on track for standardization by the IETF. In your \njudgment, which one of the two is likely to become the standard and why?\nReferences\n\t 1.\t Stallings, W. Cryptography and Network Security: Principles and Practice. Second Edition. \nUpper Saddle River, NJ: Prentice Hall, 1999\n\t 2.\t Shastack, A. An Overview of S-HTTP. http://www.homeport.org/∼adam/shttp.html\n\t 3.\t Jasma, K. Hacker Proof: The Ultimate Guide to Network Security. Second Edition, Albany, \nNY: OnWord Press, 2002\n\t 4.\t Stein, L. Web Security: A Step-by-Step Reference Guide. Reading, MA: Addison-Wesley, \n1998\n\t 5.\t Tung, B. The Moron’s Guide to Kerberos, Version 1.2.2. http://www.isi.edu/∼brian/security/\nkerberos.html\n\t 6. \tOnyszko, T. Secure Socket Layer: Authentication, Access Control and Encryption. Win­\ndowSecurity.com http://www.windowsecurity.com/articles/Secure_Socket_Layer.html\n\t 7.\t SSL Demystified: The SSL encryption method in IIS. http://www.windowswebsolutions.com/\nArticles/Index.cfm?ArticleID = 16047\n\t 8.\t Kurose, J. and Keith, W. R. Computer Networking: A Top-Down Approach Featuring the \nInternet. Reading, MA: Addison-Wesley. 2003\n\t 9.\t Black, U. Internet Security Protocols: Protecting IP Traffic. Upper Saddle River, NJ: Prentice \nHall, 2000\n\t10.\t Hoden, G. Guide to Firewalls and Network Security: Intrision Detection and VPNs. Clifton \nPark, NY: Thomson Delmar Learning, 2004\n\t11.\t VPN Consortium. “VPN Technologies: Definitions are Requirements,” January 2003. http://\nvpnc.org\n" }, { "page_number": 407, "text": "396\b\n17  Computer Network Security Protocols\n\t12.\t Panko, R. 
Corporate Computer and Network Security. Upper Saddle River, NJ: Prentice Hall, \n2004\n\t13.\t Radius. http://www.gnu.org/software/radius/radius.html#TOCintroduction\n\t14.\t RFC 2138. http://www.faqs.org/rfcs/rfc2138.html\n" }, { "page_number": 408, "text": "J.M. Kizza, A Guide to Computer Network Security, Computer Communications and \n­Networks, DOI 10.1007/978-1-84800-917-2_16, © Springer-Verlag London Limited 2009\n\b\n397\nChapter 18\nSecurity in Wireless Networks and Devices\n18.1  Introduction\nIt is not feasible to discuss security in wireless networks without a thorough under­\nstanding of the working of wireless networks. In fact, as we first set out to teach the \ncomputer network infrastructure in Chapter 1 in order to teach network security, we \nare going, in the first parts of this chapter, to discuss the wireless network infrastruc­\nture. As was the case in Chapter 1, it is not easy to discuss a network infrastructure \nin a few paragraphs and expect a reader to feel comfortable enough to deal with the \nsecurity issues based on the infrastructure. So, although we are promising the reader \nto be brief, our discussion of the wireless infrastructure may seem long to some read­\ners and sometimes confusing to others. Bear with us as we dispose of the necessary \ntheory for a good understanding of wireless security. A reader with a firm under­\nstanding of wireless infrastructure can skip Sections 18.1, 18.2, 18.3, and 18.4.\nWireless technology is a new technology that started in the early 1970s. The \nrapid technological developments of the last twenty years have seen wireless tech­\nnology as one of the fastest developing technologies of the communication industry. \nBecause of its ability and potential to make us perform tasks while on the go and \nbring communication in areas where it would be impossible with the traditional \nwired communication, wireless technology has been embraced by millions. There \nare varying predictions all pointing to a phenomenal growth of the wireless technol­\nogy and industry.\nTo meet these demands and expectations, comprehensive communication infra­\nstructure based on wireless networking, technology based on wireless LAN, WAN, \nand Web; and industry for the wireless communication devices have been devel­\noped. We will now focus on these.\n18.2  Cellular Wireless Communication Network Infrastructure\nThe wireless communication infrastructure is not divorced from its cousin, the wired \ncommunication infrastructure. In fact, while the wired communication infrastruc­\nture can work and support itself independently, the wireless infrastructure, because \n" }, { "page_number": 409, "text": "398\b\n18  Security in Wireless Networks and Devices\nof distance problems, is in most parts supported and complemented by other wired \nand other communication technologies such as satellite, infrared, microwave, and \nradio.\nIn its simplest form, wireless technology is based on a concept of a cell. That is \nwhy wireless communication is sometimes referred to as cellular communication. \nCellular technology is the underlying technology for mobile telephones, personal \ncommunication systems, wireless Internet, and wireless Web applications. In fact, \nwireless services telecommunications is one of the fastest growing areas in tele­\ncommunication today. 
Personal communications services (PCS) are increasing in \npopularity as a mass market phone service, and wireless data services are appearing \nin the form of cellular digital packet data (CDPD), wireless local area networks \n(LANs), and wireless modems.\nThe cell concept is based on the current cellular technology that transmits analog \nvoice on dedicated bandwidth. This bandwidth is split into several segments perma­\nnently assigned to small geographical regions called cells. This has led to the tiling \nof the whole communication landscape with small cells of roughly ten square miles \nor less depending on the density of cellular phones in the geographical cell. See \nFig. 18.1. Each cell has, at its center, a communication tower called the base station \n(BS) which the communication devices use to send and receive data. See also Fig. \n18.3. The BS receives and sends data usually via a satellite. Each BS operates two \ntypes of channels:\nThe control channel which is used in the exchange when setting up and \n• \nmaintaining calls\nThe traffic channel to carry voice/data.\n• \nThe satellite routes the data signal to a second communication unit, the Mobile Tele­\nphone Switching Office (MTSO). The MTSO, usually some distance off the origination \ncell, may connect to a land-based wired communication infrastructure for the wired \nreceiver or to another MTSO or to a nearest BS for the wireless device receiver.\nFig. 18.1  Tessellation of the cellular landscape with hexagon cell units.\nF\nB\nD\nE\nC\nA\n" }, { "page_number": 410, "text": "18.2  Cellular Wireless Communication Network Infrastructure\b\n399\nAn enabled wireless device such as a cellular phone must be constantly in con­\ntact with the provider. This continuous contact with the provider is done through the \ncell device constantly listening to its provider’s unique System Identification Code \n(SID) via the cell base stations. If the device moves from one cell to another, the \ncurrent tower must hand over the device to the next tower and so on; so the con­\ntinuous listening continues unabated. As long as the moving device is able to listen \nto the SID, it is in the provider’s service area and it can, therefore, originate and \ntransmit calls. In order to do this, however, the moving device must identify itself \nto the provider. This is done through its own unique SID assigned to the device by \nthe provider. Every call originating from the mobile device must be checked against \na database of valid device SIDs to make sure that the transmitting device is a legiti­\nmate device for the provider.\nThe mobile unit, usually a cellphone, may originate a call by selecting the strongest \nsetup idle frequency channel from among its surrounding cells by examining infor­\nmation in the channel from the selected BS. Then using the reverse of this frequency \nchannel, it sends the called number to the BS. The BS then sends the signal to the \nMTSO. As we saw earlier, the MTSO attempts to complete the connection by sending \nthe signal, called a page call, to a select number of BSs via a land-based wired.\nMTSO or another wireless MTSO, depending on the called number. The receiving \nBS broadcasts the page call on all its assigned channels. The receiving unit, if active, \nrecognizes its number on the setup channel being monitored and responds to the \nnearest BS which sends the signal to its MTSO. 
The MTSO may backtrack the routes \nor select new ones to the call initiating MTSO which selects a channel and notifies \nthe BS which notifies its calling unit. See Fig. 18.2 for details of this exchange.\nDuring the call period, several things may happen including the following:\nCall block which happens when channel capacity is low due to high unit density \n• \nin the cell. This means that at this moment all traffic channels are being used\nCall termination when one of two users hangs up\n• \nCall drop which happens when there is high interference in the communication \n• \nchannel or weak signals in the area of the mobile unit. Signal strength in an area must \nbe regulated by the provider to make sure that the signal is not too weak for calls \nto be dropped or too strong to cause interference from signals from neighboring \nFig. 18.2  Initiating and receiving wireless calls.\nCommunication Tower\nCommunication Tower\nMTSO\nWireless Laptop\nWireless Laptop\n" }, { "page_number": 411, "text": "400\b\n18  Security in Wireless Networks and Devices\ncells. Signal strength depends on a number of factors, including human-generated \nnoise, nature, distance, and other signal propagation effects.\nHandoff when a BS changes assignment of a unit to another BS. This happens \n• \nwhen the mobile unit is in motion such as in a moving car and the car moves \nfrom one cell unit to another adjacent cell unit.\nAs customers use the allocated channels in a cell, traffic may build up, leading to \n• \nsometimes serious frequency channel shortages in the cell to handle all the calls \neither originating or coming into the cell.\nThe capacity of the communication channels within the cell is controlled by \n• \nchannel allocation rules. This capacity can be increased per cell by adding \ncomplexity and relaxing these channel allocation rules. Cell channel capacity \ncan be expanded through [1]\nCell splitting: By creating smaller geographical cells, better use can be made \n• \nof the existing channels allocation. Usually cells have between 5 and 10 miles \nradius. Smaller cells may have about 2 miles radius. Attention must be paid for a \nminimum cell radius. It is possible to reach this minimum because as cells become \nsmaller, power levels must be reduced to keep the signals within the cells, and \nalso, there is an increase in the complexity resulting from more frequent handoffs \nas calls enter and leave the cells in higher numbers and high interferences due \nto smaller cells. Cell splitting can lead to microcelling, which is a concept of \ncreating very small cells within the minimum limits called microcells which \nare small tessellations of a bigger cell and making BS antennas smaller and \nputting them on top of buildings and lamp posts. This happens often in large \ncities. Another version of microcelling is cell sectoring which also subdivides \nthe original cell into small sectors and allocates a fixed number of frequency \nchannels to each sector. The new sectors still share the BS but direction antennas \nface the BS to direct calls to and from the sector.\nAllocation of new channels: Once all free channels are used up, additional \n• \nfrequency channels may be added to meet the demand.\nFrequency borrowing: This calls for literally taking frequency channels from \n• \nadjacent cells with redundant unused channels. 
Sometimes frequency channels from \nless congested adjacent cells can be dynamically allocated to congested cells.\nAlternative multiple access architectures: The transition from analog to digital \ntransmission is also significantly increasing capacity.\n18.2.1  Development of Cellular Technology\nIn the United States, the development of wireless cellular communication began in \n1946 when AT&T introduced the mobile telephone service (MTS).\nHowever, this was preceded by the pre-cellular wireless communication that \nbegan in the early 1920s with the development of mobile radio systems using ampli­\ntude modulation (AM). Through the 1940s, their use became very popular, espe­\ncially in police and other security and emergency communications. The channel \ncapacity quickly became saturated as the number of users grew.\n" }, { "page_number": 412, "text": "18.2  Cellular Wireless Communication Network Infrastructure\b\n401\nThe actual cellular wireless communication started in 1946 when AT&T developed \nits first mobile communications service in St. Louis using a single powerful transmit­\nter to radiate a frequency modulated (FM) wave in an area with a radius of 50 miles. \nThe popularity of this mode of communication forced the Federal Communications \nCommission (FCC) in 1950 to split the one 120-kHz channel that had been used into \ntwo equal 60-kHz channels to double capacity. This led to an unprecedented increase \nin the number of users so that by 1962 the technology had attracted up to 1.4 million \nusers. Since then, the technology has gone through three generations.\n18.2.1.1  First Generation\nIn the first generation, phones were very large and were based on analog technol­\nogy. The most known technology of this era was based on a 1971 Bell Labs pro­\nposal to the FCC for a new analog cellular FM telecommunications system. The \nproposal gave rise to Advanced Mobile Phone Service (AMPS) cellular standard. \nAlong with other cellular radio systems developed in other countries, all using FM \nfor speech and frequency division multiplexing (FDMA) as the access technique, \nthey formed the first generation of cellular technology. These systems became very \npopular, resulting in high levels of use. Although they are still in use today, their \nlimited, uncoordinated, and independently chosen frequency band selections led to \nthe development of the second generation digital cellular technology.\n18.2.1.2  Second Generation\nSecond generation systems overcame most of the limitations of the first generation \nand improved on others. They offered higher quality signals, greater capacity, better \nvoice quality, and more efficient spectrum utilization through provision of several \nchannels per cell and allowing dynamic sharing of these channels between users via \nmultiplexing both TDMA and CDMA (seen in Chapter 1) and digital modulation \ntechniques. Since they were backward compatible, they could use the same fre­\nquency range and signaling as their analog predecessors and therefore could receive \nand place calls on the first generation analog network. In fact, ten percent of the sec­\nond generation digital network is allocated to analog traffic. Also second generation \nsystems introduced encryption techniques, using digitized control and user traffic, \nto prevent unauthorized intrusions such as eavesdropping into the communication \nchannels. 
The digitalization of traffic streams also led to the development and inclu­\nsion into the systems of error detection and correction mechanisms. These second-\ngeneration systems included the following [2]:\nIS-54 (USA):\n• \n Interim Standard-54 (1991) was designed to support large cities \nthat had reached saturation within the analog system. It uses TDMA to increase \ncapacity in the AMPS spectrum allocation.\nIS-95\n• \n (USA): Interim Standard-95 (1994) operated in the dual mode (CDMA/\nAMPS) in the same spectrum allocation as AMPS. It is the most widely used \nsecond generation CDMA.\n" }, { "page_number": 413, "text": "402\b\n18  Security in Wireless Networks and Devices\nGSM\n• \n (Europe): The Global System for Mobile Communications (GSM) \n(1990) is a pan-European, open digital standard accepted by the European \nTelecommunications Standards Institute (ETSI), and now very popular the world \nover. As an open standard, it allowed interoperability of mobile phones in all \nEuropean countries. It has a greater capacity/voice quality than the previous \nanalog standard.\nPDC\n• \n (Japan): The Personal Digital Communications (PDC) (1991) is similar to \nIS-54 and also uses TDMA technology.\nPHS\n• \n (Japan): The Personal Handyphone System (PHS) (1995) uses smaller \ncells, therefore leading to a dense network of antennas each with a range of \n100–200 m, which allows lower power and less expensive handsets to be used. It \nalso allows 32-kbit/s digital data transmission. Because of all these advantages \nand its compaction, it has been very successful in Japan.\nWith digital cellular systems, usage of phones increased and as people became \nmore mobile and new possibilities emerged for using the phones for data trans­\nfer such as uploading and downloading information from the Internet, and send­\ning video and audio data streams, a stage was set for a new generation that would \nrequire high-speed data transfer capabilities. But owing to unexpected problems, \nincluding higher development costs and a downturn in the global economy, the roll-\nout of the third generation (3G) cellular systems has proven to be slow.\n18.2.1.3  Third Generation\nIn general, the third generation cellular technology, known as (3G), was aiming \nto offer high-speed wireless communications to support multimedia, data, video, \nand voice using a single, unified standard incorporating the second-generation digi­\ntal wireless architectures. The 3G standard was set to allow existing wireless infra­\nstructure to continue to be employed after carrier transition as they offer increased \ncapacities of at least 144 Kbps for full mobility, 384 Kbps for limited mobility in \nmicro- and macro-cellular environments, and 2 Mbps for low mobility application \n[3]. These goals could be achieved following these specific objectives: universality, \nbandwidth, flexibility, quality of service, and service richness [1, 2].\n18.2.1.4  Universality\nUniversality is one of the driving forces behind modern communication involv­\ning universal personal communications services (PCSs), personal communication \nnetworks (PSNs), and universal communications access. 
It requires achieving the \nfollowing:\nA high degree of commonality of design worldwide\n• \nGlobal compatibility of services within 3G wireless and fixed networks\n• \nService availability from multiple providers in any single coverage area\n• \n" }, { "page_number": 414, "text": "18.2  Cellular Wireless Communication Network Infrastructure\b\n403\nService reception on any terminal in any network based on a unique personal \n• \nnumber\nAbility to economically provide service over a wide range of user densities and \n• \ncoverage areas\n18.2.1.5  Bandwidth\nBandwidth is used to limit channel usage to 5 MHz to improve on the receiver’s \nability to resolve multipath problems; 5 MHZ is adequate to support a mobile \ndata rate of 144 Kbps, a portable data rate of 384 Kbps, and fixed data rate of \n2 Mbps.\n18.2.1.6  Flexibility\nFlexibility requires:\nA framework for the continuing expansion of mobile network services and access \n• \nto fixed network facilities\nA modular structure that will allow the system to start from a simple configuration \n• \nand grow as needed in size and complexity\nOptimal spectrum usage for services, despite their differing demands for data \n• \nrates, symmetry, channel quality, and delay\nTerminals that can adapt to varying demands for delay and transmission quality\n• \nNew charging mechanisms that allow tradeoffs of data vs. time\n• \nAccommodation of a variety of terminals including the pocket-sized terminal\n• \n18.2.1.7  Quality of Service\nThis requires quality comparable to that of a fixed network.\n18.2.1.8  Service Richness\nIt involves:\nIntegration of cellular, cordless, satellite, and paging systems\n• \nSupport for both packet and circuit switched services (e.g., IP traffic and video \n• \nconference)\nSupport for multiple, simultaneous connections (e.g., Web browsing and voice)\n• \nAvailability of a range of voice and non-voice services\n• \nEfficient use of the radio spectrum at an acceptable cost\n• \nAn \n• \nopen architecture to facilitate technological upgrades of different applications­\n" }, { "page_number": 415, "text": "404\b\n18  Security in Wireless Networks and Devices\nWhen the International Telecommunication Union Radio-communication Stan­\ndardization Sector (ITU-R), a body responsible for radio technology standardization, \ncalled for 3G proposals in 1998, responses to develop the standards came from the \nfollowing national Standards Development Organizations (SDO): ETSI (Europe), \nARIB (Japan), TIA and TIPI (USA), TTA (Korea), and one from China [3].\n18.2.2  Limited and Fixed Wireless Communication Networks\nBefore we settle down to discuss standardizations, protocols, and security of cel­\nlular wireless technology, let us digress a bit to talk about limited area wireless, \nknown mainly as cordless wireless, that is commonly found in homes and offices. \nWe will also talk about wireless local loop, or what is commonly known as fixed \nwireless.\nCordless telephones were developed for the purpose of providing users with \nmobility. With the development of digital technology, cordless wireless communi­\ncation also took off. Cordless has been popular in homes with a single base station \nthat provides voice and data support to enable in-house and a small perimeter around \nthe house communication. It is also used in offices where a single BS can support a \nnumber of telephone handsets and several data ports. 
This can be extended, if there \nis a need, especially in a big busy office, to multiple BSs connected to a single pub­\nlic branch exchange (PBX) of a local land telephone provider. Finally, cordless can \nalso be used in public places like airports as telepoints.\nCordless wireless is limited in several areas including the following:\nThe range of the handset is limited to an average radius of around 200 m from \n• \nthe BS\nFrequency flexibility is limited since one or a few users own the BS and handset \n• \nand therefore do not need a range of choices they are not likely to use.\nWith the development of wireless communications and the plummeting prices of \nwireless communication devices, the traditional subscriber loop commonly based \nFig. 18.3  A limited wireless \nunit with its base station\nWireless Laptop\nCommunication Tower\n" }, { "page_number": 416, "text": "18.2  Cellular Wireless Communication Network Infrastructure\b\n405\non fixed twisted pair, coaxial cable, and optical fiber (seen in Chapter 1) is slowly \nbeing replaced by wireless technology, referred to as wireless loop (WLL) or fixed \nwireless access.\nA wireless loop provides services using one or a few cells, where each cell has \na BS antenna mounted on something like a tall building or a tall mast. Then each \nsubscriber reaches the BS via a fixed antenna mounted on one’s building with an \nunobstructed line of sight to the BS. The last link between the BS and the provider \nswitching center can be of wireless or fixed technology. WLL offers several advan­\ntages including the following[1]:\nIt is less expensive after the start up costs.\n• \nIt is easy to install after obtaining a usable frequency band.\n• \nThe FCC has allocated several frequency bands for fixed wireless communica­\ntion because it is becoming very popular. Because the technology is becoming very \npopular, new services such as the local multipoint distribution service (LMDS) and \nthe multi-channel multipoint distribution service (MMDS) have sprung up. LMDS \nis an WLL service that delivers TV signals and two-way broadband communica­\ntions with relatively high data rates and provides video, telephone, and data for \nlow cost. MMDS are also WLL services that compete with cable TV services and \nprovide services to rural areas not reached by TV broadcast or cable.\nDue to the growing interest in WLL services such as LMDS and MMDS, the \nwireless communication industry has developed the IEEE 802.16.X as the standard \nfor wireless technology. The IEEE 802.16.X is used to standardize air interface, \ncoexistence of broadband wireless access, and air interface for licensed frequencies. \nSee Table 18.1 for the IEEE 802.16 Protocol Architecture. We will talk more about \nwireless communication standards in the next section.\nBoth the Convergence and MAC layers of the IEEE 802.16 make up the data link \nlayer of the OSI model. 
Similarly, the transmission and physical layers of the IEEE \n802.16 make up the physical layer of the OSI model.\nThe convergence layer of the IEEE 802.16 supports many protocols including \nthe following [1]:\nDigital audio and video multicast\n• \nDigital telephony\n• \nATM\n• \nIP\n• \nBridged LAN\n• \nTable 18.1  IEEE 802.16 protocol architecture\nApplication\nNetwork\nTransport\nConvergence\nMedium Access Control (MAC)\nTransmission\nPhysical\n" }, { "page_number": 417, "text": "406\b\n18  Security in Wireless Networks and Devices\nVirtual PPP\n• \nFrame relay\n• \n18.3  Wireless LAN (WLAN) or Wireless Fidelity (Wi-Fi)\nIn the last few years, wireless local area networking has gone from an activity only \nresearchers and hobbyists play with to a craze of organizations and industry. It is \nalso becoming a popular pastime for home computer enthusiasts. Many organiza­\ntions and businesses including individuals are finding that a wireless LAN (WLAN) \nor just Wi-Fi, as it is commonly known in industry, is becoming something industry \nand individuals cannot do without when placed in cooperation with the traditional \nwired LAN. A wireless LAN offers many advantages to a business to supplement \nthe traditional LAN. It is cheap to install it is fast and it is flexible to cover tradition­\nally unreachable areas, and because most new machines such as laptops are now all \noutfitted with wireless ports, configuring a network either in the home or office does \nnot need a network guru. It can be done out of the box in a few minutes saving the \nindividual or the company substantial amounts of money. But the most important \nadvantage of wireless technology is mobility.\nA wireless LAN is a LAN that uses wireless transmission medium. Current wire­\nless LANS have applications in four areas: LAN extension, cross-building intercon­\nnection, nomadic access, and ad hoc networks as discussed below:\nLAN extensions are wireless LANs (WLANs) linked to wired backbone networks \n• \nas extensions to them. The existing LAN may be an Ethernet LAN, for example. The \nWLAN is interfaced to a wired LAN using a control module that includes either a \nbridge or a router. Cross-building interconnection WLANs are connected to nearby \nor adjacent backbone fixed LANs in the building by either bridges or routers.\nNomadic access is a wireless link that connects a fixed LAN to a mobile IP \n• \ndevice such as a laptop. We will talk more about Mobile IP in the next section \nand also in the security section because most of the wireless communication \nsecurity problems are found in this configuration.\nAd Hoc Networking involves a peer-to-peer network temporarily and quickly set \n• \nup to meet an urgent need. Figure 18.4 illustrates an ad hoc network.\n18.3.1  WLAN (Wi-Fi) Technology\nWLAN technology falls in three types based on the type of transmission used by the \nLAN. The three most used are infrared, spectrum spread, and narrowband micro­\nwave as discussed below [1]:\nInfrared (IR) LANs are LANs in which cells are formed by areas, without \n• \nobstructing objects between network elements that the network is in. This is \nnecessitated by the fact that infrared light does not go through objects.\n" }, { "page_number": 418, "text": "18.3  Wireless LAN (WLAN) or Wireless Fidelity (Wi-Fi)\b\n407\nSpread spectrum LANs use spread spectrum transmission technology. If the \n• \ntransmission band is kept within a certain frequency range, then no FCC licensing \nis required. 
This means they can be used in a relatively larger area than a single \nroom.\nNarrowband microwave LANs operate at microwave frequencies, which means \n• \nthat they operate in large areas and therefore require FCC licensing.\n18.3.2  Mobile IP and Wireless Application Protocol (WAP)\nWLANs have become extremely popular as companies, organizations, and indi­\nviduals are responding to the cheap and less tenuous tasks of installing WLANs \nas compared to fixed wire traditional LANs. The growth in popularity of WLANs \nhas also been fueled by the growing number of portable communication devices \nwhose prices are plummeting. Now hotels and airports and similar establishments \nare receiving growing demands from mobile business customers who want fast con­\nnectivity for data. In response, new technologies such as Mobile IP and WAP, and \nstandards such as the IEEE 803.11 (as we will shortly see) have been developed.\n18.3.2.1  Mobile IP\nMobile IP wireless technology was developed in response to the high and increas­\ning popularity of mobile communication devices such as laptops, palms, and cell­\nphones and the demand for these devices to maintain Internet connectivity for busy \ntravelers. Recall from Chapter 1 that network datagrams are moved from clients to \nservers and from server to server using the source and destination addresses (the IP \nFig. 18.4  Ad hoc network\nLaptop\nLaptop\nLaptop\nLaptop\nLaptop\n" }, { "page_number": 419, "text": "408\b\n18  Security in Wireless Networks and Devices\naddresses) in the datagram header. While this is not a problem in fixed networks, in \nwireless networks with a moving transmitting and receiving element, keeping con­\nnectivity in a dynamically changing IP addressing situation is a challenge. Let us \nsee how Mobile IP technology deals with this problem.\nA mobile node is assigned a particular network, its network. Its IP address on \nthis network is its home IP address and it is considered static. For the mobile unit \nto move from this home base and still communicate with it while in motion, the \nfollowing protocol handshake must be done. Once the mobile unit moves, it seeks \na new attachment to a new network; this new network is called a foreign network. \nBecause it is in a foreign network, the mobile unit must make its presence known \nto the new network by registering with a new network node on the foreign network, \nusually a router, known as a foreign agent [1]. The mobile unit must then choose \nanother node from the home network, the home agent, and give that node a care-of \naddress. This address is its current location in the foreign network. With this in \nplace, communication between the mobile unit and the home network can begin.\nStallings outlines the following operations for the mobile unit to correspond with \nthe home network [1]:\nA datagram with a mobile unit’s IP address as its destination address is forwarded \n• \nto the unit’s home network.\nThe incoming datagram is intercepted by the designated home agent who \n• \nencapsulates the datagram into a new datagram with the mobile unit’s care-\nof address as the destination address in its IP header. This process is called \ntunneling.\nUpon receipt of the new tunneled datagram, the foreign agent opens the datagram \n• \nto reveal the inside old datagram with the mobile unit’s original IP address. 
It \nthen delivers the datagram to the mobile unit.\nThe process is reversed for the return trip.\n• \n18.3.2.2  Wireless Application Protocol (WAP)\nJust as the Mobile IP wireless technology was dictated by the mobility of customers, \nWAP technology was also dictated by the mobility of users and their need to have \naccess to information services, including the Internet and the Web. WAP works with \nall wireless technologies such as GSM, CDMA, and TDMA and is based on Internet \ntechnologies such as XML, HTML, IP, and HTTP. Although the technology is fac­\ning limitations dictated by size of the devices, bandwidth, and speed, the technology \nhas received wide acceptance in the mobile world. WAP technology includes the \nflowing facilities [1]:\nProgramming facilities based on WWW programming model\n• \nWireless Markup Language (WML) similar to XML\n• \nA wireless browser\n• \nA wireless communications protocol stack – see Fig. 18.5\n• \nA wireless telephony applications (WTA) framework\n• \nA number of other protocols and modules.\n• \n" }, { "page_number": 420, "text": "18.3  Wireless LAN (WLAN) or Wireless Fidelity (Wi-Fi)\b\n409\nTo understand the working of WAP, one has to understand the WAP program­\nming model which is based on three elements: the client, the gateway, and the origi­\nnal server as depicted in Fig. 18.6.\nIn the WAP model, HTTP is placed and is used between the gateway and the \noriginal server to transfer content. The gateway component is actually a proxy \nserver for the wireless domain.\nIt provides services that process, convert, and encode content from the Internet \nto a more compact format to fit on wireless devices. On a reverse service, it decodes \nFig. 18.5  The WAP protocol stack\nWireless Application Environment (WAE)\nWireless Session Protocol (WSP)\nWireless Transaction Protocol (WTP)\nWireless Transport Layer Security (WTLS)\nWireless Datagram Protocol (WDP)\nUDP/IP\nGSM\nIS-95\n3G\nBluetooth\nCDMA\nIS-136\nBearers\nFig. 18.6  The WAP programming model\nWireless Client\nWAE User\nComm. Tower\nGateway\nEncodes and Decodes\nOriginal Server\nContains: CGI SCripts\nEncoded response\nResponse\n1. Encoded requests\nRequests\nContent\n" }, { "page_number": 421, "text": "410\b\n18  Security in Wireless Networks and Devices\nand converts information from wireless devices that are compact to fit the Internet \narchitecture infrastructure. See Fig. 18.7.\n18.4  Standards for Wireless Networks\nEver since the time communication and communication devices and technologies started \ngoing beyond personal use, there was a need for a formal set of guidelines (protocols) \nand specifications (standards) that must be followed for meaningful communications. \nWhile protocols spell out the “how to” framework for the two or more communicating \ndevices, standards govern the physical, electrical, and procedural characteristics of the \ncommunicating entities. To discuss security of wireless technology, we need to under­\nstand both the wireless protocols and wireless standards. Let us start with the standards. \nThere are two widely used wireless standards: IEEE 802.11 and Bluetooth.\n18.4.1  The IEEE 802.11\nDeveloped by the IEEE 802.11 working group, IEEE 802.11 or more commonly \n802.11, as a whole, is the most well known and most widely used and most prominent \nwireless LAN specification standard. It is a shared, wireless local area network (LAN) \nstandard. 
Like its cousin, the fixed LAN architecture, the IEEE 802.11 architecture \nis based on layering of protocols on which basic LAN functions are based. The IEEE \n802.11 layering is based on the OSI layering model of the fixed LAN including a \nsimilar physical layer. The upper layers, concerned with providing services to LAN \nWireless Tower\nWML Web Server\nInternet\nHTML Filter\nBinary WML\nover WAP\nHTML over\nHTTP/TCP/IP\nWML over\nHTTP/TCP/IP\nWML over\nHTTP/TCP/IP\nWireless Laptop\nWeb Server\nWAP Proxy\nFig. 18.7  The WAP architecture infrastructure\n" }, { "page_number": 422, "text": "18.4  Standards for Wireless Networks\b\n411\nusers, are different from those of OSI by including two additional layers, the logical \nlink control (LLC) and the medium access control (MAC) layers. See Fig. 18.8\nIn additional to similar layering like OSI models, the IEEE 802.11 model also \nuses the carrier sense multiple access (CSMA), and medium access control (MAC) \nprotocol with collision avoidance (CA) (see Chapter 1). The standard allows also \nfor both direct sequence (DS) and frequency-hopping (FH) spread spectrum trans­\nmissions at the physical layer. The maximum data rate initially offered by this stan­\ndard was 2 megabits per second. This rate has been increasing in newer and more \nrobust versions of the 802.11.\nIn fact the IEEE 802.11 is an umbrella standard of many different standards vary­\ning in speed, range, security, and management capabilities as shown in Table 18.2.\n18.4.2  Bluetooth\nBluetooth was developed in 1994 by Ericsson, a Swedish mobile-phone company, to \nlet small mobile devices such as a laptop make calls over a mobile phone. It is a short-\nrange always-on radio hookup embedded on a microchip. It uses a low-power 2.4 GHz \nband, which is available globally without a license, to enable two Bluetooth devices \nwithin a small limited area of about 5 m radius to share up to 720 kbps of data.\nBluetooth Radio\nBaseband\nLink Manager Protocol (LMP)\nLogical Link Control and Adaption\nProtocol (L2CAP)\nSDP\nAudio\nControl\nWAE\nPPP\nUDP/\nTCP\nIP\nWAP\nAdopted\nprotocols\nFig. 18.8  OSI and IEEE 802.11 protocol layer model\n" }, { "page_number": 423, "text": "412\b\n18  Security in Wireless Networks and Devices\nBluetooth has a wide range of potential applications and gives users a low-power, \ncheap, untethered, and confined ability to [1]\nCreate wireless connections among computers, printers, keyboards, and the mouse\n• \nWirelessly use MP3 players with computers to download and play music\n• \nRemotely and wirelessly monitor devices in a home, including remotely turning \n• \non home devices from a remote location outside the home.\nBluetooth can allow users to wirelessly hookup up to 8 devices, creating a small \nnetwork called a piconet. Piconet can be a good tool in ad hoc networking.\nLike its counterpart the IEEE 802.11, Bluetooth is a layered protocol architecture \nsimilar to that of OSI. In fact its core protocols are grouped in five-layered stacks. \nSee Fig. 
18.9.
Table 18.2  The 802.11 standards
802.11 Standard    Characteristics
Original 802.11
802.11b    Data rate of up to 11 megabits per second using DS spread spectrum transmission; operates in the 2.4 GHz band; most widely adopted
802.11a    Orthogonal frequency-division multiplexing (OFDM) that permits data transfer rates of up to 54 megabits per second; operates in the less crowded 5 GHz band
802.11g    Backward compatibility with 802.11b
802.11e    Handles voice and multimedia
802.11i    Newest and more robust
802.11X    Subset of the 802.11i security standard
Fig. 18.9  Bluetooth protocol stack
18.5  Security in Wireless Networks
In Chapter 19, we will discuss security in wireless sensor networks, a more specific type of wireless network. In this chapter, we are discussing security in a general wireless network. As the wireless revolution rages on and more and more organizations and individuals adopt wireless technology for their communication needs, the technology itself and its speed are barely keeping pace with the demand. The Wi-Fi craze has been driven by the many apparent Wi-Fi advantages over its cousin, the fixed LAN. Such advantages include the following [4]:
• Affordability: Although Wi-Fi networks are still more expensive than their cousins, the fixed LANs, given the advantage of mobility they offer and the plummeting prices of devices using wireless technology, such as laptops and personal digital assistants, the technology has become more affordable, especially to large companies that need it most.
• The ease of network connections, without having to “plug in” and without using expensive network gurus for network setups.
• Increased employee productivity for organizations and businesses, since employees remain productive even when on the road. End users, whether inside the office facility or outside, are never very far from an untethered computing device.
• WLAN technology does not require licenses to use.
As newer standards such as the IEEE 802.11g are poised to deliver five times the speed of the current IEEE 802.11b, which delivers about 11 Mbps of bandwidth, there is increasing concern and closer scrutiny regarding the security of wireless technology in general and WLANs in particular. WLANs need to provide users not only with the freedom and mobility that are so crucial to their popularity but also with privacy and security for all users and the information on these networks.
18.5.1  WLANs Security Concerns
As Russ Housely and William Arbaugh state in their paper “Security Problems in 802.11-Based Networks” [5], one of the goals of the WLAN standard was to provide security and privacy that was “wired equivalent,” and to meet these goals, the designers of the standard implemented several security mechanisms to provide confidentiality, authentication, and access control.
The “wired equivalent” concept for the \nIEEE 802.11 WLAN standard was to define authentication and encryption based on \nthe Wired Equivalent Privacy (WEP) algorithm. This WEP algorithm defines the \nuse of a 40-bit secret key for authentication and encryption. But all these mecha­\nnisms failed to work fully as intended.\nAttacks by hackers and others on the WLAN have been documented. Although \nsometimes exaggerated, there is a genuine security concern. Hackers armed with \n" }, { "page_number": 425, "text": "414\b\n18  Security in Wireless Networks and Devices\nlaptops, WLAN cards, and beam antennas are cruising the highways and city streets, \nindustrial boulevards, and residential streets, sometimes called “war drives,” access­\ning both public and private WLANs with impunity [6].\nWireless networks are inherently insecure. This problem is compounded by the \nuntraceable hackers who use invisible links to victimize WLANs and the increasing \nnumber of fusions between LANs and WLANs, thus adding more access points (the \nweak points) to the perimeters of secure networks.\nAs a result, the WLAN found itself facing severe privacy and security problems \nincluding the following [5, 7].\n18.5.1.1  Identity in WLANs\nIdentity is a very important component of security mechanism. The WALN protocol \ncontains a media access control (MAC) protocol layer in its protocol stack. The \nWLAN standard uses the MAC address of the WLAN card as its form of identity \nfor both devices and users. Although in the early versions of the WLAN device \ndrivers this MAC address was not changeable, in the newer open source device \ndrivers, this is changeable, creating a situation for malicious intruders to masquer­\nade as valid users. In addition, WLAN uses a Service Set Identifier (SSID) as a \ndevice identifier (name) in a network. As a configurable identification for a network \ndevice, it allows clients to communicate with the appropriate BS. With proper con­\nfiguration, only clients configured with the same SSID as the BS can communicate \nwith the BS. So SSIDs are shared passwords between BSs and clients. Each BS \ncomes with a default SSID, but attackers can use these SSIDs to penetrate a BS. As \nwe will see later, turning off SSID broadcasts cannot stop hackers from getting to \nthese SSIDs.\n18.5.1.2  Lack of Access Control Mechanism\nThe WLAN standard does not include any access control mechanism. To deal with \nthis seemingly overlooked security loophole in the standard, many users and ven­\ndors have used a MAC-address-based access control list (ACL), already discussed \nin Chapter 8. When ACL is used on MAC addresses, the lists consist of MAC \naddresses indicating what resources each specific MAC address is permitted to use. \nAs we have indicated earlier, the MAC address can be changed by an intruder. So on \ninterception of a valid MAC address by an intruder and subsequently changing and \nrenaming his or her WLAN cards, he or she is now a legitimate client of the WLAN \nand his or her MAC address now appears in the ACL. Another form of widely used \naccess control is the “closed network” approach in which the client presents to the \naccess point (AP), also common known as the base station, a secret known only to \nthe AP and the client. For WLAN users, the secret is always the network name. 
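How little protection a MAC-address ACL or a "secret" network name actually provides can be seen in a small simulation. The AccessPoint class, addresses, and SSID below are invented for illustration; the point is only that both the MAC address and the SSID travel in the clear, so a passive listener can capture them and then present them as its own.

```python
# Illustrative only: why a MAC-address ACL is weak access control.
# An 802.11 frame always carries the sender's MAC address in the clear,
# so a passive listener can capture it and then transmit with it.

class AccessPoint:
    def __init__(self, ssid, allowed_macs):
        self.ssid = ssid
        self.allowed_macs = set(allowed_macs)   # the MAC-based ACL

    def associate(self, client_mac, ssid):
        # Access control here is only "right SSID + MAC on the list".
        return ssid == self.ssid and client_mac in self.allowed_macs

ap = AccessPoint(ssid="corp-net", allowed_macs={"00:1A:2B:3C:4D:5E"})

# 1. A legitimate client associates; its MAC and the SSID travel unencrypted.
legit_mac = "00:1A:2B:3C:4D:5E"
print(ap.associate(legit_mac, "corp-net"))          # True

# 2. A passive attacker sniffs frames and records the MAC and SSID.
sniffed = {"mac": legit_mac, "ssid": "corp-net"}

# 3. The attacker reconfigures a wireless card to use the sniffed MAC.
attacker_mac = sniffed["mac"]                        # spoofed identity
print(ap.associate(attacker_mac, sniffed["ssid"]))   # True -- the ACL is bypassed
```

Nothing in this exchange lets the access point distinguish the spoofed card from the legitimate one.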
Indeed, given the nature of WLAN broadcast, this name is broadcast in the clear, and any eavesdropper or sniffer can get the network name.
18.5.1.3  Lack of Authentication Mechanism in 802.11
802.11 supports two authentication services: Open System and Shared Key. The type of authentication to be used is controlled by the Authentication Type parameter. The Open System type is a default null authentication algorithm consisting of a two-step process in which the access point requests an identity from the client and then authenticates almost every request from the client. With the Shared Key authentication type, a client is authenticated with a challenge and response. The present 802.11 requires the client to request authentication over a secure channel that is independent of the standard. The WEP algorithm currently provides the WLAN standard with this encryption, based on 40-bit and 104-bit secret keys. The client requests authentication using these secret keys generated by WEP. The AP concatenates the secret key with a 24-bit quantity known as an initialization vector (IV), producing a seed for the pseudorandom number generator. The random number generator produces a key sequence, which is then combined with the message text and concatenated with the integrity check value (ICV). The combination {IV, message text, ICV} is put in an IEEE 802.11 data frame, which is then sent to the client requesting authentication as the challenge. The client must then encrypt the challenge packet with the right key. If the client has the wrong key or no key at all, authentication fails and the client cannot be allowed to associate with this access point [7]. However, several research outcomes have shown that this shared key authentication is flawed. It can be broken by a hacker who successfully captures both the clear-text challenge and the station's response encrypted with the WEP key. There is another type of WEP key that is not secure either. This key, called the "static" key, is a 40-bit or 104-bit key statically defined by the system administrator on an access point and on all clients corresponding with that access point. The use of this type of key requires the administrator to manually enter the key on all access points and all clients on the LAN. If a device loses the key, a security problem arises, and the administrator must then change all the keys.
18.5.1.4  Lack of a WEP Key Management Protocol
As we have noted above, the IEEE 802.11 WLAN standard does not have an encryption and authentication mechanism of its own. This mechanism is provided to the standard by WEP. So the lack of a WEP key management protocol in the standard is another serious limitation of the security services offered by the standard. As noted by Arun Ayyagari and Tom Fout, the current IEEE 802.11 security options of using WEP for access control also do not scale well in either large infrastructure-mode networks or ad hoc-mode networks. The problem is compounded further by the lack of an inter-access point protocol (IAPP) in a network with roaming stations and clients [7]. In addition to the above more structural problems, the WLAN also suffers from the following topographical problems:
• First, data is transmitted through WLANs by broadcasting radio waves over the air.
Because radio waves radiate in all directions and travel through walls that \n" }, { "page_number": 427, "text": "416\b\n18  Security in Wireless Networks and Devices\nmake ceilings and floors, transmitted data may be captured by anyone with a \nreceiver in radio range of the access point. Using directional antennas, anyone \nwho wants to eavesdrop on communications can do so. This further means that \nintruders can also inject into the WLAN foreign packets. Because of this, as we \nwill see shortly, the access points have fallen prey to war-drivers, war-walkers, \nwar-flyers, and war-chalkers [8].\nSecond, WLAN introduced mobility in the traditional LANs. This means that \n• \nthe boundaries of WLANs are constantly changing as well as the APs of mobile \ncomputing devices like laptops, personal assistants, and palms, as mobile nodes \nof the WLANs are everywhere. Perhaps the inability of a WLAN to control \naccess to these APs by intruders is one of the greatest security threats to WLAN \nsecurity. Let us give several examples to illustrate this.\nLet us look at an extensive list of the security risks to the Wi-Fi. The majority of \nthese security risks fall among the following five major categories:\nInsertion attacks\n• \nInterception and monitoring wireless traffic\n• \nMisconfiguration\n• \nJamming\n• \nClient to client attacks\n• \nMost of these can be found at the Wireless LAN Security FAQ site at www.iss.\nnet/WLAN-FAQ.php [9].\n18.5.1.5  War-Driving, War-Walking, War-Flying, and War-Chalking\nBased on the movie, “War Games,” war-walking, war-driving, war-flying, and war-\nchalking all refer to the modes of transportation for going around and identifying \nvarious access points. As pointed out by Beyers et al. [8], war-walking, war-driving, \nand war-flying have resulted in identifying large numbers of wide open unsecure \naccess points in both cities and countryside.\n18.5.1.6  Insertion Attacks\nThese result from trusted employees or smart intruders placing unauthorized devices \non the wireless network without going through a security process and review. There \nare several types of these including the following:\nPlug-in Unauthorized Clients, in which an attacker tries to connect a wireless \n• \nclient, typically a laptop or PDA, to a base station without authorization. Base \nstations can be configured to require a password before clients can access. If there \nis no password, an intruder can connect to the internal network by connecting a \nclient to the base station.\n" }, { "page_number": 428, "text": "18.5  Security in Wireless Networks\b\n417\nPlug-in Unauthorized Renegade Base Station, in which an internal employee \n• \nadds his/her own wireless capabilities to the organization network by plugging a \nbase station into the LAN.\n18.5.1.7  Interception and Monitoring Wireless Traffic Attacks\nThis is a carry over from LANs. These intercepts and monitoring attacks are sniff \nand capture, session hijacking, broadcast monitoring, arpspoof monitoring, and \nhijacking. In arpspoof monitoring, an attacker using the arpspoof technique can trick \nthe network into passing sensitive data from the backbone of the subnet and routing \nit through the attacker’s wireless client. This provides the attacker both access to \nsensitive data that normally would not be sent over wireless and an opportunity to \nhijack TCP sessions. Further intercepts and monitoring attacks include hijacking SSL \n(Secure Socket Layer) and SSH (Secure Shell) connections. 
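The ARP-spoofing redirection mentioned above rests on the fact that classic ARP accepts any reply without verification. The toy model below runs entirely in memory with made-up addresses and function names; it simply shows how one forged reply rebinds the gateway's IP address in a victim's ARP cache, so that traffic meant for the gateway is handed to the attacker's wireless client instead.

```python
# Toy in-memory model of ARP cache poisoning; no real network I/O.
# Addresses and function names are invented for illustration.

arp_cache = {"10.0.0.1": "aa:aa:aa:aa:aa:01"}   # victim's view: gateway IP -> gateway MAC

def receive_arp_reply(cache, ip, mac):
    # Classic ARP trusts any reply: the cache is updated without verification.
    cache[ip] = mac

def next_hop_mac(cache, dest_ip):
    # Frames for the gateway are addressed to whatever MAC the cache holds.
    return cache.get(dest_ip, "ff:ff:ff:ff:ff:ff")

print(next_hop_mac(arp_cache, "10.0.0.1"))   # aa:aa:aa:aa:aa:01  (legitimate gateway)

# The attacker sends a forged, unsolicited reply claiming to be the gateway.
receive_arp_reply(arp_cache, "10.0.0.1", "bb:bb:bb:bb:bb:02")

print(next_hop_mac(arp_cache, "10.0.0.1"))   # bb:bb:bb:bb:bb:02  (attacker's wireless client)
# Traffic meant for the gateway now flows through the attacker, who can read it
# or hijack the TCP sessions riding on it.
```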
In addition, an attacker \ncan also intercept wireless communication by using a base station clone (evil twin) \nby tricking legitimate wireless clients to connect to the attacker’s honeypot network \nby placing an unauthorized base station with a stronger signal within close proximity \nof the wireless clients that mimics a legitimate base station. This may cause unaware \nusers to attempt to log into the attacker’s honeypot servers. With false login prompts, \nthe user unknowingly can give away sensitive data such as passwords.\n18.5.1.8  AP and Client Misconfigurations and Attack\nBase stations out of the box from the factory are configured in the least secure mode or \nnot configured at all. System administrators are left with the task of configuring them \nto their best needs. This is not always the case. Studies have shown that some system \nadministrators configure base stations and others do not. In these studies, each vendor \nand system administrator had different implementation security risks. For example, \neach of the base station models came with default a server set IDs (SSIDs). Lucent, as \none of the three base station manufacturers, has Secure Access mode which requires \nthe SSID of both client and base station to match. By default, this security option is \nturned off at shipping. In the nonsecure access mode, clients can connect to the base \nstation using the configured SSID, a blank SSID, and the SSID configured as “any.” \nIf not carefully configured, an attacker can use these default SSIDs to attempt to pen­\netrate base stations that are still in their default configuration unless such base stations \nare configured right. Also, most base stations today are configured with SSID that \nacts as a single key or password that is shared with all connecting wireless clients. \nThis server set ID suffers from the same problems as the original SSID.\nAdditionally, a base station SSID can be obtained by a bruteforce dictionary attack \nby trying every possible password. Most companies and people configure most \npasswords to be simple to remember and therefore easy to guess. Once the intruder \nguesses the SSID, one can gain access through the base station. There are many \n" }, { "page_number": 429, "text": "418\b\n18  Security in Wireless Networks and Devices\nother ways that SSIDs can be compromised ranging from disgruntled employees to \nsocial engineering methods.\n18.5.1.9  SNMP Community Words\nMany of the wireless base stations that deploy the Simple Network Management \nProtocol (SNMP) may fall victim to community word attacks. If the community \nword such as “public” is not properly configured, an intruder can read and poten­\ntially write sensitive information and data on the base station. If SNMP agents are \nenabled on the wireless clients, the same risk applies to them as well.\n18.5.1.10  Client Side Security Risk\nClients connecting to the base station store sensitive information for authenticating \nand communicating to the base station. For example, Cisco client software stores the \nSSID in the Windows registry, and it stores the WEP key in the firmware. Lucent/\nCabletron client software stores the SSID and WEP-encrypted information in the \nWindows registry as well. Finally 3Com client software also stores the SSID and \nWEP in the Windows registry. 
If the client is not properly configured, access to this \ninformation by a hacker is easy.\n18.5.1.11  Risks Due to Installation\nBy default, all installations are optimized for the quickest configuration to enable \nusers to successfully install out-of-the-box products.\n18.5.1.12  Jamming\nJamming leading to denial-of-service attacks can also be carried out in wireless net­\nworks. For example, an attacker with the proper equipment and tools can easily flood \nthe 2.4-GHz frequency so that the signal to noise drops so low that the wireless network \nceases to function. Sometimes nonmalicious intents like the use of cordless phones, \nbaby monitors, and other devices such as Bluetooth that operate on the 2.4-GHz fre­\nquency can disrupt a wireless network because there are so many in this band.\n18.5.1.13  Client-to-Client Attacks\nTwo wireless clients can talk directly to each other bypassing the base station. \nBecause of this, each client must protect itself from other clients. For example, \nwhen a wireless client such as a laptop or desktop is running TCP/IP services like \n" }, { "page_number": 430, "text": "18.5  Security in Wireless Networks\b\n419\na Web server or file sharing, communicating with an attacker is vulnerable to this \nattacker exploiting any misconfigurations or vulnerabilities on the client. Similarly, \na wireless client can flood other wireless clients with bogus packets, creating a \ndenial-of-service attack. Finally, a wireless client can infect other wireless clients; \nthis threat is called a hybrid threat.\n18.5.1.14  Parasitic Grids\nParasitic grids are actually self-styled free “metro” wireless networks that pro­\nvide attackers and intruders completely untraceable anonymous access. Trying \nto locate and trace attackers using the parasitic grid becomes an impossible task, \nfor example, something similar to hotspots, although hotspots are not maliciously \nused. Hotspots are Wi-Fi access point areas provided by businesses to give their \ncustomers access to the Internet. Hotspots are becoming very popular as more and \nmore companies such as Starbucks, McDonalds, and start-ups to attract younger \ncustomers. They are also being deployed at airports, hotels, and restaurants for the \nsame reasons.\n18.5.2  Best Practices for Wi-Fi Security\nAlthough the reliance of WLAN on SSID, open or shared key, static or MAC \nauthentication definitely offers some degree of security, there is more that needs \nto be done to secure WLANs. Even though best security practices for WLANs are \nmeant to address unique security issues, specifically suited for wireless technol­\nogy, they must also fit within the existing organization security architecture. 
Any \nsecure WLAN solution must address the following issues, some of which we have \ndiscussed in the preceding section [4]:\n802.1X authentication standards\n• \nWEP key management\n• \nUser and session authentication\n• \nAccess point authentication\n• \nDetection of rogue access points\n• \nUnicast key management\n• \nClient session accounting records\n• \nMitigation of network attacks\n• \nWLAN management\n• \nOperating system support\n• \nIn addition to addressing those issues, it must also include the following basic \nand minimum set of routine but low-level steps [9]:\nTurn on basic WEP for all access points\n• \nCreate a list of MAC addresses that are allowed to access the WLAN\n• \n" }, { "page_number": 431, "text": "420\b\n18  Security in Wireless Networks and Devices\nUse dynamic encryption key exchange methods as implemented by various \n• \nsecurity vendors\nKeep software and patches on all access points and clients updated\n• \nCreate access point passwords that cannot be guessed easily\n• \nChange the SSID on the access point, and block the SSID broadcast feature\n• \nMinimize radiowave leakage outside the facility housing the WLAN through \n• \naccess point placement and antenna selection.\nFor large organizations that value data, strong protection mechanisms must be \n• \nput in place. Such mechanisms may include Kerberos or RADIUS, end-to-end \nencryption, password protection, user identification, Virtual Private Networks \n(VPN), Secure Socket Layer (SSL), and firewalls. All these are implementable.\nChange the default SSID and password protection drives and folders.\n• \nRealize, however, that these are basic and many things change. They just assure \nyou of minimum security.\n18.5.3 Hope on the Horizon for WEP\nBy the writing of this book, a new IEEE 802.11 Task Group i (TGi) has been set \nup to develop new WLAN security protocols. TGi has defined the Temporal Key \nIntegrity Protocol (TKIP, a data link security protocol), to address WEP vulnerabili­\nties. TKIP, as a front-end process to WEP, is a set of algorithms that adapt the WEP \nprotocol to address WEP-known flaws in three new elements [10]:\nA message integrity code (MIC), called \n• \nMichael, to defeat forgeries\nA packet sequencing discipline, to defeat replay attacks; and\n• \nA per-packet key mixing function, to prevent FMS attacks.\n• \nSince TKIP is not ideal and it is expected to be used only as a temporary patch \nto WEP. TGi is also developing another long-term technology solution to WEP. The \ntechnology is called Counter-Mode-CBC-MAC Protocol (CCMP). CCMP addresses \nall known WEP deficiencies. It uses the Advanced Encryption System (AES) as the \nencryption algorithm. CCMP protocol has many features in common with TKIP \nprotocol [10]. In addition to these two, TGi is also defining WLAN authentication \nand key management enhancements.\nExercises\n  1.\t List the devices that can be used in a wireless network. How are they connected \nto form a wireless network?\n  2.\t Infrared devices exchange beams of light to communicate. Is this the method \nused in wireless communication? Explain how a communication link between \ntwo wireless devices is established.\n" }, { "page_number": 432, "text": "Advanced Exercises\b\n421\n  3.\t Bluetooth devices communicate using radio waves. What are the differences \nbetween Bluetooth technology and 802.11? 
What are the weaknesses in Blu­\netooth technology compared to 802.11?\n  4.\t We have discussed at length the problems found in the 802.11 technology in \nassuring privacy and authentication. Suppose you are in charge of a LAN and \nyou want to add a few access points to allow a limited use of wireless devices. \nHow would you go about setting this network up?\n  5.\t Unlike Infrared wireless devices, Bluetooth technology uses radiowaves to \ncommunicate. What are the advantages of Bluetooth over these devices and \nalso over 802.11 technology?\n  6.\t Study and discuss the reasons why WEP never realized its stated objectives. \nWhat are those objectives?\n  7.\t How does WPA, the new stop gap measure to plug the loopholes in WEP, go \nabout solving the problems of WEP? Why is the technology considered a short-\nterm technology? What long-term technology is being considered?\n  8.\t Study the security mechanisms in the new 802.11i and discuss how these mech­\nanisms will solve the security problems of Wi-Fi.\n  9.\t One of the weakest points of WLAN is the access points. Discuss the most \neffective ways to close this weak point in the WLAN technology.\n10.\t Many security experts have said that the biggest security problem in wireless \ntechnology is not to use security technology at all. Study this problem. Carry \nout a limited research and try to quantify the problem.\nAdvanced Exercises\n  1.\t Some have likened the alphabet used in the 802.11 standard to an alphabet soup \nof confusion. Study the history of lettering in the 802.11 and how it is supposed \nto improve security. Does it work?\n  2.\t Suppose you are in charge of information security in a large organization where \nthe value of data justifies strong protection in the hybrid network resulting from \nboth LAN and WLAN. What are these additional security measures, and how \nwould you go about implementing them?\n  3.\t Study and discuss how and in what type of wireless network or hybrid each \none of the following methods enhances the security of the chosen network: \nRADIUS, Kerberos, end-to-end encryption, password protection, user identi­\nfication, Virtual Private Network (VPN), Secure Socket Layer (SSL), and fire­\nwalls.\n  4.\t The IEEE 802.11 Task Group i (TGi) is developing new WLAN security pro­\ntocols named TKIP and CCMP. CCMP is envisioned to supersede WEP and \nTKIP. Research and study these efforts and comment on the progress.\n  5.\t It has been given different names, based on the movie War Games. Some have \ncalled it war-driving, others war-walking. Whatever the name, AP scanning is \n" }, { "page_number": 433, "text": "422\b\n18  Security in Wireless Networks and Devices\nbecoming a hobby and a lucrative source of data from the high proliferation of \nwireless cards and mobile computing devices. There is a serious moral and ethi­\ncal dilemma associated with the “sport.” Research and discuss such dilemma \nand propose solutions, if any.\nReferences\n  1.\tStallings, W. Wireless Communication and Networking. Upper Saddle River, NJ: Prentice \nHall, 2002.\n  2.\tRitchie, C. Sutton, R., Taylor, C., and Warneke, B. The Dynamics of Standards Creation in the \nGlobal Wireless Telecommunications Markets. http://www.sims.berkeley.edu/courses/is224/\ns99/GroupD/project1/paper1.html\n  3.\tNicopolitidis, P., Papadimitriou, G.I., Obaidat, M.S., and Pomportsis, A.S.. 
“Third Generation \nand Beyond Wireless Systems: Exploring the Capabilities of Increased Data Transmission \nRates.” Communications of the ACM, 2003, 46(8).\n  4.\tCisco Aironet Wireless LAN Security Overview. http://www.cisco.com/warp/public/cc/pd/\nwitc/ao350ap/prodlit/a350w_ov.htm\n  5.\tHousely, R. and Arbaugh, W. “Security Problems in 802.11-Based Networks.” Communica­\ntions of the ACM, 2003, 46(5).\n  6.\tKeizer, G. “WLAN Security Neglected, Study Shows.” TechWeb News. June 27, 2003.\nhttp://www.techweb.com/wire/story/TWB20030627S006\n  7.\tAyyagari, A. and Tom, F. Making IEEE 802.11 Networks Enterprise-Ready. Microsoft Devel­\nopment. http://www.microsoft.com/os/\n  8.\tByers, S. and Dave, K. “802.11b Access Point Mapping”. Communications of the ACM, \n2003, 46(5)\n  9.\tCox, J. WLAN Security: Users Face Complex Challenges. http://www.newsfactor.com/\nperl/story/22066.html.\n10.\tCam-Winget, N., Housley, R., Wagner, D., and Walker, J. “Security Flaws in 802.11 Data Link \nProtocols.” Communications of the ACM, 2003, 46(5).\n" }, { "page_number": 434, "text": "J.M. Kizza, A Guide to Computer Network Security, Computer Communications and \n Networks, DOI 10.1007/978-1-84800-917-2_19, © Springer-Verlag London Limited 2009\n \n423\nChapter 19\nSecurity in Sensor Networks\n19.1 Introduction\nThe rapid development of wireless technology in the last few years has created new \ninterest in low-cost wireless sensor networks. Wireless sensor networks (WSNs) or \njust sensor networks are grids or networks made of spatially distributed autonomous \nbut cooperating tiny devices called sensors all of which have sensing capabilities \nthat are used to detect, monitor, and track physical or environmental conditions, \nsuch as temperature, sound, vibration, pressure, motion or pollutants, at different \nlocations [1]. A sensor, similar to that in Fig. 19.1, is a small device that produces \na measurable response to a change in a physical condition. Sensor nodes can be \nindependently used to measure a physical quantity and to convert it into a signal \nthat can be read by an observer or by an instrument [1]. The network may consist \nof just a few or thousands of tiny, mostly immobile, usually, randomly deployed \nnodes, covering a small or large geographical area. In many cases, sensor networks \ndo not require predetermined positioning when they are randomly deployed making \nthem viable for inaccessible terrains where they can quickly self-organize and form \na network on the fly.\nThe use of sensors to monitor physical or environmental conditions is not new. \nSensors have been used in both mechanical and electrical systems for a long time. \nHowever, what is new and exciting is that the new sensor nodes are now fitted with \non-board tiny processors forming a new class of sensors that have the ability to par-\ntially process the collected data before sending it to the fusing node or base station. \nThe sensor nodes now also have sophisticated protocols that help in reducing the \ncosts of communications among sensors and can implement complex power saving \nmodes of operations depending on the environment and the state of the network [2]. \nThe accuracy of the data gathered has also greatly improved.\nThese recent advances have opened up the potential for WSN. According to \nDavid Culler et al. 
[3], wireless sensor networks could advance many scientific pur-\nsuits while providing a vehicle for enhancing various forms of productivity, includ-\ning manufacturing, agriculture, construction, and transportation. In the military, \nthey are good for command and control, intelligence and surveillance. In health, \nthey are beneficial in monitoring patients, in commercial application they can be \n" }, { "page_number": 435, "text": "424 \n19 Security in Sensor Networks\nused in managing inventory, monitoring production lines and product quality, and \nmonitoring areas prone to disasters [4]. New technologies are creating more power-\nful and yet smaller devices. This miniaturization trend is leading us to ubiquitous \ncomputing capacities that are exponentially faster and cheaper with each passing \nday. With these developments, researchers are refocusing and developing tech-\nniques that use this miniaturization process to build radios and exceptionally small \nmechanical structures like sense fields and forces in physical environments that \ncould only be imagined just a few years ago. Culler et al. believe that these inex-\npensive, low-power communication devices can be deployed throughout a physical \nspace, providing dense sensing close to physical phenomena, processing and com-\nmunicating this information, and coordinating actions with other nodes including a \nbase station [3].\nHowever, as wireless sensor networks with vast potential of applications unfold \nand their role in dealing with sensitive data increases, the security of these networks \nhave become one of the most pressing issues in further development of these net-\nworks. This chapter gives a general discussion of the challenges and limitations of \nWSNs and how these challenges and limitations contribute to the security prob-\nlems faced by the sensor network. We survey several interesting security approaches \naimed at enhancing security, and we conclude by considering several potential \nfuture directions for security solutions.\n19.2 The Growth of Sensor Networks\nWSNs have evolved slowly from simple point-to-point networks with simple \ninterface protocols providing for sensing and control information and analog sig-\nnal providing a single dimension of measurement to the current large number and \nsophisticated wireless sensor nodes networks. The development of the micropro-\ncessor boasted the sensor node with increased onboard intelligence and processing \ncapabilities, thus providing it with different computing capabilities. The sensor node \nis now a microprocessor chip with a sensor on board. The increased intelligence in \nFig. 19.1 A Wireless Sensor Node\n \nPower Source\nCommunication\nModule\nCentral Unit\n(Memory, \nProcessor)\nSensing \nModule\nActuator\n" }, { "page_number": 436, "text": "19.3 Design Factors in Sensor Networks \n425\nthe node and the development of digital standards such as RS-232, RS-422, and \nRS-485 gave impetus to the creation of numerous sensor networking schemes [5]. 
In addition, the popularization of microcontrollers and the development of BITBUS, a field bus developed by Intel to interconnect stand-alone control units and terminals so that they could exchange data telegrams, further improved sensor node communication capabilities, bringing the dream of sensor networks closer [5].

Another outstanding development that further paved the road to fully functioning sensor networks was the Manufacturing Automation Protocol (MAP), developed by General Motors to reduce the cost of integrating various networking schemes into a plant-wide system. As Jay Warrior observes, this in turn led to the Manufacturing Messaging Specification (MMS), a specification that made it possible for networked nodes to exchange real-time data and supervisory control information [5]. With the development of other communication protocols that allowed simultaneous analog and digital communication for smart instruments, the sensor network as we know it today was born. Currently, there is a whole spectrum of sensor network protocols for the many different types of sensor networks in use today.

19.3 Design Factors in Sensor Networks

Several factors influence the design philosophy of sensor networks. Chief among them are whether the nodes are stationary or mobile and whether the network is deterministic or self-organizing. Most sensor network applications use stationary nodes; however, a good number of applications use mobile nodes. In that case the network is bound to use more energy, because of the need to track the moving nodes and the increased bandwidth required for periodic reporting, which increases traffic. In a deterministic topology, the positions of the nodes and the routes in the network are predetermined and the nodes are manually placed. In a self-organizing topology, node positions are random and the routes are also random and unreliable, so routing becomes the main design concern. Also, since self-organizing sensor networks demand a lot of energy, direct routing is not desirable and multi-hop routing is more energy efficient; multi-hop routing, however, requires considerable routing management. In addition to routing and energy, other factors, discussed below, also influence the design philosophy of sensor networks [4].

19.3.1 Routing

Communication in wireless sensor networks, as in traditional networks, is based on a protocol stack with several layers, as seen in Fig. 19.2. This stack combines power and routing awareness, integrates data with networking protocols, communicates power efficiently through the wireless medium, and promotes cooperation between nodes [4]. To achieve all this, the stack consists of five layers and three management planes: the physical, data link, network, transport, and application layers, together with the power management, mobility management, and task management planes.
This stack is different from those of traditional networks made of nonsensor nodes, such as the TCP/IP and ISO OSI stacks.
• Physical Layer – responsible for several tasks, including frequency selection, carrier frequency generation, signal detection, modulation, and data encryption.
• Data Link Layer – responsible for a number of tasks, including multiplexing of data streams, data frame detection, medium access, and error control.
• Network Layer – responsible for network routing. Routing in sensor networks, unlike in traditional networks, is influenced by the following [4]:
  – power efficiency is an important consideration;
  – sensor networks are mostly data-centric;
  – data aggregation is useful only when it does not hinder the collaborative efforts of the sensor nodes; and
  – an ideal sensor network has attribute-based addressing and location awareness.
• Transport Layer – not yet in place. Unlike traditional networks, where protocols such as TCP make end-to-end communication schemes possible, sensor networks have no global addressing, and the development of global addressing schemes is still a challenge.
• Application Layer – also not yet available. Although there are many application areas for sensor networks, application layer protocols are yet to be developed.

Based on the above discussion, therefore, sensor networks are largely still multi-hop wireless networks whose nodes can act as either hosts or routers, forwarding packets to other nodes in the network.

Fig. 19.2 The sensor network protocol stack: application layer (user queries, external database), transport layer (application processing, aggregation, query processing), network layer (adaptive topology, geo-routing), data link layer (MAC; time and location adaptive), and physical layer (communication, sensing, actuation), together with the power management, mobility, and task management planes.

In many sensor networks, the information collected from a large number of sensors is either lightly processed locally at the node or transmitted unprocessed to the base station or to other sensors, using one of three routing techniques: one-to-many, many-to-one, and one-to-one (point-to-point). The two most fundamental communication primitives, however, are broadcast (one-to-many) and point-to-point (one-to-one).

Broadcast Communication. The broadcast routing technique is used extensively in wireless sensor networks because of the large number of sensor nodes in any such network. Broadcasting as a means of node communication is highly desirable in this environment because of the limited signal range of each node. In broadcast mode, the node that initiates the broadcast of a packet is referred to as the source or sender node, and all others as receivers. The receivers of the broadcast packet then forward it to their nearest adjacent neighbors, which causes the packet to move throughout the network so that all network nodes eventually receive it.
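The short sketch below simulates the flooding behavior just described: the source broadcasts a packet, and every node that hears it for the first time rebroadcasts it to its own radio neighbors, so the packet eventually reaches every connected node. It is a simplified illustration with an invented topology, ignoring collisions, losses, and energy costs.

from collections import deque

def flood(neighbors, source):
    """Naive broadcast flooding with duplicate suppression.
    neighbors: dict mapping a node to the list of nodes within its radio range."""
    received = {source}              # nodes that have already received the packet
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for peer in neighbors[node]:
            if peer not in received:     # rebroadcast only the first copy heard
                received.add(peer)
                queue.append(peer)
    return received

# Small example topology; "bs" is the base station that starts the broadcast.
topology = {"bs": ["a"], "a": ["bs", "b", "c"], "b": ["a", "d"],
            "c": ["a", "d"], "d": ["b", "c"]}
print(flood(topology, "bs"))         # every reachable node ends up with the packet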
Point-to-Point Communication. Though less common, point-to-point routing is still important for many applications in wireless sensor networks, including games built on sensor networks and data-centric storage, in which nodes store information about detected events using the geographic location as the key. Point-to-point routing can also be used to send data from the detecting node to the storage node [6].

19.3.1.1 Routing Protocols

There are several routing protocols in use today for sensor networks, including data-centric, hierarchical, and location-based protocols [7].

Data-centric routing. Because a sensor network may have thousands of randomly deployed nodes, the network-wide external addressing and network-layer-managed routing protocols found in traditional networks are inconceivable here. In data-centric routing, a sink node desiring data sends an attribute-based query to the surrounding nodes in the region; the attributes in the query specify the desired properties of the data, and the sink then waits for the data [7]. If every node instead sent its data to all other nodes in the region, the result would be considerable data redundancy and an inefficient use of scarce energy. For these reasons, data-centric routing techniques are more resource efficient. Common data-centric routing protocols include Sensor Protocols for Information via Negotiation (SPIN) and directed diffusion [7].

Hierarchical routing. Hierarchical routing involves multi-hop communication and the aggregation and fusion of data within clusters of nodes in order to decrease the number of messages transmitted to the sink nodes, which conserves energy. Several hierarchical protocols are in use today, including LEACH, PEGASIS, TEEN, and APTEEN [8].

Location-based routing. In location-based routing, each node maintains a location list consisting of location information for a number of nodes in a region of the sensor network. Each node periodically updates its location list by receiving the locations and location lists of all its direct neighbors. It also, in turn, sends its own location and location list to all its adjacent nodes. This keeps the location lists of all nodes in the region current and up to date.

19.3.2 Power Consumption

Most sensor networks are entirely self-organizing and operate with extremely limited energy and computational resources. Because many nodes sit in inaccessible environments, replenishing their power may be almost impossible. The life of a sensor node is therefore uncertain, and the node may not be able to transmit critical data when desired. The functionality of the network thus depends on the rate at which the node units consume energy.

19.3.3 Fault Tolerance

If a sensor network suffers the failure of any one sensor node, we would like the network to be able to sustain all its functionalities. That is, the sensor network should be as reliable as possible and continue to function as well as possible in spite of the failed node.

19.3.4 Scalability

We want a network in which the addition of more nodes has no adverse effect on the functionality of the network.

19.3.5 Production Costs

Wireless sensor networks most often use large numbers of sensor nodes. The unit cost of each individual sensor node plays a crucial role in determining the overall cost of the entire network. We would like a well-functioning network with the lowest possible per-unit cost for individual nodes.

19.3.6 Nature of Hardware Deployed

A sensor node consists of four basic parts: the sensing unit, the processing unit, the transceiver unit, and the power unit.
All these units must be packaged in a very small, matchbox-sized package. In addition, all these units, and the overall sensor node, must consume very little power.

19.3.7 Topology of Sensor Networks

Because a typical sensor network may contain thousands of sensor nodes deployed randomly throughout the field of observation, the resulting wireless sensor network may have uneven densities, depending on how the nodes were deployed. Nodes may be deployed by dropping them from a plane, by careful placement, or by artillery. Also, not every deployed sensor will work as expected. The topology of the resulting network may therefore determine the functionality of the wireless sensor network.

19.3.8 Transmission Media

In a wireless sensor network, the nodes are linked by a wireless medium. The medium may be radio (such as RF or Bluetooth), infrared, or optical. Both infrared and optical links require an unobstructed line of sight. The functionality of the network may depend on these media.

19.4 Security in Sensor Networks

Modern wireless sensor networks often consist of hundreds to thousands of inexpensive wireless nodes, each with some computational power and sensing capability, usually operating in random, unsupervised environments. The sensors in the network act as “sources” as they detect environmental events, either continuously or intermittently whenever the occurrence of an event triggers the signal detection process. The data picked up is either lightly processed locally by the node and then sent off, or simply sent off, to the “sink” node or a base station. This kind of environment presents several security challenges.

19.4.1 Security Challenges

The most pressing of these challenges include the following:

19.4.1.1 Aggregation

Data aggregation in sensor networks is the process of gathering data from different sensor “source” nodes and expressing it in summary form before it is sent on to a “sink” node or to a base station. There are two types of data aggregation: in-stream aggregation, which occurs over a single stream, generally over a time window, and multi-stream aggregation, which occurs across the values of multiple streams, either at the same time or over a time window. Data aggregation is essential in sensor networks because, as it combines data from different “source” nodes, it eliminates redundancy, minimizing the number of transmissions and hence saving energy. In fact, significant energy gains are possible with data aggregation; the gains are greatest when the number of sources is large and when the sources are located relatively close to each other and far from the sink [9]. However, as sensor network applications expand to include increasingly sensitive measurements of everyday life, preserving data accuracy, efficiency, and privacy becomes an increasingly important concern, and this is difficult to do with many current data aggregation techniques.
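As a simple illustration of the idea, the sketch below folds the raw readings of several source nodes into one summary record before anything is forwarded toward the sink, which is where the energy savings, and also the accuracy and privacy concerns, come from. The record format is invented for this example and is not taken from any particular aggregation protocol.

def aggregate(readings):
    """Combine raw readings from several source nodes into one summary record.
    readings: dict mapping a node id to its list of numeric samples."""
    samples = [value for node_samples in readings.values() for value in node_samples]
    return {
        "count": len(samples),                       # raw samples folded into the record
        "min": min(samples),
        "max": max(samples),
        "mean": round(sum(samples) / len(samples), 2),
    }

raw = {"n1": [21.0, 21.2], "n2": [20.8, 21.1], "n3": [35.4]}
print(aggregate(raw))    # one small record is forwarded instead of five raw samples

Note how the single outlying reading from n3 skews the mean: this is exactly the kind of distortion a single compromised or faulty node can introduce when the aggregation process is not secured.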
19.4.1.2 Node Capture/Node Deployment

Node compromise is a situation in which a sensor node can be completely captured and manipulated by the adversary [10]. The conditions for node compromise arise because the sensor nodes of a wireless sensor network are often randomly deployed in inaccessible or hostile environments. Usually these nodes are also unsupervised and unattended. In this kind of environment, nodes are hard to defend and easy to compromise or capture outright. There are several ways to capture a sensor node. One approach is physical capture, in which an adversary physically seizes the node because it sits in a hostile or unprotected environment. In another approach, software is used: software-based capture occurs when an attacker uses software, such as a virus, to take control of a node.

19.4.1.3 Energy Consumption

Sensor networks are largely self-organizing and operate with extremely limited energy and computational resources. To conserve energy, sensor nodes minimize their transmit power to the point of barely maintaining acceptable connectivity. This may prevent the network from running the security solutions, such as strong cryptographic algorithms, needed to protect critical data.

19.4.1.4 Large Numbers of Nodes/Communication Challenges

Because modern wireless sensor networks consist of hundreds to thousands of inexpensive wireless nodes, this large number of nodes makes it challenging to guarantee secure, reliable, and sometimes ad hoc communication among sensor nodes or groups of sensor nodes, which may themselves be mobile units. For example, since sensor nodes are typically battery-driven, the sheer number of them in a network makes it a challenge to find and replace or recharge batteries.

19.4.2 Sensor Network Vulnerabilities and Attacks

Because of these limitations and the high dependency on the physical environment of deployment, sensor networks pose unique challenges, and the security techniques used in traditional networks, such as secrecy, authentication, privacy, cryptography, and robustness to denial-of-service attacks, cannot be applied directly [11]. This means that security mechanisms fit for traditional networks cannot be used wholesale in sensor networks. Yet there are no comprehensive security mechanisms and best practices for sensor networks. One of the reasons traditional network security mechanisms and best practices fail with sensor networks is that many of them are taken and viewed as standalone measures. To achieve any semblance of the desired security in a sensor network, security mechanisms and best practices must be part of, and embedded into, every design aspect of the network, including its communication protocols and deployment topologies. For example, we cannot talk about the security of a sensor network if that network lacks secure routing protocols. Secure routing protocols are essential security entities in sensor networks, because a compromised routing protocol compromises the network nodes, and a single compromised sensor node can compromise the entire network. Current sensor network routing protocols suffer from many security vulnerabilities, as we will see shortly.

We have established that sensor networks have a number of issues that separate them from traditional networks. Among these are the vulnerability of sensor nodes to physical compromise, significant power and processing constraints, and the aggregation of node outputs. Physical vulnerability includes physical node access and compromise as well as local eavesdropping.
Power and processing constraints prevent sensor networks from running strong encryption, and the aggregation of node outputs may grossly obscure the effects of a malicious attack as it spreads throughout the network.

Sensor network adversaries target and exploit these weaknesses and other loopholes embedded within these limitations. Let us look at some of these next.

19.4.2.1 Attacks

There are several attack types, including eavesdropping, disruption, hijacking, and rushing [12, 13]:

Eavesdropping. Here, the attacker (eavesdropper) aims to determine the aggregate data being output by either a node or the sensor network. The attacker captures messages from the network traffic, either by listening to the traffic transmitted by the nodes for some time or by directly compromising the nodes. There are two types of eavesdropping:
• Passive: the attacker's presence on the network remains unknown to the sensor nodes, and the attacker uses only the broadcast medium to eavesdrop on all messages.
• Active: the attacker actively attempts to discern information by sending queries to sensors or aggregation points, or by attacking sensor nodes.

Disruption. The intent of the attacker here is to disrupt the sensor's working. It is usually done in one of two ways:
• Semantically, where the attacker injects messages, corrupts data, or changes values in order to render the aggregated data corrupt or useless. Examples of this type of attack include the following [14]:
  – Routing loop: an attacker injects malicious routing information that causes other nodes to form a routing loop, so that all packets injected into the loop go round in circles, wasting precious communication and battery resources.
  – General DoS attacks: an attacker injects malicious information or alters the routing setup messages, preventing the routing protocol from functioning correctly.
  – Sybil attack: a malicious node controlled by the attacker creates multiple fake identities in order to perform the desired attacks on the network.
  – Blackhole attack: a malicious node controlled by the attacker advertises a short distance to all destinations, attracting traffic destined for those destinations into the black hole.
  – Wormhole attack: two nodes are caused to use an out-of-band channel to forward traffic between each other, enabling them to mount several other attacks along the way.
• Physically, where the attacker tries to upset sensor readings by directly manipulating the environment. For example, generating heat in the vicinity of sensors may result in erroneous values being reported.

Hijacking. In this case, the attacker attempts to alter the aggregated output of an application running on several network sensor nodes.

Rushing attack. According to Yih-Chun Hu et al. [13], in an on-demand protocol a node needing a route to a destination floods the network with ROUTE REQUEST packets in an attempt to find a route to the destination. To limit the overhead of this flood, each node typically forwards only one ROUTE REQUEST originating from any Route Discovery. In fact, all existing on-demand routing protocols, such as AODV, DSR, and LAR, forward only the REQUEST that arrives first from each Route Discovery.
In the rushing attack, the attacker exploits this property of the Route Discovery operation. The rushing attack is a very powerful attack that results in denial of service, and it is easy for an attacker to perform.

19.4.3 Securing Sensor Networks

The choice of a good security mechanism for wireless sensor networks depends on the network application and environmental conditions. It also depends on other factors such as sensor node processor performance, memory capacity, and energy. While in traditional networks the standard security requirements, such as availability, confidentiality, integrity, authentication, and nonrepudiation, are sufficient, sensor networks require special security properties in addition, such as message freshness, intrusion detection, and intrusion tolerance.

19.4.3.1 Necessary Conditions for a Secure Sensor Network

Any security solution for sensor networks must preserve the confidentiality, integrity, authentication, and nonreplay of data within the network [15, 14].

Data Confidentiality. Confidentiality of data in a sensor network is achievable only if access to network data is limited to those authorized to have it. Under no circumstances should sensor readings leak outside the network. The standard approach for preventing such leakage is encryption, which requires a secret key that only the intended receivers possess.

Data Integrity. The integrity of data in a network means that the data is genuine and has not been altered without authorization. This implies that data is not altered in transit between the sender and the receiver by an adversary.

Data Authentication. Authentication of both network data and users is very important in preserving network data integrity and preventing unauthorized access to the network. Without authentication mechanisms in place, an attacker can easily access the network and inject dangerous messages, and receivers have no way of knowing whether the data they use originates from a malicious source.

Data Freshness/Nonreplay. Adrian Perrig et al. [15] define sensor network data freshness to mean recent data, which for a sensor network ensures that no adversary can replay old messages. There are two types of freshness: weak freshness, which provides partial message ordering but carries no delay information, and strong freshness, which provides a total order on a request-response pair and allows delay estimation. Weak freshness is required for sensor measurements, while strong freshness is useful for time synchronization within the network [15].
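A common lightweight way to obtain weak freshness is to attach a monotonically increasing counter (or a nonce) to every message and have the receiver reject anything it has already seen. The sketch below shows only that idea; protocols such as SNEP combine the counter with encryption and authentication, and the message format here is invented for the example.

class FreshnessChecker:
    """Reject replayed or stale messages using a per-sender message counter."""

    def __init__(self):
        self.last_counter = {}     # sender id -> highest counter accepted so far

    def accept(self, sender, counter):
        if counter <= self.last_counter.get(sender, -1):
            return False           # replayed or out-of-date message
        self.last_counter[sender] = counter
        return True

checker = FreshnessChecker()
print(checker.accept("node7", 1))   # True: first message from node7
print(checker.accept("node7", 2))   # True: newer counter
print(checker.accept("node7", 1))   # False: replay of an old message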
These conditions are essential for the security of sensor networks. The problem that remains is how to ensure that they hold throughout the wireless sensor network. This is still a big challenge and an open problem for current research in sensor networks.

19.5 Security Mechanisms and Best Practices for Sensor Networks

We cannot ensure the confidentiality, integrity, authentication, and freshness of data in sensor networks without paying attention to the following issues particular to sensor networks:
• Data aggregation. Aggregation is generally a consensus-based compromise, in which missing readings from one or a few nodes may not significantly affect the overall system. Data aggregation is used in sensor networks to reduce energy consumption. With aggregation, however, raw data items from sensor nodes are invisible to the base station, throwing the authenticity of the aggregated data into doubt. Without securing the data aggregation process, a compromised sensor node may forge an aggregate value and mislead the base station into trusting a false reading.
• Antijamming. Attackers can cause denial of service by jamming the base station or any other sensor node in the network. Attackers can also jam sensor radio frequencies. Protocols and services must be in place to stop this from happening.
• Access control. Access control is the process of granting a user access rights to the sensor network resources. It is essential to have effective and efficient access control mechanisms, especially via a base station, to authenticate user requests for access to network resources.
• Key management. Key management is crucial in supporting basic security tenets such as authentication and encryption in sensor networks. As the number of applications for sensor networks grows, an effective key management scheme is required.
• Link layer encryption. The most widely used encryption schemes in sensor networks today involve the pre-distribution of keys broadcast by sensor nodes to thousands of sensors for pairwise exchange of information. This scheme does not square well with known sensor network security problems such as node compromise, low network connectivity, and large communication overhead. A link-layer key management scheme can mitigate these problems and is therefore more efficient.
• Data replication. Data replication is the process of storing the same data on several sensor network nodes, which creates redundancy that in turn improves reliability and availability, and hence security.
• Resilience to node capture. One of the most challenging issues facing sensor networks is node capture. Traditional networks can be given strong physical security; sensor networks, however, are usually deployed in environments with limited physical security, if any.

19.6 Trends in Sensor Network Security Research

Although we have outlined the difficulties of making a sensor network secure, given its inherent limitations, it is possible to design security protocols that address a particular security issue. This is the direction that current sensor network security research is taking [15].

19.6.1 Cryptography

There are several cryptographic approaches being used to secure sensor networks. One of the first tasks in setting up a sensor network is to establish a cryptographic system with secure keys for secure communication.
It is important to be able to encrypt and authenticate messages sent between sensor nodes. However, doing this requires prior agreement between the communicating nodes on the keys used for encryption and authentication. Because of resource constraints in sensor nodes, including limited computational power, many key agreement schemes used in traditional networks, such as trusted-server, public-key, and full pairwise key pre-distribution schemes, are simply not applicable in sensor networks. Pre-distribution of secret keys for all pairs of nodes is not viable because of the large amount of memory it requires when the network is large. Although several approaches have been proposed over the years, the limited computational power of sensor nodes and the huge number of network nodes make public-key cryptographic primitives too expensive, in terms of system overhead, for key establishment [16]. Recent research has tried to handle key establishment and management network-wide with a unique symmetric key shared between pairs of nodes; however, this too does not scale well as the number of nodes grows [16].

19.6.2 Key Management

Because of sensor node deployment and other sensor network limitations, such as limited computation capabilities, it is not possible to manage keys as is usually done in traditional networks, where there may be an established key-sharing relationship among members of the network. Given these difficulties, if a sensor network were to rely on a single shared key, the compromise of just one node, perhaps through capture, would lay the entire network bare. A new framework of key exchange is needed. Eschenauer and Gligor [17] first proposed a key-exchange framework in which each sensor randomly chooses m keys from a key pool of n keys before deployment. After the node is deployed, it contacts all its immediate neighbors to see whether it shares any key with them. What is notable in this solution is the noninvolvement of the base station in the key management framework; a small sketch of this basic scheme follows the list of extensions below. Several extensions of the framework have been developed, including the following [18]:
• The q-composite random key pre-distribution framework, in which two nodes derive a common key by hashing q shared keys. This adds strength to the basic approach, because an intruder now needs to capture communication from more nodes in order to compute a shared key.
• The multi-key reinforcement framework, in which a message from a node is partitioned into several fragments and each fragment is routed through a separate secure path. Its advantages are balanced by its high overhead.
• The random-pairwise framework, in which, in the pre-deployment phase, N unique identities are generated for the network nodes. Each node identity is matched up with m other randomly selected distinct node identities, and a unique pairwise key is generated for each pair of nodes. The new key and the pair of node identities are stored on both key rings. After deployment, the nodes broadcast their identities to their neighbors.
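The sketch below illustrates only the basic Eschenauer–Gligor idea referred to above: every node is pre-loaded with m key identifiers drawn at random from a common pool of n keys, and two neighbors can communicate securely if their key rings intersect. The pool size, ring size, and helper names are arbitrary choices made for the example, not recommended parameters.

import random

POOL_SIZE = 1000     # n: keys in the global pool (illustrative value)
RING_SIZE = 50       # m: keys pre-loaded on each node (illustrative value)
key_pool = {key_id: random.getrandbits(128) for key_id in range(POOL_SIZE)}

def make_key_ring():
    """Pre-deployment step: draw m distinct key identifiers from the pool."""
    return set(random.sample(range(POOL_SIZE), RING_SIZE))

def shared_key(ring_a, ring_b):
    """Post-deployment step: neighbors compare identifiers and settle on a common key."""
    common = ring_a & ring_b
    if not common:
        return None                    # no directly shared key; a path key would be needed
    return key_pool[min(common)]       # any agreed-upon common key identifier will do

node_a, node_b = make_key_ring(), make_key_ring()
print(shared_key(node_a, node_b) is not None)   # True if the two key rings overlap

With these illustrative parameters, two neighboring nodes share at least one key most of the time; in the real scheme the pool and ring sizes are chosen so that the whole deployment is key-connected with high probability.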
Other frameworks include the localized encryption and authentication protocol (LEAP) of Zhu et al. [19]. LEAP observes that there are different types of messages in a sensor network, which leads to the use of four keys: an individual key, a group key, a cluster key, and a pairwise key [18].

19.6.3 Confidentiality, Authentication, and Freshness

It is common knowledge among security professionals that the use of strong cryptographic techniques strengthens the security of communication. This is the case in sensor networks just as in traditional networks. During authentication in a sensor network, the sending node, using a key shared with the receiving node, computes a Message Authentication Code (MAC) on the message about to be transmitted, using a known hash function. The receiving node, upon receipt of the message, applies the shared key and the same hash function to the message to generate a new MAC. If this MAC agrees with the sender's MAC, then the message has not been tampered with, and the receiving node knows that the message was sent by the sending node, since only that node shares the key with the receiving node. Several studies, including SPINS [15], have used this approach. SPINS has two building blocks: the Secure Network Encryption Protocol (SNEP), which provides data confidentiality, two-party data authentication, and data freshness; and the micro Timed, Efficient, Streaming, Loss-tolerant Authentication protocol (µTESLA), which provides authentication for node streaming broadcasts. In addition to SPINS, TinySec [20], which also supports message confidentiality, integrity, and authentication in wireless sensor networks, uses this approach. There are several other works on message confidentiality, authentication, and integrity, including that of Perrig et al. [15].
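The sketch below shows the shared-key MAC check just described, using HMAC-SHA256 from Python's standard library purely for illustration; real sensor-network protocols such as SPINS and TinySec use far lighter block-cipher-based MACs with truncated tags to fit the nodes' constraints, and the key and message strings here are placeholders.

import hmac
import hashlib

SHARED_KEY = b"pairwise key agreed during key establishment"   # placeholder key

def send(message: bytes):
    # Sender: compute a MAC over the message with the shared key; transmit both.
    tag = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return message, tag

def receive(message: bytes, tag: bytes) -> bool:
    # Receiver: recompute the MAC and compare. A match means the message was not
    # tampered with and was produced by a holder of the shared key.
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg, tag = send(b"reading=21.4C;counter=42")
print(receive(msg, tag))                           # True: authentic message
print(receive(b"reading=99.9C;counter=42", tag))   # False: altered in transit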
19.6.4 Resilience to Capture

While sensor networks, because of their size and deployment, are ideal for information gathering and environmental monitoring, node compromise poses a very serious security problem for them. Existing ad hoc security solutions can address a few security problems on a limited number of nodes, but many of these solutions do not scale as the number of nodes in the network grows. Also, when the number of nodes is high and the nodes are, as is typical, unattended, they are prone to compromise.

To overcome this problem, Yang et al. [20] have proposed a novel location-based key management solution built on two techniques: they bind symmetric secret keys to geographic locations and then assign those location-bound keys to sensor nodes based on the nodes' deployed locations. There are two approaches to this scheme: location-binding keys and location-based keys. In both approaches, the network terrain is divided into a grid, and each cell of the grid is associated with multiple keys. Each node stores one key for each of its local neighboring cells and for a few randomly selected remote cells. Any genuine, real event must be validated by multiple keys bound to the specific location of that event. This requirement rules out bogus events fabricated by an attacker who has obtained multiple keys from compromised nodes, because such an event cannot combine all the keys necessary to make it appear genuine.

Exercises

1. Sensor networks are different from traditional networks. Discuss five reasons why.
2. Wireless sensor networks are different from wireless ad hoc networks. Discuss, giving reasons, why this is so.
3. It is difficult to implement, in sensor networks, security mechanisms that are proven to work in traditional networks or even in wireless ad hoc networks. Why is this the case?
4. Discuss several ways to prevent node capture in sensor networks.
5. Encryption is very difficult to implement in sensor networks. However, several papers have explored limited ways of doing it. Find one or two such papers and discuss the approaches being used.

Advanced Exercises

1. Since sensor networks are severely constrained by resources, can the deployment of sensor networks under a single administrative domain make it easier to secure these networks?
2. Based on question 1 above, can introducing redundancy or scaling the network help in creating secure sensor networks?
3. Again based on question 1 above, is it possible to continue operating a sensor network with a selected number of sensors taken out? Is it possible to identify those nodes?
4. Devise ways (some cryptographic) of securing wireless communication links against eavesdropping, tampering, traffic analysis, and denial of service.
5. Is it possible to design an asymmetric encryption protocol with all computations based at the base station?

References

1. Wikipedia. http://en.wikipedia.org/wiki/Sensor
2. Seapahn, M., Koushanfar, F., Potkonjak, M., and Srivastava, M. B. Coverage Problems in Wireless Ad-hoc Sensor Networks. http://www.ece.rice.edu/∼fk1/papers/Infocom_coverage_01.pdf
3. Culler, D., Estrin, D., and Srivastava, M. “Overview of Sensor Networks.” Computer, August 2004, pp. 41–50. http://www.archrock.com/downloads/resources/IEEE-overview-2004.pdf
4. Akyildiz, I. F., Su, W., Sankarasubramaniam, Y., and Cayirci, E. “A Survey on Sensor Networks.” IEEE Communications Magazine, August 2002, pp. 102–114.
5. Warrior, J. Smart Sensor Networks of the Future. DA Systems. http://archives.sensorsmag.com/articles/0397/net_mar/main.shtml
6. Ortiz, J., Moon, D., and Baker, C. R. Location Service for Point-to-Point Routing in Wireless Sensor Networks. http://www.cs.berkeley.edu/∼jortiz/papers/cs268_ortiz_moon_baker.pdf
7. Sure, A., Iyengar, S. S., and Cho, E. “Ecoinformatics Using Wireless Sensor Networks: An Overview.” Ecological Informatics, 2006, 1, 287–293.
8. Subramanian, N. V. K. Survey on Energy-Aware Routing and Routing Protocols for Sensor Networks. http://archives.cs.iastate.edu/documents/disk0/00/00/05/19/00000519–00/energy_sr.pdf
9. Krishnamachari, B., Estrin, D., and Wicker, S. The Impact of Data Aggregation in Wireless Sensor Networks. http://lecs.cs.ucla.edu/Publications/papers/krishnamacharib_aggregation.pdf
10. De, P., Liu, Y., and Das, S. K. “Modeling Node Compromise Spread in Wireless Sensor Networks Using Epidemic Theory.” Proceedings of the 2006 International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM'06). http://delivery.acm.org/10.1145/1140000/1139411/25930237.pdf?key1=1139411&key2=1877208021&coll=ACM&dl=ACM&CFID=63535215&CFTOKEN=62146412
11. Stankovic, J. A. Research Challenges for Wireless Sensor Networks. http://doi.acm.org/10.1145/1121776.1121780
12. Anand, M., Ives, Z., and Lee, I. “Quantifying Eavesdropping Vulnerability in Sensor Networks.” ACM International Conference Proceeding Series, Vol. 96, Proceedings of the 2nd International Workshop on Data Management for Sensor Networks. http://delivery.acm.org/10.1145/1090000/1080887/p3-anand.pdf?key1=1080887&key2=9635208021&coll=ACM&dl=ACM&CFID=63535215&CFTOKEN=62146412
13. Hu, Y. C., Perrig, A., and Johnson, D. B. Rushing Attacks and Defense in Wireless Ad Hoc Network Routing Protocols. http://www.monarch.cs.rice.edu/monarch-papers/wise03.pdf
14. Perrig, A. Secure Routing in Sensor Networks. http://www.cylab.cmu.edu/default.aspx?id=1985
15. Perrig, A., Szewczyk, R., Wen, V., Culler, D., and Tygar, J. D. SPINS: Security Protocols for Sensor Networks. http://www.ece.cmu.edu/∼adrian/projects/mc2001/mc2001.pdf
16. Perrig, A., Stankovic, J., and Wagner, D. “Security in Wireless Sensor Networks.” Communications of the ACM, 2004, 47(6), 53–57.
17. Eschenauer, L. and Gligor, V. D. “A Key-Management Scheme for Distributed Sensor Networks.” In ACM Conference on Computer and Communications Security, 2002.
18. Sabbah, E., Majeed, A., Kyoung-Don, K., Liu, K., and Abu-Ghazaleh, N. “An Application-Driven Perspective on Wireless Sensor Network Security.” Q2SWinet'06, October 2, 2006.
19. Zhu, S., Setia, S., and Jajodia, S. “LEAP: Efficient Security Mechanisms for Large-Scale Distributed Sensor Networks.” In the 10th ACM Conference on Computer and Communications Security (CCS '03), 2003.
20. Karlof, C., Sastry, N., and Wagner, D. “TinySec: A Link Layer Security Architecture for Wireless Sensor Networks.” In ACM SenSys, 2004.

Chapter 20
Other Efforts to Secure Information and Computer Networks

20.1 Introduction

The rapid advances in computer technology, the plummeting prices of information processing and indexing devices, and the development of sprawling global networks have all made the generation, collection, processing, indexing, and storage of information easy. Massive amounts of information are created, processed, and moved around on a daily basis. The value of information has skyrocketed, and information has suddenly become a valuable asset for individuals, businesses, and nations. The security of nations has come to depend on computer networks that very few can defend effectively. Our own individual privacy and security have come to depend on the whims of the kid next door.

Protection of information, on which we have come to depend so much, has been a major challenge since the birth of the Internet. The widespread adoption of computer technology for business, organizational, and government operations has made the problem of protecting critical personal, business, and national assets more urgent. When these assets are attacked, damaged, or threatened, our individual, business, and, more importantly, national security is at stake.

The problem of protecting these assets is becoming a personal, business, and national priority that must involve everyone, and efforts and ways must be sought to this end. But getting this massive public involvement will require massive public efforts on several fronts, including legislation, regulation, education, and activism.
In this chapter, we examine these efforts.

20.2 Legislation

As the Internet grows, Internet activities increase, e-commerce booms, and globalization spreads wider, the citizens of every nation, infuriated by what they see as the “bad” Internet, are putting enormous and growing pressure on their national legislatures and other lawmaking bodies to enact laws that would curb cyberspace activities in ways that they feel best serve their interests. The citizens' cause has been joined by special interest groups representing a variety of causes such as environmental protection, free speech, intellectual property rights, privacy, censorship, and security.

This has already started happening in countries such as the United States, the United Kingdom, Germany, France, China, and Singapore, and the list grows with every passing day. In all these countries, laws, some good, many repressive, are being enacted to put limits on activities in cyberspace. The recent upsurge of illegal cyberspace activities, such as the much-publicized distributed denial-of-service attacks and the headline-making e-mail attacks, has fueled calls from around the world for legislative action to stop such activities. Yet it is not clear, and probably unlikely, that such actions will at best stop, or at the least arrest, the escalating rate of illegal activity in cyberspace. Given the number of cyberspace laws we presently have in place and the seemingly escalating number of illegal cyberspace incidents, it looks as if this patchwork of legislation will not in any meaningful way put a stop to these malicious activities in the near future. If anything, such activities are likely to continue unabated unless and until long-term plans are in place. Such efforts and plans should include, first and foremost, ethical education.

Besides purely legislative processes, which are more public, there are also private initiatives that work either in conjunction with public judicial systems and law enforcement agencies or through workplace forces. Examples abound of large companies, especially high-technology companies such as software, telecommunications, and Internet providers, coming together to lobby their national legislatures to enact laws that protect their interests. Such companies are also forming consortiums or partnerships to create and implement private control techniques.

20.3 Regulation

As the debate between free-speech advocates and children's rights crusaders heats up, governments around the world are being forced to revisit, amend, and legislate new policies, charters, statutes, and acts. As we will see in detail in the next section, this has been one of the most popular, and to politicians the most visible, means of dealing with the “runaway” cyberspace. Legislative efforts are being backed by judicial and law enforcement machinery. In almost every industrialized country, and in many developing countries, large numbers of new regulations are being added to the books, and many outdated laws and acts are being revisited, retooled, and brought back into service.

20.4 Self-Regulation

There are several reasons why self-regulation as a technique of cyberspace policing appeals to a good cross section of people around the globe.
One reason, supported mostly by free-speech advocates, is to send a clear signal to governments around the world that cyberspace and its users are willing to self-regulate rather than have the heavy hand of government decide what is or is not acceptable to them.

Second, there is a realization that, although legislation and enforcement can go a long way toward curbing cyber crime, they are not a magic bullet that will eventually eradicate it. They should be taken as one of a combination of measures that must be carried out together. Probably one of the most effective prevention techniques is to give users enough autonomy to regulate themselves, each taking on responsibility for the degree and level of control and regulation that best suits his or her needs and environment. This self-regulation of cyberspace can be achieved through two approaches: hardware and software.

20.4.1 Hardware-Based Self-Regulation

There is a wide array of hardware tools for monitoring and policing cyberspace to the degree suited to each individual user. Among them are tools individually set to control access, authorization, and authentication. Such hardware tools fall mainly into six areas:
• Prevention: intended to restrict access to information on system resources, such as disks on network hosts and network servers, using technologies that admit only authorized people to the designated areas. Such technologies include, for example, firewalls.
• Protection: meant to routinely identify, evaluate, and update system security requirements to make them suitable, comprehensive, and effective.
• Detection: deploying an early-warning monitoring system for early discovery of security breaches, both planned and in progress. This category includes all intrusion detection systems (IDS).
• Limitation: intended to cut the losses suffered in cases of failed security.
• Reaction: analyzing all possible security lapses and planning relevant remedial efforts for a better security system based on observed failures.
• Recovery: recovering what has been lost as quickly and efficiently as possible and updating contingency recovery plans.

20.4.2 Software-Based Self-Regulation

Unlike hardware solutions, which are few and very specialized, software solutions are many and varied in their approaches to cyberspace monitoring and control. They are also far less threatening and therefore more user friendly, because they are closer to the user. This means that they can be installed either by the user on the user's computer or by a network system administrator on a network server. If installed by the user, the user can set the parameters for the level of control needed. At the network level, whether a firewall or a specific software package is used, controls are set based on general user consensus. Software controls fall into three categories [1]:
• Rating programs: Rating programs rate cyberspace content based on a selected set of criteria, among them violence, language, and sexual content. Software rating labels enable cyberspace content providers to place voluntary labels on their products according to a set of criteria.
However, these labels are not uniform across the industry; they depend on the rating company. There are many rating companies, including Cyber Patrol, Cyber Sitter, Net Nanny, and Surf Watch, all claiming to provide a simple yet effective rating system for Web sites that protects children and the free speech of everyone who publishes in cyberspace. These labels are then used by the filtering program on the user's computer or server.
• Filtering programs: Filtering software blocks documents and Web sites that contain materials designated on a filter list, usually objectionable words and URLs. It examines each Web document header, looking for labels that match those on the “bad” list. Filters are either client-based, in which case the filter is installed on a user's computer, or server-based, in which case they are centrally located and maintained. Server-based filters offer better security because they are not easy to tamper with. Even though filtering software has become very popular, it still has serious problems and drawbacks, such as inaccuracies in labeling, restriction of unrated material, and the deliberate exclusion of certain Web sites by an individual or individuals.
• Blocking: As we discussed in Chapter 14, blocking software works by denying access to all sites except those on a “good” list. Blocking software works best only if all Web materials are rated; but, with hundreds of thousands of Web sites submitted every day, it is impossible to rate all materials on the Internet, at least at the moment.

20.5 Education

Perhaps one of the most viable tools for preventing and curbing illegal cyberspace activities is mass education. Mass education involves teaching as many people as possible the value of security, the responsible use of computer technology, how to handle security incidents, how to recover from security incidents, how to deal with the evidence if legal action is to follow, and how to report security incidents. Although mass education is good, it has its problems, including the length of time it takes to bear fruit. Many people are still not convinced that education alone can do the job; to them there is no time, and if action is to be taken, the time to do so is now. However, we remain convinced that teaching the ethical use of computer technology, however long it takes, always results in better security than anything else we have discussed so far. Without ethics and moral values, whatever trap we build, one of us will eventually build a better trap. Without the teaching of morality and ethics, especially to the young, there is likely to be no end to the problems of computer and network security. Along these lines, therefore, education should be approached on two fronts: focused education and mass education.

20.5.1 Focused Education

Focused education targets groups of the population, for example, children in schools, professionals, and certain religious and interest groups. For this purpose, focused education can be subdivided into formal education and occasional education.

Private companies also conduct focused education. For example, a number of private companies conduct certification courses in security.
These companies include the Computer Security Institute (CSI), Cisco, Microsoft, the SANS Institute, and others.

20.5.1.1 Formal Education

Formal education targets the whole education spectrum, from kindergarten through college. The focus and content, however, should differ depending on the level. For example, in elementary education, while it is appropriate to educate children about the dangers of information misuse and computer ethics in general, the content and its delivery must be measured for that level. In high schools, where there is more maturity and a more exploratory mindset, the content and the delivery system become more focused and more forceful. This approach changes again in college, because there the students concentrate on their majors and the intended education should reflect this.

20.5.1.2 Occasional Education

Teaching morality, ethics, computer security, and the responsible use of information and information technology should be a life-long process, just as teaching the responsible use of a gun should be for a soldier. This responsibility should be, and usually is, passed on to the professions.

There are a variety of ways in which professions deliver this education to their members. For many traditional professions, this is done through the introduction and enforcement of professional codes, guidelines, and canons. Other professions supplement their codes with required in-service training sessions and refresher courses. Quite a number of professions require licensing as a means of ensuring the continuing education of their members. It is through these approaches to education that information security awareness and solutions should be channeled.

20.5.2 Mass Education

The purpose of mass education is to involve as many people as possible with limited resources and maximum effect. The methods for achieving this usually work through community involvement in community-based activities such as charity walks and other sports-related events. Using an army of volunteers to organize local, regional, and national activities, an approach similar to that used for common causes such as AIDS, cancer, and other life-threatening diseases, can bring quick and very effective awareness, which leads to unprecedented education.

20.6 Reporting Centers

The recent skyrocketing rise in cyber crime has prompted public authorities charged with the welfare of the general public to open cyber crime reporting centers. The purpose of these centers is to collect all relevant information on cyber attacks and make it available to the general public. The centers also function as the first point of contact whenever one suspects, or is the victim of, an electronic attack. Centers also give advice to those who want to learn more about the measures that must be taken to prevent, detect, and recover from attacks, and, in a limited capacity, they offer security education.

In the United States, there are several federally supported and private reporting centers, including the NIST Computer Security Resource Clearinghouse, the Federal Computer Incident Response Capability, the Center for Education and Research in Information Assurance and Security, the Carnegie Mellon Computer Emergency Response Team, the FedCIRC Center, and the National Infrastructure Protection Center [2].
These centers fall into two categories:
• Non-law-enforcement centers, which collect and index information on all aspects of cyber attacks, including prevention, detection, and survivability, and advise the population accordingly.
• Law enforcement centers, which act as the nation's clearinghouse for computer crimes, linking up directly with other national and international Computer Emergency Response Teams to monitor and assess potential threats. In addition, law enforcement centers may provide training for local law enforcement officials, often in cooperation with private industry and international law enforcement agencies.

20.7 Market Forces

The rapid rise in cyber crime has also prompted collaboration between private industry and government agencies to warn the public of the dangers of cyber crime and to outline the steps to take to remove vulnerabilities, thereby lessening the chances of being attacked. Major software and hardware manufacturers have been very active and prompt in posting, sending, and widely distributing advisories, vulnerability patches, and anti-virus software whenever their products are hit. Cisco, a major manufacturer of Internet infrastructure network devices, for example, has been calling and e-mailing its customers worldwide, mainly Internet Service Providers (ISPs), to notify them of possible cyber attacks targeting Cisco products. It also informs its customers of software patches that can be used to resist or repair those attacks, and it has assisted in disseminating vital information to the general public through its Web sites concerning those attacks and how to prevent and recover from them.

On the software front, Microsoft, the most affected target in the software arena, has similarly been active, posting, calling, and e-mailing its customers with the information needed to prevent and recover from attacks targeting its products. Besides the private sector, public-sector reporting centers have also been active, sending out advisories of impending attacks and techniques for recovering from them.

20.8 Activism

Beyond the awareness and mass education techniques discussed above, there are others that are widely used, although less effective. They fall under the activism umbrella because they are organized and driven by the users. They include the following:

20.8.1 Advocacy

Advocacy is a mass education strategy that has been used since the beginning of humanity. Advocacy groups work with the public, corporations, and governments to enhance public education through awareness. It is a blanket mass-education campaign in which a message is passed through mass campaigns, magazines, and electronic publications, as well as through the support of public events and mass communication media such as television, radio, and now the Internet.

Advocacy is intended to make people part of the intended message. For example, during the struggle for voting rights in the United States, women's groups designed and carried out massive advocacy campaigns meant to involve all women, who eventually became part of the movement. Similarly, in the minority voting rights struggles, the goal was to involve all minorities whose rights had been trodden upon. The purpose of advocacy is, consequently, to organize, build, and train, so that there is a permanent and vibrant structure that people can be part of.
By involving as many people as possible, including the intended audience, in the campaigns, the advocacy strategy builds awareness, which leads to more pressure on lawmakers and everyone else responsible. The pressure brought about by mass awareness usually results in some form of action, most times the desired action.

20.8.2 Hotlines
Hotlines are a technique that encourages the general public to take the initiative to observe, notice, and report incidents. In fact, as we will see in the next chapter, the National Strategy to Secure Cyberspace (NSSC) advocates this very strategy in one of its priorities, to get ordinary users involved not only in their personal security but also in that of their community and the nation as a whole. In many cases, the strategy is to set up hotline channels through which individuals who observe a computer security incident can report it to a designated reporting agency for action. Whenever a report is made, any technique that works can be applied. In many countries, such agencies may include ISPs and law enforcement agencies.

Exercises
1. Do you think education can protect cyberspace from criminal activities? Defend your response.
2. Looking at the array of education initiatives and the different types of programs, and at the state of security in cyberspace, do you think education can advance or improve system security?
3. The effects of education are not seen within a few years; in fact, its benefits may show only twenty to thirty years later. Security needs, however, are real and immediate. Should we keep educating?
4. Choose three hardware solutions used in self-regulation and discuss how they are deployed and how they work.
5. Choose three software solutions based on self-regulation. Study the solutions and discuss how they work.
6. Study the various forms of activism. Comment on the effectiveness of each.
7. Software rating, although helpful in bringing awareness to concerned individuals, has not been successful. Discuss why.
8. Both blocking software and filtering software, although slightly more popular than rating software, suffer from a variety of problems. Discuss these problems and suggest solutions.
9. Given that worldwide only a small percentage of people have a college education, yet in some countries more than half of the people use computers and access cyberspace, propose a way to get your education message to those people who may not have enough education to understand the computer lingo. Discuss how much of a problem computer lingo is in mass education.
10. Information security awareness and education are effective only if people understand the lingo used. Computer technology has generated a basket of words that makes it difficult for an average computer user to benefit fully from either vendor education or specialized education. Suggest ways to deal with this ever-expanding basket in computer and information security.

Advanced Exercises
1. Study five countries with strong laws on cyberspace activities, and comment on these laws' effectiveness.
2. One of the problems of cyberspace regulation is the hindrance to hot pursuit of cyber criminals. Hot pursuit laws prevent law enforcement officers from crossing jurisdictional boundaries without court permission.
However, digital evidence rarely survives long enough to allow time to obtain such permission. Discuss these problems and suggest ways to overcome them.
3. Study the big market players, both hardware and software, and discuss their efforts in bringing security awareness to their customers. Are they being noble, or are they responding to pressure?
4. As a follow-up to question 3 above, if there were more competition in the market, do you think there would be more security responsibility? Why or why not?
5. If possible, propose a unique security education solution that is not among those discussed. Give reasons why your solution might succeed where others have fallen short.

References
1. Evolving the High Performance Computing and Communications Initiative to Support the Nation's Information Infrastructure. http://www.nap.edu/readingroom/books/hpcc/contents.html
2. High Performance Computing and Communications Implementation Plan. National Coordination Office for Computing, Information, and Communications, Interagency Working Group on Information Technology Research and Development. http://www.ccic.gov/pubs/imp99/ip99–00.pdf

Chapter 21
Security Beyond Computer Networks: Information Assurance

21.1 Introduction
We have spent a lot of time, and of course many chapters, discussing the security of computer networks and, more broadly, of all computing networks built from computing devices of every kind. Our focus on these was driven by the fact that they form the core, the first building blocks, of all bigger networks, including those that form national infrastructures and the Internet as we know it today. By securing these basic but crucial building blocks, we put ourselves on the way to securing the bigger, more essential networks.
Since what goes on beyond our network borders has great implications for what takes place inside them, we will take time to discuss what should happen outside those borders to make our networks more secure. What we are talking about here are measures with broader implications for everybody: policies, directives, and best practices that tend to affect and protect everyone, whether or not an individual owns or operates a computing network.
In the last five years, the problem of information security has moved beyond enterprise borders and has become the concern of everyone who uses information, which seems to include every one of us. With this in mind, we are obliged to discuss security beyond the network, so that everyone who is knowingly or unknowingly involved is brought into the dialog of securing the information systems that increasingly control our lives.
This discussion, therefore, goes beyond computer network security and morphs into a discussion of information security in particular and information assurance in general. Efforts to secure information in general, that is, information assurance, are currently concentrated in legislation, directives, policies, training, personal efforts, and public and private best practices.
In the rest of the chapter, we \nare going to look at these efforts.\n" }, { "page_number": 460, "text": "450 \n21 Security Beyond Computer Networks\n21.2 Collective Security Initiatives and Best Practices\nPredictions or no predictions, our individual and national security is tied up with the \nsecurity of these networks that are here for a long haul. So practices and solutions \nmust be found to deal with the security problems and issues that have arisen and will \narise in the future. Most preferably collective security efforts must be found. There are \nalready projects making headway in this direction. Let us look at two such efforts.\n21.2.1 The U.S. National Strategy to Secure Cyberspace\nThe National Strategy to Secure Cyberspace (NSSC) is part of our overall effort to \nprotect the nation. It is an implementing component of the National Strategy for \nHomeland Security, a much broader security initiative that encompasses all initia-\ntives in various sectors that form the nation’s critical infrastructure grid. The sectors \ninclude the following [1]:\nBanking and fi nance\n• \nInsurance\n• \nChemical\n• \nOil and gas\n• \nElectric\n• \nLaw enforcement\n• \nHigher education\n• \nTransportation (rail)\n• \nInformation technology\n• \nTelecommunications\n• \nWater\n• \nThe aim of the NSSC is to engage and empower individuals to secure the por-\ntions of cyberspace that they own, operate, control, or with which they interact. But \ncyberspace cannot be secured without a coordinated and focused effort from the \nentire society, the federal government, state and local governments, the private sec-\ntor, and the individual, that is, you and me.\nWithin that focus, NSSC has three objectives [1]:\nPrevent cyber attacks against America’s critical infrastructures.\n• \nReduce national vulnerability to cyber attacks.\n• \nMinimize damage and recovery time from cyber attacks that do occur.\n• \nTo attain these objectives, NSSC articulates five national priorities including \ninstituting [1]\n I. A National Cyberspace Security Response System\n II. A National Cyberspace Security Threat and Vulnerability Reduction Program\n" }, { "page_number": 461, "text": "21.2 Collective Security Initiatives and Best Practices \n451\n III. A National Cyberspace Security Awareness and Training Program\n IV. National Security and International Cyberspace Security Cooperation\n V. Securing Governments’ Cyberspace.\nThe full document of NSSC can be found at: http://www.whitehouse.gov/\npcipb/.\n21.2.1.1 A National Cyberspace Security Awareness and Training Program\nUnder the National Cyberspace Security Awareness and Training Program, there \nare three programs that cover government, academia, and industry. Through these \nprograms, the National Information Assurance Education and Training Program \n(NIETP) provides a broad range of services. The NIETP operates under national \nauthority, advocating improvements in information assurance (IA) education and \nresearch, training, and awareness. Under the national authority, NIETP is guided by \nauthority from four directives:\nThe Presidential Decision Directive PDD-63 (22 MAY 98)\n• \n This contains \nthe plan of action on the fi ndings of the President’s Commission on Critical \nInfrastructure Protection (PCCIP) of Oct 97. 
It requires Vulnerability Awareness and Education Programs within both the Government and the private sector to sensitize people to the importance of security and to train them to security standards, particularly regarding cyber systems.
• The National Security Directive (NSD)-42* (5 JUL 90) This is the National Policy for the Security of National Security Telecommunications and Information Systems. It designated the Director, NSA, as National Manager for National Security Telecommunications and Information Systems Security. This covers national security systems, which are those telecommunications and information systems operated by the U.S. Government, its contractors, or agents, that contain classified information or that involve the following: intelligence activities related to national security; command and control of military forces; equipment that is an integral part of a weapon or weapon system; or equipment that is critical to the direct fulfillment of military or intelligence missions.
• The National Security Telecommunications Information Systems Security Directive 500* (25 FEB 93) This is the Information Systems Security (INFOSEC) Education, Training and Awareness directive, which established the requirement for federal departments and agencies to develop and implement INFOSEC education, training, and awareness programs.
• The National Security Telecommunications Information Systems Security Directive 501* (16 NOV 92) This is the National Training for Information Systems Security (INFOSEC) Professionals directive, which established the requirement for federal departments and agencies to implement training programs for Information Systems Security (INFOSEC) professionals.
NIETP also serves as the National Manager for IA education, research, and training relating to national security systems and develops IA training standards with the Committee on National Security Systems (CNSS). The CNSS provides a forum for the discussion of policy issues, sets national policy, and promulgates direction, operational procedures, and guidance for the security of national security systems through the CNSS Issuance System. National security systems are systems that contain classified information or that
• involve intelligence activities;
• involve cryptographic activities related to national security;
• involve command and control of military forces;
• involve equipment that is an integral part of a weapon or weapons system; or
• are critical to the direct fulfillment of military or intelligence missions (not including routine administrative and business applications).
Finally, the NIETP encourages and recognizes universities through the National Centers of Academic Excellence in IA Education (CAEIAE) and the National Centers of Academic Excellence in IA Research (CAE-R). These are outreach programs designed and operated initially by the National Security Agency (NSA) in the spirit of Presidential Decision Directive 63, National Policy on Critical Infrastructure Protection, May 1998. They are supported by the NSA and the Department of Homeland Security (DHS). The goal of the two programs is to reduce vulnerability in the U.S.
national information infrastructure by promoting \nhigher education and research in information assurance (IA), and producing a \ngrowing number of professionals with IA expertise in various disciplines. Each \nprogram has a list of nationally competitively selected universities hosting these \ncenters.\nThe NIETP also sponsors the national Colloquium for Information Systems Secu-\nrity Education. Established in 1997, the Colloquium was to provide a forum for \ndialogue among leading figures in government, industry, and academia. The Collo-\nquium’s goal is to work in partnership to define current and emerging requirements \nfor information security education and to influence and encourage the develop-\nment and expansion of information security curricula, especially at the graduate \nand undergraduate levels. The Colloquium is active throughout the year and holds \nannual conferences [2].\n21.2.2 Council of Europe Convention on Cyber Crime\nThere is rapid developments in information technology. Such developments have a \ndirect bearing on all sections of modern society. These developments have opened \na whole range of new possibilities, which led the Council of Europe to resolve that \ncriminal law must keep abreast of these technological developments.\n" }, { "page_number": 463, "text": "References \n453\nTo meet these objectives, the council developed a Convention framework to deal \nwith these issues. The Convention’s aims were as follows [3]:\nTo harmonize the domestic criminal substantive law elements of offenses and \n• \nconnected provisions in the area of cyber crime\nTo provide for domestic criminal procedural law powers necessary for the \n• \ninvestigation and prosecution of such offenses as well as other offenses committed \nby means of a computer system or evidence in relation to which is in electronic \nform\nTo set up a fast and effective regime of international cooperation.\n• \nThe Convention documents are in four chapters:\n I Use of terms\n II Measures to be taken at domestic level that include substantive law and \nprocedural law\n III International cooperation\n IV Final clauses.\nFull Convention documents can be found at http://conventions.coe.int/Treaty/en/\nTreaties/Html/185.htm.\nReferences\n1. The National Strategy to Secure Cyberspace. http://www.whitehouse.gov/pcipb/\n2. National IA Education & Training Program. http://www.nsa.gov/ia/academia/acade00001.\ncfm\n3. Council of Europe Convention on Cybercrime. http://conventions.coe.int/Treaty/en/Treaties/\nHtml/185.htm.\n" }, { "page_number": 464, "text": "Part IV\nProjects\n" }, { "page_number": 465, "text": "J.M. Kizza, A Guide to Computer Network Security, Computer Communications and \n Networks, DOI 10.1007/978-1-84800-917-2_5, © Springer-Verlag London Limited 2009\n \n457\nChapter 22\nProjects\n22.1 Introduction\nThis is a special chapter dealing with security projects. We have arranged the projects \nin three parts. Part one consists of projects that can be done on a weekly or biweekly \nbasis. Part two consists of projects that can be done in a group or individually on a \nsemi-semester or on a semester basis. Part three consists of projects that demand a \ngreat deal of work and may require extensive research to be done. Some of the proj-\nects in this part may fulfill a master’s or even Ph.D. degree project requirements. We \nhave tried as much as possible throughout these projects to encourage instructors \nand students to use open source as much as possible. 
This will decouple the content \nof the guide from the rapidly changing proprietary software market.\n22.2 Part I: Weekly/Biweekly Laboratory Assignments\nProjects in this part were drawn up with several objectives in mind. One is that stu-\ndents in a network security course must be exposed to hands-on experience as much \nas possible. However, we also believe that students, while getting hands-on experi-\nence, must also learn as much as they can about the field of security. Since no one \ncan effectively cover in one course all areas of computer and network security, we \nmust find a way to accomplish as much of this as possible without compromising \nthe level needed in the few core areas covered by the course. Our second objective, \ntherefore, is to cover as broad an area of both computer and network security issues \nas possible. We do this by sending the students out on a scavenger hunt and requir-\ning them to study and learn as much as possible on their own.\nFor each of the selected areas the students must cover, they must write a paper. \nThey doubly benefit, for not only do they learn about security in the broadest sense \npossible but they also learn to communicate, a crucial element in the security pro-\nfession. Security professionals must investigate, analyze, and produce reports. By \nincluding report writing in the course on security, we accomplish, on the side, an \nimportant task.\n" }, { "page_number": 466, "text": "458 \n22 Projects\nLaboratory # 1\nExploratory (2 weeks) – to make students understand the environment and appreci-\nate the field of network security.\nStudy computer and network vulnerabilities and exploits (See chapters 4 and \n7). In an essay of not less than three and not more than five double-spaced typed \npages, discuss 10 such exploits and/or vulnerabilities, paying attention to the \nfollowing:\nRouting algorithm vulnerabilities: route and sequence number spoofi ng, \n• \ninstability, and resonance effects\nTCP/UDP vulnerabilities\n• \nDenial of service\n• \nARP hazard: phantom sources, ARP explosions, and slow links\n• \nRegistry vulnerabilities\n• \nFragmentation vulnerabilities and remedies (ICMP Echo overrun)\n• \nLaboratory # 2\nExploratory (2 weeks) – to make students aware of the availability of an array of \nsolutions to secure the networks.\nResearch and study the available software and hardware techniques to deter, if \nnot eliminate, computer systems attacks. (See part III of the text). 
Write a comparative discussion paper (minimum three and maximum five double-spaced pages) on five such techniques, paying attention to the following:
• Encryption techniques (DNSSEC, IPSec, PGP, S/MIME, S-HTTP, and HTTPS)
• Intrusion detection and prevention
• Computer and network forensics techniques
• Firewalls (DMZ)
• Secure network infrastructure services: DNS, NTP, and SNMP
• Secure binding of multimedia streams: Secure RTP/Secure RPC
• Access control and authentication: Secure RSVP
• Mobile systems: WEP and WPA
• Internet security models: IPv4/v6 encapsulation headers in IPSec

Laboratory # 3
Exploratory (2 weeks) – to make students aware of the role and weaknesses of operating systems in network and data security.
Research the vulnerabilities of four major operating systems and write a five-page double-spaced paper on these vulnerabilities for each operating system.
Consider the following operating systems:
• Unix (and other varieties of Unix such as FreeBSD and OpenBSD)
• Linux
• Windows (NT, 2000, XP, etc.)
• OS/2
• Mac OS X

Laboratory # 4
(1 week): A look at the security of Windows NT/2000/XP, Linux, and FreeBSD – to give students hands-on experience in handling the security of today's major operating systems. A student picks one of these platforms and studies it for weaknesses and vulnerabilities and for what is being done to harden it. The student can then make an oral presentation to the class on the findings.

Laboratory # 5
(1 week): Installation and maintenance of firewalls – to give students the experience of installing and maintaining a perimeter (fencing) security component of both small and large enterprise networks. There are plenty of open source and free firewalls. A number of companies on the Web also offer a variety of firewalls, both as software and as ready-configured hardware. Some of these companies offer free personal firewalls; others offer high-end but limited corporate firewalls you can download on a trial basis. Check out companies on the Web and, if you are not already using a firewall, download one for free or on a trial basis. Install it and run it. Here is a list of some of the companies with good software firewalls:
• McAfee – www.mcafee.com (personal)
• Symantec – www.symantec.com (professional/personal)
• Sygate Personal Firewall – www.sygate.com
• Tiny Personal Firewall – www.tinysoftware.com
• ZoneAlarm Pro – www.zonelabs.com
Firewall policies: As you install your firewall, decide on the following (a short sketch of how such decisions translate into ordered rules follows this list):
• Whether you will let Internet users in your organization upload files to the network server
• Whether you will let them download files
• Whether the network will have a Web server, and whether inside and outside users will access it
• Whether the site will have telnet
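Before moving on to Laboratory # 6, it helps to see how policy decisions like those above can be written down concretely. The following Python sketch is only an illustration for the laboratory write-up, not a real firewall: all networks, addresses, and rules in it are invented, and production firewalls (iptables, pf, or the commercial products listed above) use their own rule languages and match on many more attributes.

```python
# A toy first-match-wins packet-filter rule list, mirroring the policy questions above.
# Every address, network, and rule here is invented for illustration only.
import ipaddress

# Each rule: (action, protocol, source network, destination network, destination port)
# "any" is a wildcard. Rules are evaluated top to bottom; the first match wins.
RULES = [
    ("allow", "tcp", "any",            "192.168.1.10/32", 80),     # outsiders may reach the Web server
    ("deny",  "tcp", "any",            "192.168.1.0/24",  23),     # no telnet from anywhere
    ("allow", "tcp", "192.168.1.0/24", "any",             21),     # insiders may download via FTP
    ("deny",  "tcp", "any",            "192.168.1.20/32", 21),     # outsiders may not upload to the file server
    ("deny",  "any", "any",            "any",             "any"),  # default: deny everything else
]

def _net_match(net, addr):
    return net == "any" or ipaddress.ip_address(addr) in ipaddress.ip_network(net)

def decide(protocol, src, dst, dport):
    """Return 'allow' or 'deny' for one packet; the first matching rule wins."""
    for action, proto, src_net, dst_net, port in RULES:
        if proto not in ("any", protocol):
            continue
        if not _net_match(src_net, src) or not _net_match(dst_net, dst):
            continue
        if port not in ("any", dport):
            continue
        return action
    return "deny"  # nothing matched: fail closed

if __name__ == "__main__":
    print(decide("tcp", "203.0.113.7", "192.168.1.10", 80))  # allow: Web server
    print(decide("tcp", "203.0.113.7", "192.168.1.5", 23))   # deny: telnet
```

Writing the policy this way makes the ordering question explicit: a permissive rule placed above a restrictive one silently wins, which is exactly the kind of mistake the laboratory write-up should discuss.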
Laboratory # 6
(2 weeks): Research on key and certificate management – to acquaint students with new and developing trends in key management, techniques that are leading to new security and customer-confidence tools in e-commerce. In a three- to five-page double-spaced paper, discuss key management issues (Chapters 10, 11, and 17). In particular, pay attention to
• DNS certificates
• Key agreement protocols: STS protocol and IETF work orders
• Key distribution protocols: Kerberos, PGP, X.509, S/MIME, and IPSec
• SSL, SET, and digital payment systems
• One-time passwords: schemes based on S/KEY
• Session key management: blind-key cryptosystems (NTP)
• Secure access control and management: secure SNMP
• Certificate authorities (CAs)

Laboratory # 7
(1 week): Network-based and host-based intrusion detection systems (IDS) and prevention – to give students practical experience in safeguarding a network, scanning for vulnerabilities and exploits, downloading and installing scanning software, and scanning a small network. Options for scanning are SATAN, LANguard Network Scanner (Windows), and Nmap. For an IDS, use Snort. See Part II for installation information.

Laboratory # 8
(1 week): Develop a security policy for an enterprise network – to give students the experience of starting from scratch and designing a functioning security system for an enterprise, an experience that is vital in the network security community. Write a three- to five-page double-spaced paper on the security policy you have just developed.

Laboratory # 9
Set up a functioning VPN. There are a variety of sources of material on how to set up a VPN.

Laboratory # 10
Any project the instructor finds suitable as a culminating security experience.

22.3 Part II: Semester Projects
This part focuses on security tools that can help make your network secure. We divide the tools into three groups: intrusion detection tools, network reconnaissance and scanning tools, and Web-based security protocols.

22.3.1 Intrusion Detection Systems
There are a number of free IDSs that can be used for both network-based and host-based intrusion detection. Some of the most common are Snort, TCPdump, Shadow, and Portsentry.

22.3.1.1 Installing Snort (www.snort.org)
Snort is a free network analysis tool that can be used as a packet sniffer like TCPdump, as a packet logger, or as a network intrusion detection system. Developed in 1998 by Martin Roesch, it has been under continuous improvement. These improvements have made Snort highly portable, and it now runs on a variety of platforms including Linux, Solaris, BSD, IRIX, HP-UX, Mac OS X, Win32, and many more. Snort is also highly configurable, allowing users, after installation, to create their own rules and to reconfigure its base functionality through its plug-in interface.
For this project, you need to
• Take note of the operating system you are using.
• Choose the type of Snort to use based on your operating system.
• Download a free Snort Users' Manual.
• Download free Snort and install it.
• Analyze a Snort ASCII output.
• Read Snort rules and learn how the different rules handle Snort outputs (an illustrative rule is sketched below).
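Snort rules are plain text, one rule per line. The rule below is a generic illustration of the documented layout — action, protocol, source, source port, direction, destination, destination port, followed by options such as msg, content, and sid in parentheses — written out by a tiny Python helper; the network, message, and SID values are invented for this sketch, and it is not taken from any published rule set.

```python
# Write one illustrative Snort-style rule to a local rules file.
# Layout: <action> <proto> <src> <src_port> -> <dst> <dst_port> (<options>)
# The network, port, message, and sid values below are invented for this sketch.
EXAMPLE_RULE = (
    'alert tcp any any -> 192.168.1.0/24 80 '
    '(msg:"Suspicious request for /etc/passwd"; content:"/etc/passwd"; '
    'sid:1000001; rev:1;)'
)

with open("local.rules", "w") as rules_file:
    rules_file.write(EXAMPLE_RULE + "\n")

print("Wrote 1 rule to local.rules")
```

Reading a handful of real rules from the downloaded rule set alongside a toy one like this makes the option keywords in the Users' Manual much easier to recognize.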
NOTE: A Snort ASCII alert output has the following fields:
• Name of alert
• Time and date (such as 06/05-12:04:54.7856231) – marks the time the packet was sent. The trailing floating-point part (.7856231) is a fraction of a second, included to make the logging more accurate given that many events can occur within a single second.
• Source address (192.163.0.115.15236) – the IP source address; .15236 is the port number. Using this string, it may be easy to deduce whether the traffic originates from a client or a server.
• (>) – direction of the traffic
• Destination address (192.168.1.05.www)
• TCP options that can be set (port type, time to live, type of service, session ID, IP length, datagram length) – these are set at the time a connection is made.
• Don't Fragment (DF)
• S-Flags (P = PSH, R = RST, S = SYN, or F = FIN)
• Sequence number (5678344:5678346(2)) – the first is the initial sequence number, followed by the ending sequence number; (2) indicates the number of bytes transmitted.
• Acknowledgment # (3456789)
• Win (MSS) – window size; MSS = maximum segment size. If the client sends packets bigger than the maximum window size, the server may drop them.
• Hex payload [56 78 34 90 6D 4F, …]
• Human-readable payload (the same payload rendered as text)

22.3.1.2 Installation of TCPdump (http://www.tcpdump.org/)
TCPdump is a network monitoring tool originally developed by the Network Research Group at the Lawrence Berkeley National Laboratory. TCPdump, a free tool, is used extensively in intrusion detection. To use TCPdump, do the following:
• Take note of the operating system you are using.
• Choose the version of TCPdump to use based on your operating system.
• Download and install TCPdump.
• Run a TCPdump trace.
• Analyze a TCPdump trace.
NOTE: In analyzing, consider each field of a TCPdump output. A normal TCPdump output has the following fields:
• Time (such as 12:04:54.7856231) – marks the time the packet was sent. The trailing floating-point part (.7856231) is a fraction of a second, included to make the logging more accurate given that many events can occur within a single second.
• Interface (ethX for Linux, hmeX for Solaris, and similar names on BSD-based systems; it varies with the platform) – the interface being monitored.
• (>) – direction of the traffic
• Source address (192.163.0.115.15236) – the IP source address; .15236 is the port number. Using this string, it may be easy to deduce whether the traffic originates from a client or a server.
• Destination address (192.168.1.05.www)
• S-Flags (P = PSH, R = RST, S = SYN, or F = FIN)
• Sequence number (5678344:5678346(2)) – the first is the initial sequence number, followed by the ending sequence number; (2) indicates the number of bytes transmitted.
• Win (MSS) – window size; MSS = maximum segment size. If the client sends packets bigger than the maximum window size, the server may drop them.
• TCP options that can be set – these are set at the time a connection is made.
• Don't fragment (DF) – contains fragment information. If the size of the datagram exceeds the MTU (the maximum transmission unit of an IP datagram), fragmentation occurs.
Read more about TCPdump in: Intrusion Signature and Analysis by Stephen Northcutt, Mark Cooper, Matt Fearnow, and Karen Frederick. New Riders Publishing, 2001.
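One way to get comfortable with these fields is to pull them apart programmatically. The short Python sketch below parses a single line written in the simplified notation used in the lists above — a timestamp, a source address with the port as the last dotted component, a ">" direction marker, and a destination — so it is an exercise aid for reading traces, not a parser for the exact on-disk Snort or TCPdump formats, and the example lines are invented.

```python
# Parse one trace line in the simplified notation used above:
#   "06/05-12:04:54.785623 192.163.0.115.15236 > 192.168.1.5.80"
# Exercise aid only; real Snort alert files and TCPdump output carry more
# fields and use their own exact formats.
from dataclasses import dataclass
from typing import Union

@dataclass
class TraceLine:
    timestamp: str
    src_ip: str
    src_port: Union[int, str]   # a service name such as "www" is kept as a string
    dst_ip: str
    dst_port: Union[int, str]

def split_addr(token):
    """'192.163.0.115.15236' -> ('192.163.0.115', 15236): the last dotted field is the port."""
    ip, _, port = token.rpartition(".")
    return ip, int(port) if port.isdigit() else port

def parse_line(line):
    timestamp, src, arrow, dst = line.split()
    if arrow != ">":
        raise ValueError("expected the '>' direction marker")
    src_ip, src_port = split_addr(src)
    dst_ip, dst_port = split_addr(dst)
    return TraceLine(timestamp, src_ip, src_port, dst_ip, dst_port)

if __name__ == "__main__":
    print(parse_line("06/05-12:04:54.785623 192.163.0.115.15236 > 192.168.1.5.80"))
    print(parse_line("06/05-12:04:55.002311 192.168.1.5.www > 192.163.0.115.15236"))
```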
22.3.1.3 Installation of Shadow Version 1.7 (www.nswc.navy.mil/ISSEC/CID/)
Shadow is an intrusion detection system (IDS) that is freeware built on inexpensive open source software. It consists of two components, a sensor and an analyzer; when fully installed, it performs network traffic analysis. This is done using the sensor, which collects address information from IP packets, and the analyzer, which examines the collected data and displays user-defined events. Shadow is based on the TCPdump and libpcap software packages, which collect the packets and filter the collected traffic according to user-defined criteria. Shadow installation is slightly more involved. For one, Shadow scripts are written in Perl, so you need a system with Perl. Second, you need to be familiar with either Linux or a variant of Unix. You also need a C compiler on your system to install some of the software from source.
• Take note of the operating system you are using.
• Choose the version of Shadow to use based on your operating system.
• Download a Shadow Installation Manual.
• Using the manual, build a Shadow Sensor.
• Again using the manual, build a Shadow Analyzer.
• Put Shadow into production.

22.3.1.4 Installation of Portsentry Version 1.1 (http://www.psionic.com)
Portsentry uses a built-in Syslog, a system logger reporting routine. For this project, you need to
• Take note of the operating system you are using.
• Choose the version of Portsentry to use based on your operating system.
• Download Portsentry and install it.
• Analyze Syslog reports. The fields of a Syslog entry include the date and time and the hostname.
Portsentry belongs to the Abacus Project Suite, a suite of IDS tools from Psionic Software (www.psionic.com/abacus/).
There is a variety of commercially available IDS tools, including
• Dragon (www.securitywizards.com)
• RealSecure (www.iss.com)
• Network Flight Recorder (www.nfr.com)

22.3.2 Scanning Tools for System Vulnerabilities
The following tools are used to scan systems for vulnerabilities and other system information. Successful attacks almost always start with the intruder gathering system information on the target hosts. As a student security analyst, you should be able to differentiate among three types of incidents: attack, reconnaissance, and false positive. Being able to separate a false positive from a reconnaissance proves your prowess to your boss right away. In this section of the project, we concentrate on reconnaissance tools and signatures by looking at programs such as SATAN, LANguard Network Scanner, and Nmap.

22.3.2.1 Installing the Security Administrator Tool for Analyzing Networks (SATAN) (www.satan.com)
SATAN is used by many Unix/Linux system administrators to find holes in their networks. Using SATAN, one can examine vulnerabilities, trust levels, and other system information. To install SATAN, go to www.satan.com. Notice that since SATAN is written in Perl, you need Perl 5.0 or later on your system. You can also download a Perl interpreter for Unix.
SATAN can probe hosts at various levels of intensity.
However, three levels are normally used: light, normal, and heavy.
• Light: the least intrusive scanning level, collecting information from DNS, establishing which RPC services a host offers, and determining which file systems are shared over the network.
• Normal: the level for probing for the presence of common network services (finger, rlogin, FTP, Web, Gopher, e-mail, etc.) and for establishing operating system types and software release versions.
• Heavy: a level that uses the information from the normal level above to look at each discovered service in more depth.
SATAN scans can be detected by the Courtney and Gabriel programs. Download these two programs from the Computer Incident Advisory Capability (CIAC): www.ciac.llnl.gov/ciac/ToolsUnixNetMon.html
Note: By default, SATAN scans only your network and all computers attached to your network. You must take great care when trying to expand the scope of SATAN. There are several reasons to be careful. First, other system administrators do not want you snooping into their systems unannounced, especially when they detect you. Second, if they catch you, they might seek legal action against you, which is probably the last thing you had in mind.
After downloading and installing SATAN, start by scanning your own system. After getting the scan reports, learn how to interpret them. Since there is no single correct scanning level, choose the level based on your assessment of the security threat. Acquire a SATAN guide to lead you through scan report analysis.

22.3.2.2 Windows Scans for Windows Vulnerabilities
For comparable scans of Windows systems, first download an evaluation copy of LANguard Network Scanner (www.gfisoftware.com). LANguard Network Scanner scans a Windows system (one or more computers) for holes and vulnerabilities (NetBIOS, ports, open shares, and weak passwords). LANguard Network Scanner always displays its analysis; learn to read and interpret it.

22.3.2.3 Scans with Nmap (www.insecure.org)
Nmap (Network Mapper) was created by Fyodor and is free under the GNU General Public License (GPL). Nmap is a network-wide port scan and OS detection tool that audits the security of the network. Nmap scans are easily detected because the tool leaves a signature trail; scans can be made harder to detect by adding features such as stealth scanning and Xmas scans. For this exercise, do the following (a bare-bones connect-scan sketch follows this list, to show the kind of probing these tools automate):
• Download Nmap and install it.
• Scan a selected network.
• Try additional scan types such as Xmas, SYN/FIN, and stealth scanning.
• Show how these help in creating a less detectable scanning tool.
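At its simplest, the reconnaissance these tools automate is just an attempt to complete TCP connections on a range of ports. The Python sketch below does exactly that against localhost; it is a teaching aid only (Nmap adds SYN and stealth scans, timing control, OS fingerprinting, and much more), and you should scan only hosts you own or are explicitly authorized to test.

```python
# A bare-bones TCP connect scan: the simplest kind of probe that tools like
# Nmap automate. Scan only hosts you own or are authorized to test; the target
# here is localhost on purpose.
import socket

def connect_scan(host, ports, timeout=0.5):
    """Return the ports on which a full TCP connection succeeded."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    print(connect_scan("127.0.0.1", range(20, 1025)))
```

Because every probe completes the three-way handshake, a connect scan like this is also the easiest to spot in logs, which is precisely why the stealthier scan types listed above were developed.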
22.4 The Following Tools Are Used to Enhance Security in Web Applications

22.4.1 Public Key Infrastructure
The aim of this project is to teach students the basic concepts of a public key infrastructure (PKI) and its components. Among the activities to carry out in the project are the following:
• Identify Trusted Root Certificate Authorities
• Design a Certificate Authority
• Create a Certification Authority Hierarchy
• Manage a Public Key Infrastructure
• Configure Certificate Enrollment
• Configure Key Archival and Recovery
• Configure Trust Between Organizations

22.4.1.1 Securing Web Traffic by Using SSL
In Chapter 17, we discussed in depth the strength of SSL as an encryption technique for securing Web traffic. Read Chapter 17 to learn how to implement SSL security and certificate-based authentication for Web applications in this project. You will need a Windows 2003 Server or better. To do the project, consider the following:
• Deploy SSL encryption at a Web server by enabling SSL encryption in IIS and configuring certificate mapping in both IIS and Active Directory. Also secure the security virtual folder.

22.4.1.2 Configuring E-mail Security
In Chapter 17, we discussed at length the different ways of securing e-mail on the Internet. This project focuses on that, so read Chapter 17 first. The project will teach you how to send secure e-mail messages with PGP. You will need to do the following (the short sketch after this list illustrates, in miniature, the encrypt/decrypt workflow these steps walk through):
• Go to www.pgpi.org/products/pgp/versions/freeware/ and download a version of PGP (e.g., version 7.0.3).
• Install PGP on your computer.
• Create your own keys.
• Publicize your public key.
• Import new PGP keys.
• Encrypt a text message to send to a friend.
• Decrypt a message from a friend encrypted with PGP.
• Encrypt/decrypt a file with PGP.
• Wipe a file with PGP.
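PGP itself handles key pair generation, the OpenPGP message format, digital signatures, and key distribution. The Python sketch below is therefore not PGP; it only shows the basic encrypt/decrypt round trip on a message and on a file using the third-party cryptography package (installed with pip install cryptography), which can help demystify what the PGP exercises above are doing. The file names in it are arbitrary examples.

```python
# Not PGP: a minimal symmetric encrypt/decrypt round trip using the third-party
# "cryptography" package, to illustrate the workflow the PGP exercises walk
# through. PGP adds public/private key pairs, the OpenPGP format, signatures,
# and key distribution on top of this basic idea.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # with PGP this role is played by your key pair
cipher = Fernet(key)

# Encrypt and decrypt a short message.
token = cipher.encrypt(b"Meet me at the lab at 4 pm.")
print(cipher.decrypt(token))

# Encrypt a small file, then decrypt it back (file names are examples only).
with open("notes.txt", "wb") as fh:
    fh.write(b"draft of the enterprise security policy\n")
with open("notes.txt", "rb") as fh:
    ciphertext = cipher.encrypt(fh.read())
with open("notes.txt.enc", "wb") as fh:
    fh.write(ciphertext)
with open("notes.txt.enc", "rb") as fh:
    print(cipher.decrypt(fh.read()))
```

Note how losing the key makes the ciphertext unrecoverable; key management, not the cipher itself, is what the PGP exercises above (create, publicize, and import keys) are really about.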
22.5 Part III: Research Projects

22.5.1 Consensus Defense
One of the weaknesses of the current global network is the lack of consensus within the network. When one node or system is attacked, that node or system has no way of making an emergency distress call to all other systems, starting with its nearest neighbors, so that they can get their defenses up for the imminent attack. This project is to design a system that can trigger an SOS message to the nearest neighbors to get their defenses up. The system should also include, where possible, all the information the node under attack can gather about the attacking agent.

22.5.2 Specialized Security
Specialized security is vital to the defense of networks. A viable specialized security shell should be able to use any organization's specific attributes and peculiarities to achieve a desired level of security for that organization. This project is to design a security shell that any organization can use to plug in its desired attributes and whatever peculiarities it may have in order to achieve its desired level of security.

22.5.3 Protecting an Extended Network
Enterprise network resources are routinely extended to users outside the organization, usually partner organizations and sometimes customers. This, of course, opens up huge security loopholes that must be plugged to secure network resources. We want to design an automated security system that can be used to screen external user access, mitigate risks, and automatically deal with, report, and recover from an incident, if one occurs.

22.5.4 Automated Vulnerability Reporting
Currently, reporting of system vulnerabilities and security incidents is still a manual job. It is the responsibility of the system administrator to scan and sort threats and incidents before reporting them to the national reporting centers. However, as we all know, this approach is both slow and itself prone to errors (human and system). We are looking for an automated system that can capture, analyze, and sort such incidents and immediately and simultaneously report them to both the system administrator and the national reporting center of choice.

22.5.5 Turn-Key Product for Network Security Testing
Most network attacks are perpetrated through network protocol loopholes. Additional weak points are found in application software in the topmost layers of the protocol stack. If security is to be tackled head-on, attention should be focused on these two areas. This project is aimed at designing a turn-key product that a network administrator can use to comprehensively comb both the network protocols and the system application software for those sensitive loopholes. Once these weak points are identified, the administrator can then easily plug them.

22.5.6 The Role of Local Networks in the Defense of the National Critical Infrastructure
In the prevailing security realities of the time, local networks, as the building blocks of the national critical infrastructure, have become a focal point of efforts to defend that infrastructure. While the federal government is responsible for managing threat intelligence and for efforts to deter security threats against the national infrastructure, the defense of local networks is the responsibility of local authorities, civic leaders, and enterprise managers. One technique for defending the thousands of local spheres of influence is the ability of these local units to automatically separate themselves from the national grid in the event of a major "bang" on the grid. This project is meant to design the technology that local networks can use to achieve this.

22.5.7 Enterprise VPN Security
The growth of Internet use in enterprise communication and the need for security assurance of enterprise information have led to the rapid growth and use of VPN technology. VPN technology has been a technology of choice for securing enterprise networks over public network infrastructure. Although emphasis has been put on the software side of VPN implementation, which seems the more logical place, information in enterprise VPNs has not been secured to the desired level. This means that other aspects of VPN security need to be explored. Several aspects, including implementation, policy, and enterprise organization, among many others, need to be researched. This project requires the researcher to look for ways of improving VPN security by critically examining these complementary security issues.

22.5.8 Perimeter Security
One of the cornerstones of system defense is perimeter defense. We assume that all the things we want to protect should be enclosed within the perimeter. The perimeter, therefore, separates the "bad Internet" outside from the protected network. Firewalls have been built for this very purpose. Yet we still dream of perfect security within the protected networks.
Is it possible to design a penetration-proof \nperimeter defense?\n22.5.9 Enterprise Security\nSecurity threats to an Enterprise originate from both within and outside the Enter-\nprise. While threats originating from outside can be dealt with to some extent, with \na strong regime of perimeter defense, internal threats are more difficult to deal \nwith. One way to deal with this elusive internal problem is to develop a strong and \neffective security policy. But many from the security community are saying that an \neffective security policy and strong enforcement of it are not enough. Security is \nstill lacking. In this project, study, research, and devise additional ways to protect \nthe Enterprises against internal threats.\n22.5.10 Password Security – Investigating the Weaknesses\nOne of the most widely used system access control security techniques is the use \nof passwords. However, it has been known that system access and authorization \nbased on passwords alone is not safe. Passwords are at times cracked. But password \naccess as a security technique remains the most economically affordable and widely \nused technique in many organizations because of its bottom line. For this project, \nresearch and devise ways to enhance the security of the password system access.\n" }, { "page_number": 478, "text": "Index\nA\nAccess\ncontrol list, 186–189, 203, 292, 297, 414\ncontrol matrix, 187–188\nmandatory, 192, 199, 360\nrole-based, 187, 189–190, 204\nrule-based, 187, 190\nActivism, 116, 132, 439, 445–446\nAdvocacy, 445–446\nAlert notifier, 282, 284\nAmplitude, 8, 400\nAnnualized loss, 160\nAnomaly, 274, 277–279, 294\nARPNET, 113\nAsynchronous token, 215\nATM, 22–23, 37–39, 41, 388, 405\nAuditing, 57, 145–146, 166–169, 185, 208,\n263, 292, 293, 360, 394\nAuthentication\nanonymous, 213, 222, 224\nDES, 217, 220\ndial-in, 221, 225\nheader, 383\nKerberos, 218–220, 224, 376, 394\nnull, 220, 415\npolicy, 223–224\nprotocols, 320, 392–393\nremote, 220–221, 392–394\nUnix, 220\nAuthenticator, 207–208, 210–212, 213, 215,\n219–221\nAuthority registration, 243\nAuthorization\ncoarse grain, 202\nfine grain, 202\ngranularity, 202\nAvailability, 6, 10, 85, 93, 95, 99, 122,\n165–166, 203, 294, 300, 361, 402–403,\n433, 434, 458\nB\nBandwidth, 8, 10–12, 25, 39, 85, 133, 281,\n283, 335, 398, 402–403, 408, 413, 425\nBase-T, 36\nBase-X, 36\nBastion, 250, 253, 265–266, 270\nBiometrics, 43, 53, 194–195, 208, 212–213,\n308\nBlue box, 113, 132\nBluetooth, 40–41, 409–412, 418, 421, 429\nBridge, 3, 12, 22, 24, 26, 28–34, 141, 250, 262,\n296, 405–406\nBuffer overflow, 63, 67, 77, 88\nC\nCASPR, 56–57\nCERT, 57, 63, 88–89, 92, 97, 100, 107, 114,\n138, 143\nCertificate authority, 217, 238–240, 242, 244,\n372\nCertification, 145, 146, 152, 165, 166, 169,\n244, 248, 344, 353–355, 358, 362, 443\nprocess, 165\nsecurity, 145–146, 165–166\nChain of custody, 304, 309, 316\nChallenge-response, 208, 215–216, 221\nCipher\nfeedback, 229, 367\nspecs, 379–380\nCladding, 11\nCoaxial cable, 11, 150, 405\nCOBIT, 57, 59\nCode Red, 68, 77, 88, 99–100, 115, 338,\n340–341\nCommon criteria, 357–358\nCommunicating element, 4, 6, 24, 67, 128,\n217, 219, 238–241\n471\n" }, { "page_number": 479, "text": "472\nIndex\nCommunication\nradio, 12, 404\nsatellite, 12\nComplacency, 90\nComplexity, 90–91, 100, 157–158, 161–162,\n174, 190, 223, 266, 335, 351, 400, 403\nprogramming, 137\nSoftware, 90–91\nsystem, 100\nCompression\ndata, 85, 309, 314\nlossless, 306\nlossy, 306\nConfidentiality, 48–49, 58, 63, 93, 109, 129,\n227–228, 246, 366–370, 373, 374, 
381,\n383–385, 413, 433, 436\ndata, 243–245, 436\ninformation, 58\nmessage, 436\nPPP, 392\nCongestion control, 21, 23, 25, 31\nConsolidation, 85\nCracker, 92, 113–114, 115, 131, 230, 313\nCRC, 36, 306, 323\nCryptanalysis, 48, 229–230\nCryptographic algorithm, 49, 151, 228, 230,\n235–236, 371, 385, 430\nCSMA, 35, 411\nCyber\ncrime, 107–132, 175, 300, 327–328, 441,\n444, 452–453\ncyberspace, 65, 68, 71, 80, 87, 108–109,\n111, 116, 120, 133, 137, 185, 187–188,\n191, 300, 365, 439–442, 446, 447–450,\n451–453\nsleuth, 121\nD\nDARPA, 19\nDatagram, 20–22, 26, 30–31, 33, 39, 253–254,\n383–386, 407–409, 462, 463\nDCE, 38\nDemilitarized zone (DMZ), 43, 152, 264–267,\n283–284, 289, 297, 458\nDenial of Service, 64, 67, 73, 77, 107, 118,\n129, 137, 148, 271, 274, 275–277, 314,\n316, 418–419, 432, 434, 437, 440, 458\nDestroyers, 126–127, 342\nDetection\nintrusion, 83, 87, 115, 130, 150, 166, 169,\n225, 268, 271, 273–298, 316, 321, 327,\n433, 458–460, 461–463\nDeterrence, 43, 132\nDisaster\nCommittee, 178\nhuman, 174\nmanagement, 173–184\nnatural, 174\nplanning, 182\nprevention, 175\nrecovery, 177\nresources, 183\nresponse, 177\nDistribution center, 219, 238, 239, 248\nDNS, 20, 116, 149, 167, 257, 265–266, 269,\n296, 317, 320, 458–460, 464\nDual-homed, 258\nDumpster diving, 102–103\nE\ne-attack, 109\nECBS, 51\nECMA, 51\nEducation\nfocused, 443\nformal, 445\nmass, 442–444, 445–446\noccasional, 443\nEffectiveness, 90, 93, 101, 164–165, 177,\n208–210, 286, 294, 297, 333–335, 351,\n355\nEGP, 31\nElectronic\ncodebook, 229\nsurveillance, 121, 193\nEncoding\nanalog, 7–9\ndigital, 7–9\nscheme, 7–9\nEncryption\nasymmetric, 49, 233, 437\nsymmetric, 49, 228, 230, 231–233, 235,\n237–238, 247, 437\nEnd-points, 252, 387\nEspionage\neconomic, 81–82, 88, 111\nmilitary, 81, 120\nEthernet, 16, 22, 29, 31, 35–36, 40, 250, 288,\n292, 406\nETSI, 51, 402, 404\nEvidence\nanalysis of, 309\npreserving, 307\nrecovery, 305\nExploits, 65–66, 68, 79, 110, 116, 128, 146,\n298, 432, 458, 460\n" }, { "page_number": 480, "text": "Index\n473\nF\nFDDI, 35, 37\nFederal criteria, 358, 362\nFiltering\naddress, 252, 254\ncontent, 331–350\nexclusion, 331, 333\nkeyword, 334\npacket, 253, 256–257, 334–335, 344\nport, 255, 257\nprofile, 335\nstateful, 253\nstateless, 253\nvirus, 336–337, 339–340, 343\nFingerprint, 48–49, 194–196, 209, 212, 245\nFIPS, 52–53, 358\nFirewall\nforensics, 268, 271\nlimitations, 269\nNAT, 263, 268\nservices, 252, 269\nSOHO, 252, 262–263\nVPN, 211, 261, 390\nForensic analysis, 166\nForensics\ncomputer, 299–329\nnetwork, 299–329\nFrequency hopping, 411\nFTP, 20, 54, 149, 152, 203, 222, 244, 254, 257,\n259, 261, 265–267, 269, 271, 296\nG\nGateways, 22–24, 28, 33–34, 46, 129, 249,\n252, 336\nGlobalization, 107, 111, 120–121, 173, 185,\n299, 439\nGoodtimes, 72–73\nGSM, 402, 408–409\nH\nHacktivist, 115–118, 129, 132\nHalf open, 66, 110, 135\nHash function, 49, 215, 245–246, 248, 309,\n369, 372, 381, 436\nHashing algorithm, 49\nHidden files, 312\nHoneypot, 288–290, 297, 417\nHotlines, 446\nHTTPS, 365–366, 373, 458\nHumanware, 95, 159, 161, 163, 169\nHybrid, 16, 237, 247, 277–278, 287–288, 297,\n388–390, 419, 421\nI\nICMP, 20, 21, 30–31, 67–68, 110, 151,\n252–254, 270–271, 383, 458\nIgnorance, 83, 121\nImpersonation, 103, 128\nIncident response, 154, 290, 317–318, 329,\n444\nInformation quality, 85, 87\nInfrared, 12, 40, 196, 398, 406, 412, 420, 429\nInitial sequence numbers, 256\nIntegrity, 35, 46, 49, 53, 58, 63, 81, 93,\n108–109, 111, 151–152, 165, 191,\n210, 217, 227–228, 233–235, 237–238,\n240–241, 244–247, 270, 288, 
298,\n303–304, 307–309, 342, 359–360, 361,\n370, 373–374, 378–379, 381–385, 388,\n415, 420, 433, 436\nInterface, 18, 20, 22, 28, 30–37, 46, 50, 65,\n93–94, 105, 136, 139, 156, 159, 162,\n181, 187, 191, 204, 211, 252, 287, 297,\n326, 327, 340, 352, 360, 405, 406, 424,\n461, 463\nInternetworking, 4, 30, 41, 112\nIntruder, 43–44, 66–67, 77, 80, 83, 89, 96,\n99, 109–111, 127–128, 132, 150–151,\n162–163, 192, 194, 203, 209, 213,\n215–216, 230, 236–237, 252, 256–257,\n259, 265–267, 274–276, 286–290, 304,\n316, 346, 374, 414–419, 435\nIntrusion detection, 83, 87, 115, 130, 132, 150,\n166, 169, 225, 268, 271, 273–298, 316,\n321, 327, 433, 458–460, 461–463, 464\nIPSec, 51–52, 58, 247, 261, 269, 271, 365,\n382–387, 390–391, 395, 458–460\nIPv4, 21, 346, 383, 386, 394, 458\nIPv6, 21, 346, 383, 386, 394\nIris, 47, 196–197, 204, 208–209, 212\nISAC, 107\nISDN, 37–38, 41, 221\nJ\nJamming, 35, 116, 416, 418, 434\nJavascript, 134, 141–142, 341\nJPEG, 75–76, 369\nK\nKerberos, 52, 211, 217–219, 224–225, 308,\n365, 366, 371, 375–378, 390, 394–395,\n420, 421, 460\nKey\ndistribution, 219, 233, 238–239, 243, 248,\n367, 460\nencryption, 230, 372, 420\nescrow, 242–243\n" }, { "page_number": 481, "text": "474\nIndex\nKey (cont.)\nexchange, 215, 236, 237–238, 371–372,\n390, 420, 435\ninfrastructure, 217, 222, 240, 243, 370, 466\nmanagement, 51–52, 54, 237–240, 415,\n419–420, 434–435, 437–438, 460\nprivate, 49, 216–217, 222, 228, 233–236,\n246, 267, 275, 377, 380\npublic, 49, 51–52, 213, 216–218, 222,\n224–225, 228, 230, 233–248, 366–369,\n371–374, 377, 379–380, 390, 435, 466\nL\nLAN, 5–41, 109, 122, 223, 298, 391–393,\n397–398, 406–407, 413–422\nLand.c attack, 110\nLeast privileges, 201\nLegislation, 130–131, 300, 349, 439–440, 449\nLoad balance, 280–282\nM\nMAC, 35, 51, 191–192, 246–247, 254, 315,\n367, 372, 381–382, 411–414, 419–420,\n426, 436\nMAN, 6, 13, 40\nManchester, 9\nMD-5, 52\nMobile IP, 406–408\nModes\ntransport, 386–387, 390, 394\ntunnel, 385–387, 394\nMonitoring remote, 47\nMultiplexing, 9, 33, 401, 412, 426\nMulti-ported, 27–29\nN\nNarrowband, 40, 406–407\nNetwork\ncentralized, 4–5, 362\ncivic, 6, 209\ndistributed, 4–5, 45\nextended, 12, 250\nmobile, 12, 223, 403\npacket, 25, 30, 65, 108, 211, 249–250,\n252–253, 258, 260–261, 263–270, 346,\n387, 398, 420–421\npublic, 38, 252, 318, 468\nwireless, 12, 39, 41, 223, 397–422\nNext-hop, 31–32, 128, 320\nNIPC, 96, 108\nNIST, 51, 53\nNmap, 460, 464, 465\nNonrepudiation, 228, 234, 237, 246, 433\nNormalizer, 293–294\nNotoriety, 83, 111, 121\nNRZ, 8\nNRZ-I, 8\nNRZ-L, 8\nO\nOpen architecture, 17, 65, 163, 403\nOpenSSL, 98\nOrange Book, 55, 354, 355, 358–359, 361–362\nOSI, 17–20, 28, 30, 31, 33, 38\nmodel, 17–20, 28, 38, 405, 411\nP\nPacket\nfiltering, 252–253, 256–257, 334–335\ninspection, 252–254, 259\nPassword\ncracking, 192, 301, 313\none-time, 214–215, 460\ntoken, 215\nPathogen, 71\nPGP, 52, 54, 236, 238, 307, 320, 365–368, 372,\n394–395, 458, 460, 466–467\nPhase shift, 8\nPhreaking, 113, 119, 132\nPing-of-death, 275\nPKCS, 51–53, 370, 371, 372\nPKI, 52, 58, 217–218, 222, 225, 240, 243–244,\n247–248, 367, 466\nPKZip, 306, 324\nPPP\nauthentication, 221–222, 392\nconfidentiality, 392\nPrank, 120\nPrecedence, 186–187\nPrevention, 43, 46, 109, 129–130, 150,\n174–177, 184, 273–298, 441\nProtocol\nalert, 379–380\nSSL record, 380–381\nProxy server, 252, 257–259, 261, 263, 271,\n336–337, 344–346\nR\nRADIUS, 220, 221, 296, 365, 391–394, 420,\n421\nRegulation, 56, 130, 300, 350, 440–441\nRepeater, 9, 27–28, 38\nReplication, 126, 167, 224, 342, 434\nRisk assessment, 
182\nRSA, 51–52, 54, 58, 213, 236, 245, 247, 367,\n369, 370–371, 373\n" }, { "page_number": 482, "text": "Index\n475\nS\nSATAN, 460, 464–465\nScanning\ncontent, 332\nheuristic, 332\nScripts\nCGI, 133–139, 140, 350, 409\nhostile, 91, 128, 133–144, 175, 312\nPerl, 134, 139–140\nserver-side, 139–143\nSecurity\nanalysis, 361\nassessment, 145–169, 354\nassociations, 384–385\nassurance, 145–169, 354, 360\nawareness, 55–56, 85, 87, 94, 105, 153,\n443, 446–447, 451\ncertification, 145, 146, 165–166\nmodel, 458\npolicy, 58, 93, 129–130, 145–148,\n149–155, 163–165, 189, 223–224, 249,\n250–253, 260–263, 282–283, 285, 291,\n294, 333, 360, 460, 469\nrequirements, 145–146, 152–153, 155–156,\n165, 185, 354–356, 359–381, 388–389,\n433, 441\nthreat, 56, 63–88, 110, 137–138, 140–143,\n150–169, 177, 416, 450, 465, 468–469\nvulnerability, 77, 89–90, 161\nSelf-regulation, 130, 440–441, 446\nSensor Networks\ndesign features, 425\ngrowth, 424\nrouting in, 425\nsecuring, 432\nvulnerability of, 431\nShadow, 297, 461, 463\nSignature\nchameleon, 213\ndigital, 49–50, 52–54, 213, 216, 222, 225,\n228, 241–242, 246–248, 366–367,\n369–371, 373–374, 377\nS/Key, 214–215\nSlack space, 312, 322\nS/MIME, 51–55, 365–369, 395, 458, 460\nSniffer, 48, 126, 128, 193–194, 258, 289, 414,\n461\nSniffing, 67, 121, 129\nSNMP, 20, 63, 98, 149, 282, 418, 458, 460\nSnort, 296–297, 326, 460, 461–464\nSocial engineering, 64, 79–80, 87, 90,\n102–103, 106, 121, 128, 153, 163,\n169, 418\nSoftware\napplication, 46, 54, 96, 151, 162, 274, 305,\n468\ncontrols, 442\nsecurity, 106\nSpam laws, 349\nSpread spectrum, 12, 40, 407, 411–412\nSSID, 414, 417–420\nSteganography, 309, 312–313\nSurrogate, 4–5, 48, 71, 74, 117, 126–127,\n337–340\nSwitching\ncircuit, 24\ndata, 24\npacket, 20, 24–26\nSYN flooding, 66, 110\nT\nTACAS, 394\nTACAS+, 394\nTCPDump, 92, 128, 296, 326, 461–463\nTCP/IP, 18–20, 33, 39–40, 52–53, 66, 252,\n257, 263, 296, 373, 387, 410, 418, 426\nTDM, 10\nTDMA, 401–402, 408\nTeardrop, 110–111\nTerrorism, 80–81, 108, 120, 124–125, 174,\n179, 348\nThird generation, 402, 422\nThree-way handshake, 23, 66–67, 86, 110,\n128, 134–135, 221, 256, 380\nTime\nbomb, 126–127, 342\nresponse, 83, 100\nturnaround, 84, 87, 99–101\nToolkit, 59, 168, 290, 301, 305, 324\nTopology\nbus, 14–15\nring, 15–17\nstar, 15–16\nTrapdoor, 126–127, 278\nTrust model, 210\nU\nUDP, 20–21, 24, 66–67, 88, 111, 151, 252–260,\n269–271, 383, 386, 409, 411, 458\nUnauthorized access, 43, 45–46, 63–64, 109,\n112, 139, 148, 201–203, 215, 252, 273,\n277, 383, 433\nV\nVBScript, 134, 141–143, 341\nVendetta, 80, 82, 90, 119, 122, 124\nVerifier, 211, 288\n" }, { "page_number": 483, "text": "476\nIndex\nVictim computer, 73–74, 77, 109, 111,\n123–125\nVirtual sit-in, 116–117, 132\nVirus\nboot, 339–340, 350\nCode Red, 100, 115, 338, 341\nmultipartite, 342\nPalm, 75–76\npolymorphic, 341, 344\nretro, 342\nstealth, 341\nTrojan horse, 341\nVPN\nhybrid, 388–390\nsecure, 388–390\ntrusted, 388–390\nVulnerability assessment, 103–105, 145,\n168–169, 274\nW\nW3C, 51, 54, 353\nWAN, 5–41, 109, 296, 389, 397\nWar\nchalking, 416\ndriving, 416, 421\nfare, 118, 120, 132\nflying, 416\nGames, 416, 421\nwalking, 416, 421\nWI-FI, 223, 406–409, 413, 416, 419, 421\nWildList, 344\nWinNuke, 275\nWinZip, 306, 324\nWireless\nLAN, 39, 41, 51, 397, 406, 410, 416, 422\nloop, 405\nWiretap, 82, 129, 291\nWorkload, 157\nX\nX.25, 37–41\nxDSL, 39\nXML, 52, 54, 408\nY\nY2K\nbug, 72–73\ncrisis, 72\nZ\nZDNET, 73, 298\n" } ] }