{ "pages": [ { "page_number": 1, "text": " \n \n" }, { "page_number": 2, "text": "Testing Web Security-Assessing the Security of Web \nSites and Applications \nSteven Splaine \n \nWiley Publishing, Inc. \nPublisher: Robert Ipsen \nEditor: Carol Long \nDevelopmental Editor: Scott Amerman \nManaging Editor: John Atkins \nNew Media Editor: Brian Snapp \nText Design & Composition: Wiley Composition Services \nDesignations used by companies to distinguish their products are often claimed as \ntrademarks. In all instances where Wiley Publishing, Inc., is aware of a claim, the product \nnames appear in initial capital or ALL CAPITAL LETTERS. Readers, however, should \ncontact the appropriate companies for more complete information regarding trademarks \nand registration. \nThis book is printed on acid-free paper. \nCopyright © 2002 by Steven Splaine. \nISBN:0471232815 \nAll rights reserved. \nPublished by Wiley Publishing, Inc., Indianapolis, Indiana \nPublished simultaneously in Canada \nNo part of this publication may be reproduced, stored in a retrieval system, or transmitted in \nany form or by any means, electronic, mechanical, photocopying, recording, scanning, or \notherwise, except as permitted under Section 107 or 108 of the 1976 United States \nCopyright Act, without either the prior written permission of the Publisher, or authorization \nthrough payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., \n222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470. Requests \nto the Publisher for permission should be addressed to the Legal Department, Wiley \nPublishing, Inc., 10475 Crosspointe Blvd., Indianapolis, IN 46256, (317) 572-3447, fax \n(317) 572-4447, Email: . \nLimit of Liability/Disclaimer of Warranty: While the publisher and author have used their \nbest efforts in preparing this book, they make no representations or warranties with respect \nto the accuracy or completeness of the contents of this book and specifically disclaim any \nimplied warranties of merchantability or fitness for a particular purpose. No warranty may be \ncreated or extended by sales representatives or written sales materials. The advice and \nstrategies contained herein may not be suitable for your situation. You should consult with a \nprofessional where appropriate. Neither the publisher nor author shall be liable for any loss \nof profit or any other commercial damages, including but not limited to special, incidental, \nconsequential, or other damages. \n" }, { "page_number": 3, "text": "For general information on our other products and services, please contact our Customer \nCare Department within the United States at (800) 762-2974, outside the United States at \n(317) 572-3993 or fax (317) 572-4002. \nWiley also publishes its books in a variety of electronic formats. Some content that appears \nin print may not be available in electronic books. \nLibrary of Congress Cataloging-in-Publication Data: \n0-471-23281-5 \nPrinted in the United States of America \n10 9 8 7 6 5 4 3 2 1 \nTo my wife Darlene and our sons, Jack and Sam, who every day remind me of just how \nfortunate I am. \nTo the victims and heroes of September 11, 2001, lest we forget that freedom must always \nbe vigilant. \nAcknowledgments \nThe topic of Web security is so large and the content is so frequently changing that it is \nimpossible for a single person to understand every aspect of security testing. 
For this \nreason alone, this book would not have been possible without the help of the many security \nconsultants, \ntesters, \nwebmasters, \nproject \nmanagers, \ndevelopers, \nDBAs, \nLAN \nadministrators, firewall engineers, technical writers, academics, and tool vendors who were \nkind enough to offer their suggestions and constructive criticisms, which made this book \nmore comprehensive, accurate, and easier to digest. \nMany thanks to the following team of friends and colleagues who willingly spent many hours \nof what should have been their free time reviewing this book and/or advising me on how \nbest to proceed with this project. \nJames Bach \nJoey Maier \nRex Black \nBrian McCaughey \nRoss Collard \nWayne Middleton \nRick Craig \nClaudette Moore \nDan Crawford \nDavid Parks \nYves de Montcheuil \nEric Patel \nMickey Epperson \nRoger Rivest \nDanny Faught \nMartin Ryan \nPaul Gerrard \nJohn Smentowski \nStefan Jaskiel \nJohn Splaine \nJeff Jones \nHerbert Thompson \nPhilip Joung \nMichael Waldmann \nA special thank-you goes to my wife Darlene and our sons Jack and Sam, for their love and \ncontinued support while I was writing this book. I would especially like to thank Jack for \nunderstanding why Daddy couldn't go play ball on so many evenings. \n \n" }, { "page_number": 4, "text": "Professional Acknowledgment \nI would like to thank everyone who helped me create and then extend Software Quality \nEngineering's Web Security Testing course (www.sqe.com), the source that provided much \nof the structure and content for this book. Specifically, many of SQE's staff, students, and \nclients provided me with numerous suggestions for improving the training course, many of \nwhich were subsequently incorporated into this book. \nSTEVEN SPLAINE is a chartered software engineer with more than twenty years of \nexperience in project management, software testing and product development. He is a \nregular speaker at software testing conferences and lead author of The Web Testing \nHandbook. \n" }, { "page_number": 5, "text": "Foreword \nAs more and more organizations move to Internet-based and intranet-based applications, \nthey find themselves exposed to new or increased risks to system quality, especially in the \nareas of performance and security. Steven Splaine's last book, The Web Testing \nHandbook, provided the reader with tips and techniques for testing performance along with \nmany other important considerations for Web testing, such as functionality. Now Steve \ntakes on the critical issue of testing Web security. \nToo many users and even testers of Web applications believe that solving their security \nproblems merely entails buying a firewall and connecting the various cables. In this book, \nSteve identifies this belief as the firewall myth, and I have seen victims of this myth in my \nown testing, consulting, and training work. This book not only helps dispel this myth, but it \nalso provides practical steps you can take that really will allow you to find and resolve \nsecurity problems throughout the network. Client-side, server-side, Internet, intranet, \noutside hackers and inside jobs, software, hardware, networks, and social engineering, it's \nall covered here. How should you run a penetration test? How can you assess the level of \nrisk inherent in each potential security vulnerability, and test appropriately? 
When \nconfronted with an existing system or building a new one, how do you keep track of \neverything that's out there that could conceivably become an entryway for trouble? In a \nreadable way, Steve will show you the ins and outs of Web security testing. This book will \nbe an important resource for me on my next Web testing project. If you are responsible for \nthe testing or security of a Web system, I bet it will be helpful to you, too. \nRex Black \nRex Black Consulting \nBulverde, Texas \n" }, { "page_number": 6, "text": "Preface \nAs the Internet continues to evolve, more and more organizations are replacing their \nplaceholder or brochureware Web sites with mission-critical Web applications designed to \ngenerate revenue and integrate with their existing systems. One of the toughest challenges \nfacing those charged with implementing these corporate goals is ensuring that these new \nstorefronts are safe from attack and misuse. \nCurrently, the number of Web sites and Web applications that need to be tested for security \nvulnerabilities far exceeds the number of security professionals who are sufficiently \nexperienced to carry out such an assessment. Unfortunately, this means that many Web \nsites and applications are either inadequately tested or simply not tested at all. These \norganizations are, in effect, playing a game of hacker roulette, just hoping to stay lucky. \nA significant reason that not enough professionals are able to test the security of a Web site \nor application is the lack of introductory-level educational material. Much of the educational \nmaterial available today is either high-level/strategic in nature and aimed at senior \nmanagement and chief architects who are designing the high-level functionality of the \nsystem, or low-level/extremely technical in nature and aimed at experienced developers \nand network engineers charged with implementing these designs. \nTesting Web Security is an attempt to fill the need for a straightforward, easy-to-follow book \nthat can be used by anyone who is new to the security-testing field. Readers of my first \nbook that I coauthored with Stefan Jaskiel will find I have retained in this book the checklist \nformat that we found to be so popular with The Web Testing Handbook (Splaine and \nJaskiel, 2001) and will thereby hopefully make it easier for security testers to ensure that \nthe developers and network engineers have implemented a system that meets the explicit \n(and implied) security objectives envisioned by the system's architects and owners. 
\nSteven Splaine \nTampa, Florida \n" }, { "page_number": 7, "text": " \nTable of Contents \n \nTesting Web Security—Assessing the Security of Web\nSites and Applications \n Foreword \n Preface \n Part I - An Introduction to the Book \n Chapter 1 - Introduction \n Part II - Planning the Testing Effort \n Chapter 2 - Test Planning \n Part III - Test Design \n Chapter 3 - Network Security \n Chapter 4 - System Software Security \n Chapter 5 - Client-Side Application Security \n Chapter 6 - Server-Side Application Security \n Chapter 7 - Sneak Attacks: Guarding Against the Less-\nThought-of Security Threats \n Chapter 8 - Intruder \nConfusion, \nDetection, \nand\nResponse \n Part IV - Test Implementation \n Chapter 9 - Assessment and Penetration Options \n Chapter 10 - Risk Analysis \n Epilogue \n Part V - Appendixes \n Appendix A - An \nOverview \nof \nNetwork \nProtocols,\nAddresses, and Devices \n Appendix B - SANS Institute Top 20 Critical Internet\nSecurity Vulnerabilities \n Appendix C - Test-Deliverable Templates \n Additional Resources \n Index \n List of Figures \n List of Tables \n List of Sidebars \n \n" }, { "page_number": 8, "text": " \n1 \nPart I: An Introduction to the Book \nChapter List \nChapter 1: Introduction \n" }, { "page_number": 9, "text": " \n2 \nChapter 1: Introduction \nOverview \nThe following are some sobering statistics and stories that seek to illustrate the growing \nneed to assess the security of Web sites and applications. The 2002 Computer Crime and \nSecurity Survey conducted by the Computer Security Institute (in conjunction with the San \nFrancisco Federal Bureau of Investigation) reported the following statistics (available free of \ncharge via www.gocsi.com): \nƒ \nNinety percent of respondents (primarily large corporations and government \nagencies) detected computer security breaches within the last 12 months. \nƒ \nSeventy-four percent of respondents cited their Internet connection as a frequent \npoint of attack, and 40 percent detected system penetration from the outside. \nƒ \nSeventy-five percent of respondents estimated that disgruntled employees were the \nlikely source of some of the attacks that they experienced. \nThe following lists the number of security-related incidents reported to the CERT \nCoordination Center (www.cert.org) for the previous 4 1/2 years: \nƒ \n2002 (Q1 and Q2)-43,136 \nƒ \n2001-52,658 \nƒ \n2000-21,756 \nƒ \n1999-9,859 \nƒ \n1998-3,734 \nIn February 2002, Reuters (www.reuters.co.uk) reported that \"hackers\" forced CloudNine \nCommunications-one of Britain's oldest Internet service providers (ISPs) -out of business. \nCloudNine came to the conclusion that the cost of recovering from the attack was too great \nfor the company to bear, and instead elected to hand over their customers to a rival ISP. \nIn May 2002, CNN/Money (www.money.cnn.com) reported that the financing division of a \nlarge U.S. automobile manufacturer was warning 13,000 people to be aware of identity theft \nafter the automaker discovered \"hackers\" had posed as their employees in order to gain \naccess to consumer credit reports. \n \nThe Goals of This Book \nThe world of security, especially Web security, is a very complex and extensive knowledge \ndomain to attempt to master-one where the consequences of failure can be extremely high. \nPractitioners can spend years studying this discipline only to realize that the more they \nknow, the more they realize they need to know. 
In fact, the challenge may seem to be so \ndaunting that many choose to shy away from the subject altogether and deny any \nresponsibility for the security of the system they are working on. \"We're not responsible for \nsecurity-somebody else looks after that\" is a common reason many members of the project \nteam give for not testing a system's security. Of course, when asked who the somebody \nelse is, all too often the reply is \"I don't know,\" which probably means that the security \ntesting is fragmented or, worse still, nonexistent. \nA second hindrance to effective security testing is the naive belief held by many owners and \nsenior managers that all they have to do to secure their internal network and its applications \nis purchase a firewall appliance and plug it into the socket that the organization uses to \nconnect to the Internet. Although a firewall is, without doubt, an indispensable defense for a \nWeb site, it should not be the only defense that an organization deploys to protect its Web \nassets. The protection afforded by the most sophisticated firewalls can be negated by a \n" }, { "page_number": 10, "text": " \n3 \npoorly designed Web application running on the Web site, an oversight in the firewall's \nconfiguration, or a disgruntled employee working from the inside. \n \nTHE FIREWALL MYTH \nThe firewall myth is alive and well, as the following two true conversations illustrate. \nAnthony is a director at a European software-testing consultancy, and Kevin is the owner of \na mass-marketing firm based in Florida. \nƒ Anthony: We just paid for someone to come in and install three top-of-the-line \nfirewalls, so we're all safe now. \nƒ Security tester: Has anybody tested them to make sure they are configured \ncorrectly? \nƒ Anthony: No, why should we? \nƒ Kevin: We're installing a new wireless network for the entire company. \nƒ Security tester: Are you encrypting the data transmissions? \nƒ Kevin: I don't know; what difference does it make? No one would want to hack us, \nand even if they did, our firewall will protect us. \nThis book has two goals. The first goal is to raise the awareness of those managers \nresponsible for the security of a Web site, conveying that a firewall should be part of the \nsecurity solution, but not the solution. This information can assist them in identifying and \nplanning the activities needed to test all of the possible avenues that an intruder could use \nto compromise a Web site. The second goal is aimed at the growing number of individuals \nwho are new to the area of security testing, but are still expected to evaluate the security of \na Web site. Although no book can be a substitute for years of experience, this book \nprovides descriptions and checklists for hundreds of tests that can be adapted and used as \na set of candidate test cases. These tests can be included in a Web site's security test \nplan(s), making the testing effort more comprehensive than it would have been otherwise. \nWhere applicable, each section also references tools that can be used to automate many of \nthese tasks in order to speed up the testing process. \n \nThe Approach of This Book \nTesting techniques can be categorized in many different ways; white box versus black box \nis one of the most common categorizations. Black-box testing (also known as behavioral \ntesting) treats the system being tested as a black box into which testers can't see. 
As a \nresult, all the testing must be conducted via the system's external interfaces (for example, \nvia an application's Web pages), and tests need to be designed based on what the system \nis expected to do and in accordance with its explicit or implied requirements. White-box \ntesting assumes that the tester has direct access to the source code and can look into the \nbox and see the inner workings of the system. This is why white-box testing is sometimes \nreferred to as clear-box, glass-box, translucent, or structural testing. Having access to the \nsource code helps testers to understand how the system works, enabling them to design \ntests that will exercise specific program execution paths. Input data can be submitted via \nexternal or internal interfaces. Test results do not need to be based solely on external \noutputs; they can also be deduced from examining internal data stores (such as records in \nan application's database or entries in an operating system's registry). \nIn general, neither testing approach should be considered inherently more effective at \nfinding defects than the other, but depending upon the specific context of an individual \ntesting project (for example, the background of the people who will be doing the testing-\ndeveloper oriented versus end-user oriented), one approach could be easier or more cost-\neffective to implement than the other. Beizer (1995), Craig et al. (2002), Jorgensen (2002), \n" }, { "page_number": 11, "text": " \n4 \nand Kaner et al. (1999) provide additional information on black-box and white-box testing \ntechniques. \nGray-box testing techniques can be regarded as a hybrid approach. In other words, a tester \nstill tests the system as a black box, but the tests are designed based on the knowledge \ngained by using white-box-like investigative techniques. Gray-box testers using the \nknowledge gained from examining the system's internal structure are able to design more \naccurate/focused tests, which yield higher defect detection rates than those achieved using \na purely traditional black-box testing approach. At the same time, however, gray-box testers \nare also able to execute these tests without having to use resource-consuming white-box \ntesting infrastructures. \nGRAY-BOX TESTING \nGray-box testing incorporates elements of both black-box and white-box testing. It consists \nof methods and tools derived from having some knowledge of the internal workings of the \napplication and the environment with which it interacts. This extra knowledge can be \napplied in black-box testing to enhance testing productivity, bug finding, and bug-analyzing \nefficiency. \nSource: Nguyen (2000). \nWherever possible, this book attempts to adopt a gray-box approach to security testing. By \ncovering the technologies used to build and deploy the systems that will be tested and then \nexplaining the potential pitfalls (or vulnerabilities) of each technology design or \nimplementation strategy, the reader will be able to create more effective tests that can still \nbe executed in a resource-friendly black-box manner. \nThis book stops short of describing platform- and threat-specific test execution details, such \nas how to check that a Web site's Windows 2000/IIS v5.0 servers have been protected from \nan attack by the Nimda worm (for detailed information on this specific threat, refer to CERT \nadvisory CA-2001-26-www.cert.org). 
Rather than trying to describe in detail the specifics of \nthe thousands of different security threats that exist today (in the first half of 2002 alone, the \nCERT Coordination Center recorded 2,148 reported vulnerabilities), this book describes \ngeneric tests that can be extrapolated and customized by the reader to accommodate \nindividual and unique needs. In addition, this book does not expand on how a security \nvulnerability could be exploited (information that is likely to be more useful to a security \nabuser than a security tester) and endeavors to avoid making specific recommendations on \nhow to fix a security vulnerability, since the most appropriate remedy will vary from \norganization to organization and such a decision (and subsequent implementation) would \ngenerally be considered to be the role of a security designer. \n \nHow This Book Is Organized \nAlthough most readers will probably find it easier to read the chapters in sequential order, \nthis book has been organized in a manner that permits readers to read any of the chapters \nin any order. Depending on the background and objectives of different readers, some may \neven choose to skip some of the chapters. For example, a test manager who is well versed \nin writing test plans used to test the functionality of a Web application may decide to skip \nthe chapter on test planning and focus on the chapters that describe some of the new types \nof tests that could be included in his or her test plans. In the case of an application \ndeveloper, he or she may not be concerned with the chapter on testing a Web site's \nphysical security because someone else looks after that (just so long as someone actually \ndoes) and may be most interested in the chapters on application security. \n" }, { "page_number": 12, "text": " \n5 \nTo make it easier for readers to hone in on the chapters that are of most interest to them, \nthis book has been divided into four parts. Part 1 is comprised of this chapter and provides \nan introduction and explanation of the framework used to construct this book. \nChapter 2, \"Test Planning,\" provides the material for Part 2, \"Planning the Testing Effort,\" \nand looks at the issues surrounding the planning of the testing effort. \nPart 3, \"Test Design,\" is the focus of this book and therefore forms the bulk of its content by \nitemizing the various candidate tests that the testing team should consider when evaluating \nwhat they are actually going to test as part of the security-testing effort of a Web site and its \nassociated Web application(s). Because the testing is likely to require a variety of different \nskill sets, it's quite probable that different people will execute different groups of tests. With \nthis consideration in mind, the tests have been grouped together based on the typical skill \nsets and backgrounds of the people who might be expected to execute them. 
This part \nincludes the following chapters: \nƒ \nChapter 3: Network Security \nƒ \nChapter 4: System Software Security \nƒ \nChapter 5: Client-Side Application Security \nƒ \nChapter 6: Server-Side Application Security \nƒ \nChapter 7: Sneak Attacks: Guarding against the Less-Thought-of Security Threats \nƒ \nChapter 8: Intruder Confusion, Detection, and Response \nHaving discussed what needs to be tested, Part 4, \"Test Implementation,\" addresses the \nissue of how to best execute these tests in terms of who should actually do the work, what \ntools should be used, and what order the tests should be performed in (ranking test \npriority). This part includes the following chapters: \nƒ \nChapter 9: Assessment and Penetration Options \nƒ \nChapter 10: Risk Analysis \nAs a means of support for these 10 chapters, the appendix provides some additional \nbackground information, specifically: a brief introduction to the basics of computer networks \nas utilized by many Web sites (in case some of the readers of this book are unfamiliar with \nthe components used to build Web sites), a summarized list of the top-20 critical Internet \nsecurity vulnerabilities (as determined by the SANS Institute), and some sample test \ndeliverable templates (which a security-testing team could use as a starting point for \ndeveloping their own customized documentation). \nFinally, the resources section not only serves as a bibliography of all the books and Web \nsites referenced in this book, but it also lists other reference books that readers interested \nin testing Web security may find useful in their quest for knowledge. \n \nTerminology Used in This Book \nThe following two sections describe some of the terms used in this book to describe the \nindividuals who might seek to exploit a security vulnerability on a Web site-and hence the \npeople that a security tester is trying to inhibit-and the names given to some of the more \ncommon deliverables that a security tester is likely to produce. \nHackers, Crackers, Script Kiddies, and Disgruntled Insiders \nThe term computer hacker was originally used to describe someone who really knew how \nthe internals of a computer (hardware and/or software) worked and could be relied on to \ncome up with ingenious workarounds (hacks) to either fix a problem with the system or \nextend its original capabilities. Somewhere along the line, the popular press relabeled this \n" }, { "page_number": 13, "text": " \n6 \nterm to describe someone who tries to acquire unauthorized access to a computer or \nnetwork of computers. \nThe terminology has become further blurred by the effort of some practitioners to \ndifferentiate the skill levels of those seeking unauthorized access. The term cracker is \ntypically used to label an attacker who is knowledgeable enough to create his or her own \nhacks, whereas the term script kiddie is used to describe a person who primarily relies on \nthe hacks of others (often passed around as a script or executable). The situation becomes \neven less clear if you try to pigeonhole disgruntled employees who don't need to gain \nunauthorized access in order to accomplish their malicious goals because they are already \nauthorized to access the system. \nNot all attackers are viewed equally. Aside from their varying technical expertise, they also \nmay be differentiated by their ethics. Crudely speaking, based on their actions and \nintentions, attackers are often be categorized into one of the following color-coded groups: \nƒ \nWhite-hat hackers. 
These are individuals who are authorized by the owner of a \nWeb site or Web-accessible product to ascertain whether or not the site or product is \nadequately protected from known security loopholes and common generic exploits. \nThey are also known as ethical hackers, or are part of a group known as a tiger team \nor red team. \nƒ \nGray-hat hackers. Also sometimes known as wackers, gray-hat hackers attack a \nnew product or technology on their own initiative to determine if the product has any \nnew security loopholes, to further their own education, or to satisfy their own curiosity. \nAlthough their often-stated aim is to improve the quality of the new technology or their \nown knowledge without directly causing harm to anyone, their methods can at times \nbe disruptive. For example, some of these attackers will not inform the product's \nowner of a newly discovered security hole until they have had time to build and \npublicize a tool that enables the hole to be easily exploited by others. \n \nƒ HACKER \nƒ Webster's II New Riverside Dictionary offers three alternative definitions for the word \nhacker, the first two of which are relevant for our purposes: \no 1 a. Computer buff \no 1b. One who illegally gains access to another's electronic system \n \n \nƒ COLOR-CODING ATTACKERS \nƒ The reference to colored hats comes from Hollywood's use of hats in old black-and-\nwhite cowboy movies to help an audience differentiate between the good guys \n(white hats) and the bad guys (black hats). \n \n \n \n• Black-hat hackers. Also known as crackers, these are attackers who typically \nseek to exploit known (and occasionally unknown) security holes for their own \npersonal gain. Script kiddies are often considered to be the subset of black-\nhatters, whose limited knowledge forces them to be dependent almost exclusively \nupon the tools developed by more experienced attackers. Honeynet Project \n(2001) provides additional insight into the motives of black-hat hackers. \nOf course, assigning a particular person a single designation can be somewhat arbitrary \nand these terms are by no means used consistently across the industry; many people have \nslightly different definitions for each category. The confusion is compounded further when \n" }, { "page_number": 14, "text": " \n7 \nconsidering individuals who do not always follow the actions of just one definition. For \ninstance, if an attacker secretly practices the black art at night, but also publicly fights the \ngood fight during the day, what kind of hatter does that make him? \nRather than use terms that potentially carry different meanings to different readers (such as \nhacker), this book will use the terms attacker, intruder, or assailant to describe someone \nwho is up to no good on a Web site. \nTesting Vocabulary \nMany people who are new to the discipline of software testing are sometimes confused \nover exactly what is meant by some of the common terminology used to describe various \nsoftware-testing artifacts. For example, they might ask the question, \"What's the difference \nbetween a test case and a test run?\" This confusion is in part due to various practitioners, \norganizations, book authors, and professional societies using slightly different vocabularies \nand often subtly different definitions for the terms defined within their own respective \nvocabularies. These terms and definitions vary for many reasons. 
Some definitions are \nembryonic (defined early in this discipline's history), whereas others reflect the desire by \nsome practitioners to push the envelope of software testing to new areas. \nThe following simple definitions are for the testing artifacts more frequently referenced in \nthis book. They are not intended to compete with or replace the more verbose and exacting \ndefinitions already defined in industry standards and other published materials, such as \nthose defined by the Institute of Electrical and Electronics Engineers (www.ieee.org), the \nProject \nManagement \nInstitute \n(www.pmi.org), \nor \nRational's \nUnified \nProcess \n(www.rational.com). Rather, they are intended to provide the reader with a convenient \nreference of how these terms are used in this book. Figure 1.1 graphically summarizes the \nrelationship between each of the documents. \n \n \n \nFigure 1.1: Testing documents. \n \n \nƒ \nTest plan. A test plan is a document that describes the what, why, who, when, and \nhow of a testing project. Some testing teams may choose to describe their entire \ntesting effort within a single test plan, whereas others find it easier to organize groups \nof tests into two or more test plans, with each test plan focusing on a different aspect \nof the testing effort. \nTo foster better communication across projects, many organizations have defined test \nplan templates. These templates are then used as a starting point for each new test \nplan, and the testing team refines and customizes each plan(s) to fit the unique needs \nof their project. \n \n" }, { "page_number": 15, "text": " \n8 \nƒ \nTest item. A test item is a hardware device or software program that is the subject of \nthe testing effort. The term system under test is often used to refer to the collection of \nall test items. \nƒ \nTest. A test is an evaluation with a clearly defined objective of one or more test \nitems. A sample objective could look like the following: \"Check that no unneeded \nservices are running on any of the system's servers.\" \nƒ \nTest case. A test case is a detailed description of a test. Some tests may \nnecessitate utilizing several test cases in order to satisfy the stated objective of a \nsingle test. The description of the test case could be as simple as the following: \n\"Check that NetBIOS has been disabled on the Web server.\" It could also provide \nadditional details on how the test should be executed, such as the following: \"Using \nthe tool nmap, an external port scan will be performed against the Web server to \ndetermine if ports 137-139 have been closed.\" \nDepending on the number and complexity of the test cases, a testing team may choose \nto specify their test cases in multiple test case documents, consolidate them into a \nsingle document, or possibly even embed them into the test plan itself. \nƒ \nTest script. A test script is a series of steps that need to be performed in order to \nexecute a test case. Depending on whether the test has been automated, this series \nof steps may be expressed as a sequence of tasks that need to be performed \nmanually or as the source code used by an automated testing tool to run the test. \nNote that some practitioners reserve the term test script for automated scripts and \nuse the term test procedure for the manual components. \nƒ \nTest run. A test run is the actual execution of a test script. Each time a test case is \nexecuted, it creates a new instance of a test run. \n \nWho Should Read This Book? 
\nThis book is aimed at three groups of people. The first group consists of the owners, CIOs, \nmanagers, and security officers of a Web site who are ultimately responsible for the security \nof their site. Because these people might not have a strong technical background and, \nconsequently, not be aware of all the types of threats that their site faces, this book seeks \nto make these critical decision makers aware of what security testing entails and thereby \nenable them to delegate (and fund) a security-testing effort in a knowledgeable fashion. \nThe second group of individuals who should find this book useful are the architects and \nimplementers of a Web site and application (local area network [LAN] administrators, \ndevelopers, database administrators [DBAs], and so on) who may be aware of some (or all) \nof the security factors that should be considered when designing and building a Web site, \nbut would appreciate having a checklist of security issues that they could use as they \nconstruct the site. These checklists can be used in much the same way that an experienced \nairplane pilot goes through a mandated preflight checklist before taking off. These are \nhelpful because the consequences of overlooking a single item can be catastrophic. \nThe final group consists of the people who may be asked to complete an independent \nsecurity assessment of the Web site (in-house testers, Q/A analysts, end users, or outside \nconsultants), but may not be as familiar with the technology (and its associated \nvulnerabilities) as the implementation group. For the benefit of these people, this book \nattempts to describe the technologies commonly used by implementers to build Web sites \nto a level of detail that will enable them to test the technology effectively but without getting \nas detailed as a book on how to build a Web site. \n \n" }, { "page_number": 16, "text": " \n9 \nSummary \nWith the heightened awareness for the need to securely protect an organization's electronic \nassets, the supply of available career security veterans is quickly becoming tapped out, \nwhich has resulted in an influx of new people into the field of security testing. This book \nseeks to provide an introduction to Web security testing for those people with relatively little \nexperience in the world of information security (infosec), allowing them to hit the ground \nrunning. It also serves as an easy-to-use reference book that is full of checklists to assist \ncareer veterans such as the growing number of certified information systems security \nprofessionals (CISSPs) in making sure their security assessments are as comprehensive \nas they can be. Bragg (2002), Endorf (2001), Harris (2001), Krutz et al. (2001 and 2002), \nPeltier (2002), the CISSP Web portal (www.cissp.com), and the International Information \nSystems Security Certifications Consortium (www.isc2.org) provide additional information \non CISSP certification. \n" }, { "page_number": 17, "text": " \n10 \nPart II: Planning the Testing Effort \nChapter List \nChapter 2: Test Planning \n" }, { "page_number": 18, "text": " \n11 \nChapter 2: Test Planning \nFailing to adequately plan a testing effort will frequently result in the project's sponsors \nbeing unpleasantly surprised. The surprise could be in the form of an unexpected cost \noverrun by the testing team, or finding out that a critical component of a Web site wasn't \ntested and consequently permitted an intruder to gain unauthorized access to company \nconfidential information. 
\nThis chapter looks at the key decisions that a security-testing team needs to make while \nplanning their project, such as agreeing on the scope of the testing effort, assessing the \nrisks (and mitigating contingencies) that the project may face, spelling out any rules of \nengagement (terms of reference) for interacting with a production environment, and \nspecifying which configuration management practices to use. Failing to acknowledge any \none of these considerations could have potentially dire consequences to the success of the \ntesting effort and should therefore be addressed as early as possible in the project. Black \n(2002), Craig et al. (2002), Gerrard et al. (2002), Kaner et al. (1999, 2001), the Ideahamster \nOrganization (www.ideahamster.org), and the Rational Unified Process (www.rational.com) \nprovide additional information on planning a testing project. \nRequirements \nA common practice among testing teams charged with evaluating how closely a system will \nmeet its user's (or owner's) expectations is to design a set of tests that confirm whether or \nnot all of the features explicitly documented in a system's requirements specification have \nbeen implemented correctly. In other words, the objectives of the testing effort are \ndependent upon on the system's stated requirements. For example, if the system is \nrequired to do 10 things and the testing team runs a series of tests that confirm that the \nsystem can indeed accurately perform all 10 desired tasks, then the system will typically be \nconsidered to have passed. Unfortunately, as the following sections seek to illustrate, this \nprocess is nowhere near as simple a task to accomplish as the previous statement would \nlead you to believe. \nClarifying Requirements \nIdeally, a system's requirements should be clearly and explicitly documented in order for the \nsystem to be evaluated to determine how closely it matches the expectations of the \nsystem's users and owners (as enshrined by the requirements documentation). \nUnfortunately, a testing team rarely inherits a comprehensive, unambiguous set of \nrequirements; often the requirements team-or their surrogates, who in some instances may \nend up being the testing team-ends up having to clarify these requirements before the \ntesting effort can be completed (or in some cases started). The following are just a few \nsituations that may necessitate revisiting the system's requirements: \nƒ \nImplied requirements. Sometimes requirements are so obvious (to the \nrequirements author) that the documentation of these requirements is deemed to be a \nwaste of time. For example, it's rare to see a requirement such as \"no spelling \nmistakes are to be permitted in the intruder response manual\" explicitly documented, \nbut at the same time, few organizations would regard spelling mistakes as desirable. \nƒ \nIncomplete or ambiguous requirements. A requirement that states, \"all the Web \nservers should have service pack 3 installed,\" is ambiguous. It does not make it clear \nwhether the service pack relates to the operating system or to the Web service \n(potentially different products) or which specific brand of system software is required. \n" }, { "page_number": 19, "text": " \n12 \nƒ \nNonspecific requirements. Specifying \"strong passwords must be used\" may \nsound like a good requirement, but from a testing perspective, what exactly is a \nstrong password: a password longer than 7 characters or one longer than 10? 
To be \nconsidered strong, can the password use all uppercase or all lowercase characters, \nor must a mixture of both types of letters be used? \nƒ \nGlobal requirements. Faced with the daunting task of specifying everything that a \nsystem should not do, some requirements authors resort to all-encompassing \nstatements like the following: \"The Web site must be secure.\" Although everyone \nwould agree that this is a good thing, the reality is that the only way the Web site \ncould be made utterly secure is to disconnect it from any other network (including the \nInternet) and lock it behind a sealed door in a room to which no one has access. \nUndoubtedly, this is not what the author of the requirement had in mind. \nFailing to ensure that a system's requirements are verifiable before the construction of the \nsystem is started (and consequently open to interpretation) is one of the leading reasons \nwhy systems need to be reworked or, worse still, a system enters service only for its users \n(or owners) to realize in production that the system is not actually doing what they need it to \ndo. An organization would therefore be well advised to involve in the requirements \ngathering process the individuals who will be charged with verifying the system's capability. \nThese individuals (ideally professional testers) may then review any documented \nrequirement to ensure that it has been specified in such a way that it can be easily and \nimpartially tested. \nMore clearly defined requirements should not only result in less rework on the part of \ndevelopment, but also speed the testing effort, as specific tests not only can be designed \nearlier, but their results are likely to require much less interpretation (debate). Barman \n(2001), Peltier (2001), and Wood (2001) provide additional information on writing security \nrequirements. \nSecurity Policies \nDocumenting requirements that are not ambiguous, incomplete, nonquantifiable, or even \ncontradictory is not a trivial task, but even with clearly defined requirements, a security-\ntesting team faces an additional challenge. Security testing is primarily concerned with \ntesting that a system does not do something (negative testing)-as opposed to confirming \nthat the system can do something (positive testing). Unfortunately, the list of things that a \nsystem (or someone) should not do is potentially infinite in comparison to a finite set of \nthings that a system should do (as depicted in Figure 2.1). Therefore, security requirements \n(often referred to as security policies) are by their very nature extremely hard to test, \nbecause the number of things a system should not do far exceeds the things it should do. \n" }, { "page_number": 20, "text": " \n13 \n \nFigure 2.1: System capabilities. \nWhen testing security requirements, a tester is likely to have to focus on deciding what \nnegative tests should be performed to ascertain if the system is capable of doing something \nit should not do (capabilities that are rarely well documented-if at all). Since the number of \ntests needed to prove that a system does not do what it isn't supposed to is potentially \nenormous, and the testing effort is not, it is critically important that the security-testing team \nnot only clarify any vague requirements, but also conduct a risk analysis (the subject of \nChapter 10) to determine what subset of the limitless number of negative tests will be \nperformed by the testing effort. 
They should then document exactly what (positive and \nnegative tests) will and will not be covered and subsequently ensure that the sponsor of the \neffort approves of this proposed scope. \n \nThe Anatomy of a Test Plan \nOnce a set of requirements has been agreed upon (and where needed, clarified), thereby \nproviding the testing team with a solid foundation for them to build upon, the testing team \ncan then focus its attention on the test-planning decisions that the team will have to make \nbefore selecting and designing the tests that they intend to execute. These decisions and \nthe rationale for making them are typically recorded in a document referred to as a test \nplan. \nA test plan could be structured according to an industry standard such as the Institute of \nElectrical and Electronics Engineers (IEEE) Standard for Software Documentation-Std. 829, \nbased on an internal template, or even be pioneering in its layout. What's more important \nthan its specific layout is the process that building a test plan forces the testing team to go \nthrough. Put simply, filling in the blank spaces under the various section headings of the \ntest plan should generate constructive debates within the testing team and with other \ninterested parties. As a result, issues can be brought to the surface early before they \nbecome more costly to fix (measured in terms of additional resources, delayed release, or \nsystem quality). For some testing projects, the layout of the test plan is extremely important. \nFor example, a regulatory agency, insurance underwriter, or mandated corporate policy \nmay require that the test plan be structured in a specific way. For those testing teams that \nare not required to use a particular layout, using an existing organizational template or \nindustry standard (such as the Rational's Unified Process [RUP]) may foster better \n" }, { "page_number": 21, "text": " \n14 \ninterproject communication. At the same time, the testing team should be permitted to \ncustomize the template or standard to reflect the needs of their specific project and not feel \nobliged to generate superfluous documentation purely because it's suggested in a specific \ntemplate or standard. Craig et al. (2002) and Kaner et al. (2002) both provide additional \nguidance on customizing a test plan to better fit the unique needs of each testing project. \nA test plan can be as large as several hundred pages in length or as simple as a single \npiece of paper (such as the one-page test plan described in Nguyen [2000]). A voluminous \ntest plan can be a double-edged sword. A copiously documented test plan may contain a \ncomprehensive analysis of the system to be tested and be extremely helpful to the testing \nteam in the later stages of the project, but it could also represent the proverbial millstone \nthat is hung around the resource neck of the testing team, consuming ever-increasing \namounts of effort to keep up-to-date with the latest project developments or risk becoming \nobsolete. Contractual and regulatory obligations aside, the testing team should decide at \nwhat level of detail a test plan ceases to be an aid and starts to become a net drag on the \nproject's productivity. 
\nThe testing team should be willing and able (contractual obligations aside) to modify the \ntest plan in light of newly discovered information (such as the test results of some of the \nearlier scheduled tests), allowing the testing effort to hone in on the areas of the system \nthat this newly discovered information indicates needs more testing. This is especially true if \nthe testing effort is to adopt an iterative approach to testing, where the later iterations won't \nbe planned in any great detail until the results of the earlier iterations are known. \nAs previously mentioned in this section, the content (meat) of a test plan is far more \nimportant that the structure (skeleton) that this information is hung on. The testing team \nshould therefore always consider adapting their test plan(s) to meet the specific needs of \neach project. For example, before developing their initial test plan outline, the testing team \nmay wish to review the test plan templates or checklists described by Kaner et al. (2002), \nNguyen (2000), Perry (2000), Stottlemyer (2001), the Open Source Security Testing \nMethodology (www.osstmm.org), IEEE Std. 829 (www.standards.ieee.org), and the \nRational unified process (www.rational.com). The testing team may then select and \ncustomize an existing template, or embark on constructing a brand-new structure and \nthereby produce the test plan that will best fit the unique needs of the project in hand. \nOne of the most widely referenced software-testing documentation standards to date is that \nof the IEEE Std. 829 (this standard can be downloaded for a fee from \nwww.standards.ieee.org). For this reason, this chapter will discuss the content of a security \ntest plan, in the context of an adapted version of the IEEE Std. 829-1998 (the 1998 version \nis a revision of the original 1983 standard). \n \nIEEE STD. 829-1998 SECTION HEADINGS \nFor reference purposes, the sections that the IEEE 829-1998 standard recommends have \nbeen listed below: \na. Test plan identifier \nb. Introduction \nc. Test items \nd. Features to be tested \ne. Features not to be tested \nf. Approach \ng. Item pass/fail criteria \nh. Suspension criteria and resumption requirements \ni. Test deliverables \n" }, { "page_number": 22, "text": " \n15 \nj. Testing tasks \nk. Environmental needs \nl. Responsibilities \nm. Staffing and training needs \nn. Schedule \no. Risks and contingencies \np. Approvals \nTest Plan Identifier \nEach test plan and, more importantly, each version of a test plan should be assigned an \nidentifier that is unique within the organization. Assuming the organization already has a \ndocumentation configuration management process (manual or automated) in place, the \nmethod for determining the ID should already have been determined. If such a process has \nyet to be implemented, then it may pay to spend a little time trying to improve this situation \nbefore generating additional documentation (configuration management is discussed in \nmore detail later in this chapter in the section Configuration Management). \nIntroduction \nGiven that test-planning documentation is not normally considered exciting reading, this \nsection may be the only part of the plan that many of the intended readers of the plan \nactually read. 
If this is likely to be the case, then this section may need to be written in an \nexecutive summary style, providing the casual reader with a clear and concise \nunderstanding of the exact goal of this project and how the testing team intends to meet \nthat goal. Depending upon the anticipated audience, it may be necessary to explain basic \nconcepts such as why security testing is needed or highlight significant items of information \nburied in later sections of the document, such as under whose authority this testing effort is \nbeing initiated. The key consideration when writing this section is to anticipate what the \ntargeted reader wants (and needs) to know. \nProject Scope \nAssuming that a high-level description of the project's testing objectives (or goals) was \nexplicitly defined in the test plan's introduction, this section can be used to restate those \nobjectives in much more detail. For example, the introduction may have stated that security \ntesting will be performed on the wiley.com Web site, whereas in this section, the specific \nhardware and software items that make up the wiley.com Web site may be listed. For \nsmaller Web sites, the difference may be trivial, but for larger sites that have been \nintegrated into an organization's existing enterprise network or that share assets with other \nWeb sites or organizations, the exact edge of the testing project's scope may not be \nobvious and should therefore be documented. Chapter 3 describes some of the techniques \nthat can be used to build an inventory of the devices that need to be tested. These \ntechniques can also precisely define the scope of the testing covered by this test plan. \nIt is often a good idea to list the items that will not be tested by the activities covered by this \ntest plan. This could be because the items will be tested under the auspices of another test \nplan (either planned or previously executed), sufficient resources were unavailable to test \nevery item, or other reasons. Whatever the rationale used to justify a particular item's \nexclusion from a test plan, the justification should be clearly documented as this section is \nlikely to be heavily scrutinized in the event that a future security failure occurs with an item \nthat was for some reason excluded from the testing effort. Perhaps because of this \nconcern, the \"out of scope\" section of a test plan may generate more debate with senior \nmanagement than the \"in scope\" section of the plan. \n" }, { "page_number": 23, "text": " \n16 \nChange Control Process \nThe scope of a testing effort is often defined very early in the testing project, often when \ncomparatively little is known about the robustness and complexity of the system to be \ntested. Because changing the scope of a project often results in project delays and budget \noverruns, many teams attempt to freeze the scope of the project. However, if during the \ncourse of the testing effort, a situation arises that potentially warrants a change in the \nproject's scope, then many organizations will decide whether or not to accommodate this \nchange based on the recommendation of a change control board (CCB). For example, \ndiscovering halfway through the testing effort that a mirror Web site was planned to go into \nservice next month (but had not yet been built) would raise the question \"who is going to \ntest the mirror site?\" and consequently result in a change request being submitted to the \nCCB. 
\nWhen applying a CCB-like process to changes in the scope of the security-testing effort in \norder to provide better project control, the members of a security-testing CCB should bear \nin mind that unlike the typical end user, an attacker is not bound by a project's scope or the \ndecisions of a CCB. This requires them to perhaps be a little more flexible than they would \nnormally be when faced with a nonsecurity orientation change request. After all, the testing \nproject will most likely be considered a failure if an intruder is able to compromise a system \nusing a route that had not been tested, just because it had been deemed to have been \nconsidered out of scope by the CCB. \nA variation of the CCB change control process implementation is to break the projects up \ninto small increments so that modifying the scope for the increment currently being tested \nbecomes unnecessary because the change request can be included in the next scheduled \nincrement. The role of the CCB is effectively performed by the group responsible for \ndetermining the content of future increments. \n \nTHE ROLE OF THE CCB \nThe CCB (also sometimes known as a configuration control board) is the group of \nindividuals responsible for evaluating and deciding whether or not a requested change \nshould be permitted and subsequently ensuring that any approved changes are \nimplemented appropriately. \nIn some organizations, the CCB may be made up of a group of people drawn from different \nproject roles, such as the product manager, project sponsor, system owner, internal \nsecurity testers, local area network (LAN) administrators, and external consultants, and \nhave elaborate approval processes. In other organizations, the role of the CCB may be \nperformed by a single individual such as the project leader who simply gives a nod to the \nrequest. Regardless of who performs this role, the authority to change the scope of the \ntesting effort should be documented in the test plan. \nFeatures to Be Tested \nA system's security is only as strong as its weakest link. Although this may be an obvious \nstatement, it's surprising how frequently a security-testing effort is directed to only test some \nand not all of the following features of a Web site: \nƒ \nNetwork security (covered in Chapter 3) \nƒ \nSystem software security (covered in Chapter 4) \nƒ \nClient-side application security (covered in Chapter 5) \nƒ \nClient-side to server-side application communication security (covered in Chapter 5) \nƒ \nServer-side application security (covered in Chapter 6) \n" }, { "page_number": 24, "text": " \n17 \nƒ \nSocial engineering (covered in Chapter 7) \nƒ \nDumpster diving (covered in Chapter 7) \nƒ \nInside accomplices (covered in Chapter 7) \nƒ \nPhysical security (covered in Chapter 7) \nƒ \nMother nature (covered in Chapter 7) \nƒ \nSabotage (covered in Chapter 7) \nƒ \nIntruder confusion (covered in Chapter 8) \nƒ \nIntrusion detection (covered in Chapter 8) \nƒ \nIntrusion response (covered in Chapter 8) \nBefore embarking on an extended research and planning phase that encompasses every \nfeature of security testing, the security-testing team should take a reality check. Just how \nlikely is it that they have the sufficient time and funding to test everything? Most likely the \nsecurity-testing team will not have all the resources they would like, in which case choices \nmust be made to decide which areas of the system will be drilled and which areas will \nreceive comparatively light testing. 
Ideally, this selection process should be systematic and \nimpartial in nature. A common way of achieving this is through the use of a risk analysis \n(the subject of Chapter 10), the outcome of which should be a set of candidate tests that \nhave been prioritized so that the tests that are anticipated to provide the greatest benefit \nare scheduled first and the ones that provide more marginal assistance are executed last (if \nat all). \nFeatures Not to Be Tested \nIf the testing effort is to be spread across multiple test plans, there is a significant risk that \nsome tests may drop through the proverbial cracks in the floor, because the respective \nscopes of the test plans do not dovetail together perfectly. A potentially much more \ndangerous situation is the scenario of an entire feature of the system going completely \nuntested because everyone in the organization thought someone else was responsible for \ntesting this facet of the system. \nTherefore, it is a good practice to not only document what items will be tested by a specific \ntest plan, but also what features of these items will be tested and what features will fall \noutside the scope of this test plan, thereby making it explicitly clear what is and is not \ncovered by the scope of an individual test plan. \nApproach \nThis section of the test plan is normally used to describe the strategy that will be used by \nthe testing team to meet the test objectives that have been previously defined. It's not \nnecessary to get into the nitty-gritty of every test strategy decision, but the major decisions \nsuch as what levels of testing (described later in this section) will be performed and when \n(or how frequently) in the system's life cycle the testing will be performed should be \ndetermined. \nLevels of Testing \nMany security tests can be conducted without having to recreate an entire replica of the \nsystem under test. The consequence of this mutual dependency (or lack of) on other \ncomponents being completed impacts when and how some tests can be run. \nOne strategy for grouping tests into multiple testing phases (or levels) is to divide up the \ntests based on how complete the system must be before the test can be run. Tests that can \nbe executed on a single component of the system are typically referred to as unit- or \nmodule-level tests, tests that are designed to test the communication between two or more \n" }, { "page_number": 25, "text": " \n18 \ncomponents of the system are often referred to as integration-, string- or link-level tests, \nand finally those that would benefit from being executed in a full replica of the system are \noften called system-level tests. For example, checking that a server has had the latest \nsecurity patch applied to its operating system can be performed in isolation and can be \nconsidered a unit-level test. Testing for the potential existence of a buffer overflow occurring \nin any of the server-side components of a Web application (possibly as a result of a \nmalicious user entering an abnormally large string via the Web application's front-end) \nwould be considered an integration- or system-level test depending upon how much of the \nsystem needed to be in place for the test to be executed and for the testing team to have a \nhigh degree of confidence in the ensuing test results. 
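To make the buffer overflow example concrete, the following is a minimal sketch (in Python) of the kind of integration- or system-level probe just described: it submits an abnormally large string to a single form field and records how the server responds. The target URL and field name are hypothetical placeholders, and such a probe should of course only be pointed at a system the team is authorized to test.

# A minimal sketch of an oversized-input probe; the URL and field name are
# hypothetical and should be replaced with values from the system under test.
import urllib.error
import urllib.parse
import urllib.request

TARGET_URL = "http://test.wiley.example/search"   # hypothetical test server
OVERSIZED_VALUE = "A" * 100_000                   # abnormally large input string

def probe_oversized_input(url: str, field: str) -> int:
    """Submit one oversized form value and return the HTTP status code received."""
    data = urllib.parse.urlencode({field: OVERSIZED_VALUE}).encode()
    try:
        with urllib.request.urlopen(urllib.request.Request(url, data=data), timeout=30) as response:
            return response.status
    except urllib.error.HTTPError as error:
        return error.code    # 4xx/5xx responses are still informative results

if __name__ == "__main__":
    status = probe_oversized_input(TARGET_URL, "query")
    print(f"Server responded with HTTP {status}")

A controlled rejection (an HTTP 400 response, for instance) suggests the input is being validated, whereas a dropped connection or an HTTP 500 response is exactly the kind of event that would be written up as a test incident for further investigation.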
One of the advantages of unit-level testing is that it can be conducted much earlier in a system's development life cycle, since the testing is not dependent upon the completion or installation of any other component. Because the earlier a defect is detected, the easier (and therefore the more cheaply) it can be fixed, there is an obvious advantage to executing as many tests as possible at the unit level instead of postponing these tests until system-level testing is conducted, which, because of its inherent dependencies, typically must occur later in the development life cycle.

Unfortunately, many organizations do not conduct as many security tests at the unit level as they could. The reasons for this are many and vary from organization to organization. However, one recurring theme cited in nearly every organization where unit testing is underutilized is that the people who are best situated to conduct this level of testing are often unaware of what should be tested and how best to accomplish this task. Although the how is often resolved through education (instructor-led training, books, mentoring, and so on), the what can to a large extent be addressed by documenting the security tests that need to be performed in a unit-level checklist or, more formally, in a unit-level test plan, a step that is particularly important if the people who will be conducting these unit-level tests are not members of the team responsible for identifying all of the security tests that need to be performed.

Dividing tests up into phases based upon component dependencies is just one way a testing team may strategize its testing effort. Alternative or complementary strategies include breaking the testing objectives up into small increments, basing the priority and type of tests in later increments on information gleaned from running earlier tests (a heuristic or exploratory approach), and grouping the tests based on who would actually do the testing, whether it be developers, outsourced testing firms, or end users. The large variety of possible testing strategies in part explains the proliferation of testing level names that are in practice today, such as unit, integration, build, alpha, beta, system, acceptance, staging, and post-implementation, to name but a few. Black (2003), Craig et al. (2002), Kaner et al. (2001), Gerrard et al. (2002), and Perry (2000) provide additional information on the various alternative testing strategies that could be employed by a testing team.

For some projects, it may make more sense to combine two (or more) levels of testing into a single test plan. The situation that usually prompts this test plan cohabitation is when the testing levels have a great deal in common. For example, on one project, the set of unit-level tests might be grouped with the set of integration-level tests because the people who will be conducting the tests are the same, both sets of tests are scheduled to occur at approximately the same time, or the testing environments are almost identical.

Relying on only a single level of testing to capture all of a system's security defects is likely to be less efficient than segregating the tests into two (or more) levels; it may also quite possibly increase the probability that security holes will be missed. This is one of the reasons why many organizations choose to utilize two or more levels of testing.
\n" }, { "page_number": 26, "text": " \n19 \nWhen to Test \nFor many in the software industry, testing is the activity that happens somewhere in the \nsoftware development life cycle between coding and going live. Security tests are often \nsome of the very last tests to be executed. This view might be an accurate observation of \nyesterday's system development, when development cycles were measured in years and \nthe tasks that the system was being developed to perform were well understood and rarely \nchanged, but it most certainly should not be the case today. \nIn today's world of ever-changing business requirements, rapid application development, \nand extreme programming, testing should occur throughout the software development life \ncycle (SDLC) rather than as a single-step activity that occurs toward the end of the process, \nwhen all too often too little time (or budget) is left to adequately test the product or fix a \nmajor flaw in the system. \nWhen to Retest \nAlthough many foresighted project managers have scheduled testing activities to occur \nearly in the development cycle, it is less likely that as much thought will be given to planning \nthe continuous testing that will be needed once the system goes live. Even if the functional \nrequirements of a system remain unchanged, a system that was deemed secure last week \nmay become insecure next week. The following are just a few examples of why this could \nhappen: \nƒ \nA previously unknown exploit in an operating system used by the system becomes \nknown to the attacker community. \nƒ \nAdditional devices (firewalls, servers, routers, and so on) are added to the system to \nenable it to meet higher usage demands. Unfortunately, these newly added devices \nmay not have been configured in exactly the same way as the existing devices. \nƒ \nA service pack installed to patch a recently discovered security hole also resets other \nconfiguration settings back to their default values. \nƒ \nDue to the large number of false alarms, the on-duty security personnel have \nbecome desensitized to intruder alerts and subsequently do not respond to any \nautomated security warnings. \nƒ \nUser-defined passwords that expire after a period of time and were originally long \nand cryptic have become short, easy to remember, and recycled. \nƒ \nLog files have grown to the point that no free disk space is left, thereby inhibiting the \ncapability of an intruder detection system to detect an attack. \nSecurity testing should not be regarded as a one-time event, but rather as a recurring \nactivity that will be ongoing as long as the system remains active. The frequency with which \nthe retests occur will to a large part be driven by the availability of resources to conduct the \ntests (cost) and the degree to which the system changes over time. Some events may, \nhowever, warrant an immediate (if limited in scope) retest. For example, the organization \nmay decide to upgrade the operating system used by a number of the servers on the Web \nsite, or a firewall vendor releases a \"hot fix\" for its product. 
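Because much of this decay can be detected mechanically, even a modest amount of automation can keep a recurring retest affordable. The following minimal sketch, run on whatever schedule the organization can resource (weekly, for instance), checks two of the decay scenarios listed above: the log partition filling up and a previously tested configuration file drifting from its baseline. The paths and the baseline hash are hypothetical placeholders, not values taken from any particular system.

# A minimal sketch of a recurring retest; the paths and baseline hash below are
# hypothetical placeholders for values recorded during the original testing effort.
import hashlib
import shutil

LOG_PATH = "/var/log"                        # partition the intrusion logs are written to
CONFIG_FILE = "/etc/firewall/rules.conf"     # configuration file that was previously tested
BASELINE_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder

def check_log_space(path: str, minimum_free_mb: int = 500) -> bool:
    """Flag the log partition if free space has fallen below the agreed threshold."""
    free_mb = shutil.disk_usage(path).free // (1024 * 1024)
    print(f"{path}: {free_mb} MB free")
    return free_mb >= minimum_free_mb

def check_config_drift(path: str, baseline_sha256: str) -> bool:
    """Flag the configuration file if it no longer matches its tested baseline."""
    with open(path, "rb") as handle:
        current = hashlib.sha256(handle.read()).hexdigest()
    print(f"{path}: {'matches' if current == baseline_sha256 else 'DIFFERS from'} baseline")
    return current == baseline_sha256

if __name__ == "__main__":
    results = [check_log_space(LOG_PATH), check_config_drift(CONFIG_FILE, BASELINE_SHA256)]
    print("Retest passed" if all(results) else "Retest found issues; raise an incident report")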
What to Retest
As a starting point, the testing team should consider each test that was utilized during the system's initial testing effort as a potential candidate for inclusion in a future set of tests that will be reexecuted on a regular basis after the system goes into production (sometimes referred to as a postdeployment regression test set) to ensure that vulnerabilities that were supposedly fixed (or never existed) do not subsequently appear.

For tests that have been automated, there may be very little overhead in keeping these tests as part of a regression test set, especially if the automated test script is being maintained by another organization at no additional cost, which may well be the case for a security assessment tool (such as those listed in Table 9.4) that an organization has a maintenance agreement for, or that is available free of charge.

THE REGRESSION TEST SET
Regression tests are usually intended to be executed many times and are designed to confirm that previously identified defects have been fixed and stay fixed, that functionality that should not have changed has indeed remained unaffected by any other changes to the system, or both.

With regard to manual tests, the determination as to whether or not to repeat a test will to a large extent depend upon how problems previously detected by the test were fixed (and consequently how likely it is that the problem will reappear). For example, if the testing team had originally found that weak passwords were being used and the solution was to send an email telling everyone to clean up their act, then chances are that within a couple of userID/password cycles, weak (easy to remember) passwords will again start to show up, requiring the testing team to remain ever vigilant for this potential vulnerability. If, on the other hand, a single user sign-on system was implemented with tough password requirements, then the same issue is not likely to occur again and therefore may not warrant the original tests being included in future regression tests.

Pass/Fail Criteria
A standard testing practice is to document the expected or desired results of an individual test case prior to actually executing the test. As a result, a conscious (or subconscious) temptation to modify the pass criteria for a test based on its now known result is avoided. Unfortunately, determining whether security is good enough is a very subjective measure, one that is best left to the project's sponsor (or the sponsor's surrogate) rather than the testing team. Making a system more secure all too often means making the system perform more slowly, or making it less user-friendly, harder to maintain, or more costly to implement. Therefore, unlike traditional functional requirements, where the theoretical goal is absolute functional correctness, an organization may not want its system to be as secure as it could be because of the detrimental impact that such a secure implementation would have on another aspect of the system. For example, suppose a Web site requires prospective new clients to go through an elaborate client authentication process the first time they register with the Web site. (It might even involve mailing user IDs and first-time passwords separately through the postal service.)
Such a requirement might reduce the number of instances of fraud, but it might also have a drastic business impact on the number of new clients willing to go through this process, especially if a competitor's Web site offers a far more user-friendly (but potentially less secure) process. The net result is that the right amount of security for each system is subjective and will vary from system to system and from organization to organization.

Instead of trying to make this subjective call, the testing team might be better advised to concentrate on how to present the findings of their testing effort to the individual(s) responsible for making this decision. For example, presenting the commissioner of a security assessment with the raw output of an automated security assessment tool that had performed several hundred checks and found a dozen irregularities is probably not as helpful as a handcrafted report that lists the security vulnerabilities detected (or suspected) and their potential consequences if the system goes into service (or remains in service) as is.

If an organization's testing methodology mandates that a pass/fail criterion be specified for a security-testing effort, it may be more appropriate for the test plan to use a criterion such as the following: "The IS Director will retain the decision as to whether the total and/or criticality of any or all detected vulnerabilities warrant the rework and/or retesting of the Web site." This is more useful than using a dubious pass criterion such as the following: "95 percent of the test cases must pass before the system can be deemed to have passed testing."

Suspension Criteria and Resumption Requirements
This section of the test plan may be used to identify the circumstances under which it would be prudent to suspend the entire testing effort (or just portions of it) and what requirements must subsequently be met in order to reinitiate the suspended activities. For example, running a penetration test would not be advisable just before the operating systems on the majority of the Web site's servers are scheduled to be upgraded with the latest service pack. Instead, this testing would be more effective if it were suspended until after the servers have been upgraded and reconfigured.

Test Deliverables
Each of the deliverables that the testing team generates as a result of the security-testing effort should be documented in the test plan. The variety and content of these deliverables will vary from project to project and to a large extent depend on whether the documents themselves are a by-product or an end product of the testing effort.

As part of its contractual obligations, a company specializing in security testing may need to provide a client with detailed accounts of all the penetration tests that were attempted (regardless of their success) against the client's Web site. For example, the specific layout of the test log may have been specified as part of the statement of work that the testing company proposed to the client while bidding for the job. In this case, the test log is an end product and will need to be diligently (and time-consumingly) populated by the penetration-testing team, or they risk not being paid in full for their work.
In comparison, a team of in-house testers trying to find a vulnerability in a Web application's user login procedure may use a screen-capture utility to record their test execution. In the event that a suspected defect is found, the tool could be used to play back the sequence of events that led up to the point of failure, thereby assisting the tester with filling out an incident or defect report. Once the report has been completed, the test execution recording could be attached to the defect (providing further assistance to the employee assigned to fix this defect) or simply discarded along with all the recordings of test executions that didn't find anything unusual. In this case, the test log was produced as a by-product of the testing effort and improved the project's productivity.

Before a testing team commits to producing any deliverable, it should consider which deliverables will assist it in managing and executing the testing effort and which ones are likely to increase its documentation burden. It's not unheard of for testing teams who need to comply with some contractual documentary obligation to write up test designs and creatively populate test logs well after test execution has been completed.

The following sections provide brief overviews of some of the more common deliverables created by testing teams. Their relationships are depicted in Figure 2.2.

Figure 2.2: Testing deliverables.

Test Log
The test log is intended to record the events that occur during test execution in chronological order. The log can take the form of shorthand notes on the back of an envelope, a central database repository manually populated via a graphical user interface (GUI) front end, or a bitmap screen-capture utility unobtrusively running in the background taking screen snapshots every few seconds. Appendix C contains a sample layout for a test log.

Test Incident Report
An incident is something that happens during the course of test execution that merits further investigation. The incident may be an observable symptom of a defect in the item being tested, unexpected but acceptable behavior, a defect in the test itself, or an event that is so trivial in nature or impossible to recreate that its exact cause is never diagnosed.

The test incident report is a critical project communication document because the majority of incidents are likely to be investigated by someone other than the person who initially observed (and presumably wrote up) the incident report. For this reason, it is important to use a clear and consistent report format and to reach an agreement between representatives of those who are likely to encounter and report the incidents and those who are likely to be charged with investigating them. Appendix C contains an example layout of a test incident report.

Although the exact fields used on the report may vary from project to project, depending on local needs, conceptually the report needs to do one thing: accurately document the incident that has just occurred in such a way that someone else is able to understand what happened, thereby enabling the reader to thoroughly investigate the incident (typically by trying to reproduce it) and determine the exact cause of the event. Craig et al. (2002) and Kaner et al. (1999) provide additional information on the content and use of test incident reports.
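To make the idea of a consistent report format concrete, the following minimal sketch shows an incident report captured as a structured record, so that the investigator sees exactly the same fields the reporter filled in. The field names and example values are illustrative assumptions only; in practice the layout suggested in Appendix C, or whatever format the project agrees on, would govern.

# A minimal sketch of a structured test incident report; all field names and
# example values are illustrative, not a prescribed layout.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class TestIncidentReport:
    incident_id: str
    reported_by: str
    test_case_id: str
    summary: str                      # what was observed, in one or two sentences
    steps_to_reproduce: List[str]     # the exact sequence of events that led to the incident
    expected_result: str
    actual_result: str
    severity: str = "unclassified"    # assigned once the incident has been investigated
    reported_at: datetime = field(default_factory=datetime.now)

# Example usage with hypothetical values.
report = TestIncidentReport(
    incident_id="INC-0042",
    reported_by="in-house tester",
    test_case_id="LOGIN-017",
    summary="Login form accepted a 10,000-character password without rejecting it",
    steps_to_reproduce=["Browse to /login", "Paste oversized password", "Submit the form"],
    expected_result="Input rejected with a controlled error message",
    actual_result="HTTP 500 returned; stack trace visible in the response body",
)
print(report)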
\nDefect-Tracking Reports \nA defect differs from an incident in that a defect is an actual flaw (or bug) in the system, \nwhereas an incident is just an indicator that a defect may exist. At the moment an incident \nis initially recorded, it is typically not clear whether this incident is the result of a defect or \nsome other cause (such as a human error in the testing). Therefore, it's a common practice \nto include incident reports that have not yet been investigated along with identified defects \n" }, { "page_number": 30, "text": " \n23 \nin a single defect-tracking report. In effect, a guilty-until-proven-innocent mentality is applied \nto the incidents. \nThe question of who should be assigned ownership of a defect (who is responsible for \nmaking sure it is resolved) may be a politically charged issue. Often the testing team has \nthe task of acting as the custodian of all known incident and defect reports. To make \nmanaging these reports easier, most testing teams utilize an automated tool such as one of \nthose listed in Table 2.1. \n \nTable 2.1: Sample Defect-Tracking Tools \nNAME \nASSOCIATED WEB SITE \nBug Cracker \nwww.fesoft.com \nBugbase \nwww.threerock.com \nBugCollector \nwww.nesbitt.com \nBuggy \nwww.novosys.de \nBUGtrack \nwww.skyeytech.com \nBugLink \nwww.pandawave.com \nBugzilla \nwww.mozilla.org \nClearQuest \nwww.rational.com \nD-Tracker \nwww.empirix.com \nElementool \nwww.elementool.com \nPT BugTracker \nwww.avensoft.com \nRazor \nwww.visible.com \nSWBTracker \nwww.softwarewithbrains.com \nTeam Remote Debugger \nwww.remotedebugger.com \nTeam Tracker \nwww.hstech.com.au \nTestDirector \nwww.mercuryinteractive.com \nTestTrack Pro \nwww.seapine.com \nVisual Intercept \nwww.elsitech.com \nZeroDefect \nwww.prostyle.com \nUsing a commercial defect-tracking tool (such as one of the tools listed in Table 2.1) or an \nin-house-developed tool typically enables the testing team to automatically produce all sorts \nof defect-tracking reports. Examples include project status reports showing the status of \nevery reported incident/defect; progress reports showing the number of defects that have \nbeen found, newly assigned, or fixed since the last progress report was produced; agendas \nfor the next project meeting where defect-fixing priorities will be assessed; and many \ninteresting defect statistics. \nJust because a tool can produce a particular report does not necessarily mean that the \nreport will be useful to distribute (via paper or email). Too many reports produced too \n" }, { "page_number": 31, "text": " \n24 \nfrequently can often generate so much paper that the reports that are truly useful get lost in \nthe paper shredder. Therefore, a testing team should consider what reports are actually \ngoing to be useful to the team itself and/or useful methods of communication to individuals \noutside of the testing team, and then document in this section of the test plan which reports \nthey initially intend to produce. If the needs of the project change midway through the \ntesting effort, requiring new reports, modifications to existing ones, or the retirement of \nunneeded ones, then the test plan should be updated to reflect the use of this new set of \nreports. \nMetrics \nSome defect metrics are so easy to collect that it's almost impossible to avoid publishing \nthem. 
For example, metrics such as the number of new incidents found this week, the mean number of defects found per tested Web page, or the number of defects found per testing hour are easily gathered. The problem with statistics is that they can sometimes cause more problems than they solve. For instance, if 15 new bugs were found this week, 25 last week, and 40 the previous week, would senior management then conclude, based on these statistics, that the system being tested was nearly ready for release? The reality could be quite different. If test execution was prioritized so that the critical tests were run first, moderately important tests were run next, and the low-priority tests were run last, then this additional information would reveal that the unimportant parts of the system work relatively well compared to the system's critical components. This situation is hardly as desirable as might have been interpreted from the numbers at first glance.

Before churning out metric after metric just because a tool produces it or because it just seems like the right thing to do, the testing team should first consider the value of these metrics to this project or to future projects. Moller et al. (1993) cite six general uses for software metrics: (1) goal setting, (2) improving quality, (3) improving productivity, (4) project planning, (5) managing, or (6) improving customer confidence. More specific goals may include identifying training needs, measuring test effectiveness, or pinpointing particularly error-prone areas of the system. If a proposed metric is not a measure that can be directly used to support one or more of these uses, then it runs the risk of being irrelevant or, worse still, misinterpreted. Black (2002, 2003), Craig et al. (2002), and Kan (1995) provide additional information on collecting and reporting software-testing metrics.

This section of the test plan can be used to document what metrics will be collected and (optionally) published during the testing effort. Note that some metrics associated with process improvement may not be analyzed until after this specific project has been completed. The results are then used to improve future projects.

Test Summary Report
The test summary report, also sometimes known as a test analysis report, enables the testing team to summarize all of its findings. The report typically contains information such as the following:
- A summary of the testing activities that actually took place. This may vary from the originally planned activities due to changes such as a reduction (or expansion) of testing resources, an altered testing scope, or discoveries that were made during testing (this will be especially true if extensive heuristic or exploratory testing was utilized).
- A comprehensive list of all of the defects and limitations that were found (sometimes euphemistically referred to as a features list). The list may also include all the significant incidents that could still not be explained after investigation.
- High-level project control information, such as the number of hours and/or elapsed time expended on the testing effort, capital expenditure on the test environment, and any variance from the budget that was originally approved.
- Optionally, an assessment of the cumulative severity of all the known defects and possibly an estimate of the number and severity of the defects that may still be lurking in the system undetected.
- Finally, some test summary reports also include a recommendation as to whether or not the system is in a good enough state to be placed into (or remain in) production. Although the ultimate decision for approval should reside with the system's owner or surrogate, the testing team often has the most intimate knowledge of the strengths and weaknesses of the system. Therefore, the team is perhaps the best group to make an objective assessment of just how good, or bad, the system actually is.

The completion of the test summary report is often on a project plan's critical path. Therefore, the testing team may want to build this document in parallel with test execution rather than writing it at the end of the execution phase. Depending upon how long test execution is expected to take, it may be helpful to those charged with fixing the system's vulnerabilities to see beta versions of the report prior to its final publication. These beta versions often take the form of a weekly (or daily) status report, with the test summary report ultimately being the very last status report. Appendix C contains an example layout of a test summary report.

Environmental Needs
A test environment is a prerequisite if the security-testing team wants to be proactive and attempt to catch security defects before they are deployed in a production environment. In addition, tests can be devised and executed without worrying about whether or not executing the tests might inadvertently have an adverse effect on the system being tested, such as crashing a critical program. Indeed, some tests may be specifically designed to try to bring down the target system (a technique sometimes referred to as destructive testing). For example, a test that emulates a denial-of-service (DoS) attack would be much safer to evaluate in a controlled test environment than against a production system (even if, in theory, the production system had safeguards in place that should protect it against such an attack).

It would certainly be convenient if the testing team had a dedicated test lab that was an exact full-scale replica of the production environment, which they could use for testing. Unfortunately, usually as a result of budgetary constraints, the test environment is often not quite the same as the production environment it is meant to duplicate (in an extreme situation it could consist solely of an individual desktop PC). For example, instead of using four servers (as in the production environment) dedicated to running each of the following components (Web server, proxy server, application server, and database server), the test environment may consist of only one machine, which regrettably cannot be simultaneously configured four different ways. Even if a test environment can be created with an equivalent number of network devices, some of the devices used in the test lab may be cheap imitations of the products actually used in the production environment and may therefore behave slightly differently. For example, a $100 firewall might be used for a test instead of the $50,000 one used in production.
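One lightweight way to keep track of exactly where the replica falls short is to record the key attributes of both environments and compare them automatically, so that the gaps are visible rather than anecdotal. The sketch below is a minimal illustration; the attribute names and values are hypothetical.

# A minimal sketch comparing recorded test and production environment attributes;
# all names and values below are hypothetical examples.
TEST_ENV = {
    "firewall_model": "budget-fw-100",
    "web_servers": 1,
    "dedicated_database_server": False,
    "os_patch_level": "SP3",
}
PRODUCTION_ENV = {
    "firewall_model": "enterprise-fw-9000",
    "web_servers": 4,
    "dedicated_database_server": True,
    "os_patch_level": "SP3",
}

def environment_differences(test_env: dict, prod_env: dict) -> dict:
    """Return the attributes whose values differ between the two environments."""
    return {
        key: (test_env.get(key), prod_env.get(key))
        for key in sorted(set(test_env) | set(prod_env))
        if test_env.get(key) != prod_env.get(key)
    }

for attribute, (test_value, prod_value) in environment_differences(TEST_ENV, PRODUCTION_ENV).items():
    print(f"{attribute}: test={test_value!r} production={prod_value!r} -> candidate for retesting in production")

Each attribute that differs is a prompt to ask whether the tests that depend on it need to be repeated against the production system.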
If the test environment is not expected to be an exact replica of the production environment, consideration should be given to which tests will need to be rerun on the production system, as running them on the imperfect test environment without incident will not guarantee the same results for the production environment. A second consideration is that the test environment could be too perfect. For example, if the implementation process involves any steps that are prone to human error, then just because the proxy server in the test lab has been configured to implement every security policy correctly does not mean that the production version has also been implemented correctly.

In all probability, some critical site infrastructure security tests will need to be rerun on the production environment (such as checking the strength of system administrators' passwords or the correct implementation of a set of firewall rules). If the Web site being tested is brand-new, this extra step should not pose a problem because these tests can be run on the production environment prior to the site going live. For a Web site that has already gone live, the security-testing team must develop some rules of engagement (terms of reference) that specify when and how the site may be prodded and probed, especially if the site was previously undertested or not tested at all. These rules serve as a means of eliminating false intruder alarms, avoiding accidental service outages during peak site usage, and preventing legitimate intruder alarms from being inadvertently ignored (because it was thought that the alarm was triggered by the security-testing team and not a real intruder).

Configuration Management
Configuration management is the process of identifying (what), controlling (library management), tracking (who and when), and reporting (who needs to know) the components of a system at discrete points in time for the primary purpose of maintaining the integrity of the system. A good configuration management process is not so obtrusive, bureaucratic, or all-encompassing that it has a net negative effect on development or testing productivity; rather, it speeds such activities.

Developing, testing, or maintaining anything but the most simplistic of Web sites is virtually impossible without some kind of configuration management process. Such a process should therefore be a prerequisite for any significant testing effort and should consequently be addressed by the associated test plan. For example, a Webmaster may choose to install an operating system service pack midway through a penetration test, or a developer may directly perform a quick fix on the Web application in production. Both decisions can cause a great deal of confusion and consequently prove to be quite costly to straighten out.

Many organizations have become accustomed to using a configuration management tool to help them manage the ever-increasing number of software components that need to be combined in order to build a fully functional Web application. A typical source code promotion process would require all developers to check in and check out application source code from a central development library. At regular intervals, the configuration manager (the build manager or librarian) baselines the application and then builds and promotes the application into a system-testing environment for the testing team to evaluate.
\nIf all goes well, the application is finally promoted into the production environment. Note that \nsome organizations use an additional staging area between the system-testing and \nproduction environments, which is a particularly useful extra step if for any reason the \nsystem test environment is not an exact match to the production environment. Table 2.2 \nlists some sample configuration management tools that could be used to assist with this \nprocess. \n \nTable 2.2: Sample Configuration Management Tools \nTOOL \nASSOCIATED WEB SITE \nChangeMan \nwww.serena.com \nClearCase \nwww.rational.com \n" }, { "page_number": 34, "text": " \n27 \nTable 2.2: Sample Configuration Management Tools \nTOOL \nASSOCIATED WEB SITE \nCM Synergy \nwww.telelogic.com \nEndevor/Harvest \nwww.ca.com \nPVCS \nwww.merant.com \nRazor \nwww.visible.com \nSource Integrity \nwww.mks.com \nStarTeam \nwww.starbase.com \nTRUEchange \nwww.mccabe.com \nVisual SourceSafe \nwww.microsoft.com \nA second area of software that is a candidate for configuration management (potentially \nusing the same tools to manage the application's source code) would be the test scripts and \ntest data used to run automated tests (sometimes collectively referred to as testware). This \nis a situation that becomes increasingly important as the size and scope of any test sets \ngrow. \nIt is less common to see a similar configuration management process being applied to \nsystem software installation and configuration options. For example, the current set of \nservers may have been thoroughly tested to ensure that they are all configured correctly \nand have had all the relevant security patches applied, but when new servers are added to \nthe system to increase the system's capacity or existing servers are reformatted to fix \ncorrupted files, the new system software installations are not exactly the same as the \nconfiguration that was previously tested; it can be something as simple as two security \npatches being applied in a different order or the installer forgetting to uncheck the install \nsample files option during the install. \nRather than relying on a manual process to install system software, many organizations \nnow choose to implement a configuration management process for system software by \nusing some sort of disk replication tool. (Table 2.3 lists some sample software- and \nhardware-based disk replication tools.) The process works something like the following: a \nmaster image is first made of a thoroughly tested system software install. Then each time a \nnew installation is required (for a new machine or to replace a corrupted version), the \nreplication tool copies the master image onto the target machine, reducing the potential for \nhuman error. \n \nTable 2.3: Sample Disk Replication Tools \nTOOL \nASSOCIATED WEB SITE \nDrive2Drive \nwww.highergroundsoftware.com \nGhost \nwww.symantec.com \nImageCast \nwww.storagesoftsolutions.com \nImage MASSter/ImageCast \nwww.ics-iq.com \nLabExpert/RapiDeploy \nwww.altiris.com \n" }, { "page_number": 35, "text": " \n28 \nTable 2.3: Sample Disk Replication Tools \nTOOL \nASSOCIATED WEB SITE \nOmniClone \nwww.logicube.com \nPC Relocator \nwww.alohabob.com \nOne undesirable drawback of disk replication tools is their dependence on the target \nmachine having the same hardware configuration as the master machine. This would be no \nproblem if all the servers were bought at the same time from the same vendor. 
However, it \nwould be problematic if they were acquired over a period of time and therefore use different \ndevice drivers to communicate to the machine's various hardware components. \nUnfortunately, due to the lack of automated tools for some platforms, some devices still \nneed to be configured manually-for example, when modifying the default network traffic \nfiltering rules for a firewall appliance. Given the potential for human error, it's imperative that \nall manual installation procedures be well documented and the installation process be \nregularly checked to ensure that these human-error-prone manual steps are followed \nclosely. \nBefore embarking on any significant testing effort, the security-testing team should confirm \ntwo things: that all of the Web site and application components that are to be tested are \nunder some form of configuration management process (manual or automated) and that \nunder normal circumstances these configurations will not be changed while test execution \nis taking place. Common exceptions to this ideal scenario include fixing defects that actually \ninhibit further testing and plugging newly discovered holes that present a serious risk to the \nsecurity of the production system. \nIf the items to be tested are not under any form of configuration management process, then \nthe security-testing team should not only try to hasten the demise of this undesirable \nsituation, but they should also budget additional time to handle any delays or setbacks \ncaused by an unstable target. Also, where possible, the team should try to schedule the \ntesting effort in a way that minimizes the probability that a serious defect will make its way \ninto production, especially one that results from a change being made to part of the system \nthat had already been tested and wasn't retested. \nBrown et al. (1999), Dart (2000), Haug et al. (1999), Leon (2000), Lyon (2000), and White \n(2000) all provide a good starting point for those attempting to define and implement a new \nconfiguration management process. \nResponsibilities \nWho will be responsible for making sure all the key testing activities take place on \nschedule? This list of activities may also include tasks that are not directly part of the testing \neffort, but that the testing team depends upon being completed in a timely manner. For \ninstance, who is responsible for acquiring the office space that will be used to house the \nadditional testers called for by the test plan? Or, if hardware procurement is handled \ncentrally, who is responsible for purchasing and delivering the machines that will be needed \nto build the test lab? \nIdeally, an escalation process should also be mapped out, so that in the event that \nsomeone doesn't fulfill their obligations to support the testing team for whatever reason, the \nsituation gets escalated up the management chain until it is resolved (hopefully as quick \nand painless as possible). \n" }, { "page_number": 36, "text": " \n29 \nAnother decision that needs to be made is how to reference those responsible for these key \nactivities. Any of the reference methods in Table 2.4 would work. \n \nTable 2.4: Options for Referencing Key Personnel \nKEY PERSON \nREFERENCED BY \n... \nEXAMPLE \nCompany level \nThe xyz company is responsible for conducting a physical \nsecurity assessment of the mirror Web site hosted at the \nLondon facility. \nDepartment level \nNetwork support is responsible for installing and configuring \nthe test environment. 
\nJob/role title \nThe application database administrator (DBA) is responsible \nfor ensuring that the database schema and security settings \ncreated in the test environment are identical to those that will \nbe used in the production environment. \nIndividual \nJohnny Goodspeed is responsible for testing the security of \nall application communications. \nWhen listing these activities, the testing team will need to decide on how granular this list of \nthings to do should be. The more granular the tasks, the greater the accountability, but also \nthe greater the effort needed to draft the test plan and subsequently keep it up to date. The \ntest plan also runs the risk of needlessly duplicating the who information contained in the \nassociated test schedule (described later in this chapter in the section Schedule). \nStaffing and Training Needs \nIf outside experts are used to conduct penetration testing (covered in more detail in Chapter \n9), is it cost effective for the internal staff to first conduct their own security tests? If the \noutside experts are a scarce commodity-and thus correspondingly expensive or hard to \nschedule-then it may make sense for the less experienced internal staff to first run the \neasy-to-execute security assessment tests; costly experts should only be brought in after all \nthe obvious flaws have been fixed. In effect, the in-house staff would be used to run a set of \ncomparatively cheap entry-criteria tests (also sometimes referred to as smoke tests) that \nmust pass before more expensive, thorough testing is performed. \nOne consideration that will have a huge impact on the effectiveness of any internally staffed \nsecurity-testing effort is the choice of who will actually do the testing. The dilemma that \nmany organizations face is that their security-testing needs are sporadic. Often extensive \nsecurity-oriented testing is not needed for several months and then suddenly a team of \nseveral testers is needed for a few weeks. In such an environment, an organization is going \nto be hard pressed to justify hiring security gurus and equally challenged to retain them. An \nalternative to maintaining a permanent team of security testers is to appropriate employees \nfrom other areas such as network engineers, business users, Web masters, and \ndevelopers. Unfortunately, many of these candidates may not be familiar with generic \ntesting practices, let alone security-specific considerations. This could result in a longer-\nlasting testing effort and less dependable test results. \nFor organizations that maintain a permanent staff of functional testers, Q/A analysts, test \nengineers, or other similarly skilled employees, one possible solution is to train these \n" }, { "page_number": 37, "text": " \n30 \nindividuals in the basics of security testing and use them to form the core of a temporary \nsecurity-testing team. Such a team would, in all probability, still need to draw upon the skills \nof other employees such as the firewall administrators and DBAs in order to conduct the \nsecurity testing. But having such a team conduct many of the security tests that need to be \nperformed may be more cost effective than outsourcing the entire testing task to an outside \nconsulting firm. This would be especially beneficial for the set of tests that are expected to \nbe regularly rerun after the system goes into production. 
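The entry-criteria (smoke test) idea described earlier also lends itself to simple automation that a temporary in-house team can own. The following minimal sketch checks that a handful of services that have no business being exposed on a Web server are not listening; the host name and port list are hypothetical placeholders, and such probes should only be run against systems the team is authorized to test.

# A minimal sketch of an entry-criteria (smoke) check; the host and port list
# are hypothetical placeholders chosen for illustration only.
import socket

TARGET_HOST = "test.wiley.example"             # hypothetical test server
PORTS_THAT_SHOULD_BE_CLOSED = [21, 23, 3389]   # FTP, Telnet, and RDP, for example

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as probe:
        probe.settimeout(timeout)
        return probe.connect_ex((host, port)) == 0

if __name__ == "__main__":
    findings = [port for port in PORTS_THAT_SHOULD_BE_CLOSED if port_is_open(TARGET_HOST, port)]
    if findings:
        print(f"Smoke test failed: unexpected services listening on ports {findings}")
    else:
        print("Smoke test passed: no unnecessary services were found listening")

Only after such obvious flaws have been cleaned up would the more expensive outside experts be engaged.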
The degree to which some (or all) of the security-testing effort can be handled in house depends to a large extent on the steepness of the learning curve that the organization's employees will face. One way to reduce this learning curve is to make use of the ever-growing supply of security-testing tools. The decision on how much of the testing should be done in house and what tools should be acquired will therefore have a major impact on how the security-testing effort is structured. These topics are expanded on further in Chapter 9.

Schedule
Unless the testing effort is trivial in size, the actual details of the test schedule are probably best documented in a separate deliverable and generated with the assistance of a project-scheduling tool such as one of those listed in Table 2.5.

Table 2.5: Sample Project-Scheduling Tools
NAME  ASSOCIATED WEB SITE
Easy Schedule Maker  www.patrena.com
FastTrack Schedule  www.aecsoft.com
GigaPlan.net  www.gigaplan.com
ManagePro  www.performancesolutionstech.com
Microsoft Project  www.microsoft.com
Niku  www.niku.com
OpenAir  www.openair.com
PlanView  www.planview.com
Proj-Net  www.rationalconcepts.com
ProjectKickStart  www.projectkickstart.com
Project Dashboard  www.itgroupusa.com
Project Office  www.pacificedge.com
Time Disciple  www.timedisciple.com
Various  www.primavera.com
Xcolla  www.axista.com

With the scheduling details documented elsewhere, this section of the test plan can be used to highlight significant scheduling dates, such as the planned start and end of the testing effort and the dates when any intermediate milestones are expected to be reached.

Since many testing projects are themselves subprojects of larger projects, a factor to consider when evaluating or implementing a scheduling tool is how easy it is to roll up the details of several subprojects into a larger master project schedule, thereby allowing for easier coordination of tasks or resources that span multiple subprojects.

Project Closure
Although it might be desirable from a security perspective to keep a security-testing project running indefinitely, financial reality may mean that such a project ultimately must be brought to closure (if only to be superseded by a replacement project).

When winding down a security-testing project, great care must be exercised to ensure that confidential information generated by the testing effort (such as security assessment reports or a defect-tracking database that contains a list of all the defects that were not fixed because of monetary pressures) does not fall into the wrong hands. This is especially relevant if, going forward, nobody is going to be directly accountable for protecting this information, or if some of this information was generated by (or shared with) third parties.

The test plan should therefore outline how the project should be decommissioned, itemizing important tasks such as resetting (or voiding) any user accounts that were set up specifically for the testing effort, making sure no assessment tools are left installed on a production machine, and ensuring that any paper deliverables are safely destroyed.

Planning Risks and Contingencies
A planning risk can be any event that adversely affects the planned testing effort (its schedule, completeness, quality, and so on).
Examples would include the late delivery of application software, the lead security tester quitting to take a better-paid job (leaving a huge gap in the testing team's knowledge base), or the planned test environment not being built due to unexpected infrastructure shortages (budget cuts).

The primary purpose of identifying in the test plan the most significant planning risks is to enable contingency plans to be proactively developed ahead of time and ready for implementation in the event that the potential risk becomes a reality. Table 2.6 lists some example contingency plans.

Table 2.6: Example Contingency Plans

PLANNING RISK: Midway through the testing effort, Microsoft releases a new service pack for the operating system installed on a large number of the servers used by the Web site.
CONTINGENCY PLANS:
- Don't install the service pack. (Keep the scope the same.)
- Install the service pack and reexecute any of the test cases whose results have now been invalidated. (More time or resources are needed.)
- Install the service pack, but don't change the test plan. (The quality of the testing is reduced.)
- Redo some of the highly critical tests that have been invalidated and drop some of the lower, as-yet-unexecuted tests. (The quality of the testing is reduced.)

PLANNING RISK: The production environment upgrades its firewall to a more expensive/higher-capacity version.
CONTINGENCY PLANS:
- Do nothing, as the test environment becomes less like the production environment. (The quality of the testing is reduced.)
- Buy a new firewall for the test environment. (Increase resources.)
- Reduce firewall testing in the test environment and increase testing in the production environment. (Change the scope of the testing.)

PLANNING RISK: The entire testing team wins the state lottery.
CONTINGENCY PLAN: Make sure you are in the syndicate.

For any given risk, typically numerous contingencies could be considered. However, in most cases, the contingencies can be categorized as either extending the time required for testing, reducing the scope of the testing (for example, reducing the number of test items that will be tested), adding additional resources to the testing effort, or reducing the quality of the testing (for example, running fewer or less well designed tests), thereby increasing the risk of the system failing. These contingency categories can be illustrated by the quality trade-off triangle depicted in Figure 2.3. Reducing one side of the triangle without increasing at least one of the other sides reduces the quality of the testing (as represented by the area inside the triangle).

Figure 2.3: Quality trade-off triangle.

None of these options may sound like a good idea to senior management and they may decide that all these contingencies are unacceptable. Unfortunately, if management does not make a proactive contingency decision, the decisions (and their consequences) do not go away. Instead, they are implicitly passed down to the individual members of the testing team. This results in unplanned consequences such as a tester unilaterally deciding to skip an entire series of tests, skimping on the details of an incident report (requiring the incident investigator to spend more time trying to recreate the problem), or working extra unpaid hours (while at the same time looking for another job).
None of these consequences are likely to be more desirable than the options that senior management previously decided were unacceptable.

Issues
Many people may hold the view that each and every issue is merely a risk that needs to be mitigated by developing one or more contingencies, resulting in any issues that the testing team faces being documented in the "planning risks" section of the test plan.

Alternatively, issues that have highly undesirable or impractical contingencies (such as a huge increase in the cost of the testing effort) may be siphoned off from the planning risks section and highlighted in their own section, allowing management to focus on these unresolved issues.

Assumptions
In a perfect world, a test plan would not contain any assumptions, because any assumption that the testing team had to make would be investigated to determine its validity. Once thoroughly researched, the assumption would be deleted or transferred to another section (such as the Planning Risks section).

Unfortunately, many assumptions may not be possible to prove or disprove because of the time needed to investigate them, or because the people who could confirm the assumption are unwilling to do so. For example, the testing team may need to assume that the information provided by bug- and incident-tracking center Web sites (such as those listed in Table 4.2) is accurate, because the effort needed to reconfirm this information would take too long and consume too many resources.

Constraints and Dependencies
The testing team may find it useful to list all the major constraints that they are bound by. Obvious constraints include the project's budget or the deadline for its completion. Less obvious constraints include a corporate "no new hires" mandate (which means that if the testing is to be done in house, it must be performed using the existing staff), or a corporate procurement process that requires the testing team to purchase any hardware or software that costs more than $1,000 through a central purchasing unit (a unit that typically runs six to eight weeks behind).

Acronyms and Definitions
This section of the test plan can be used to provide a glossary of the terms and acronyms that are referenced by the test plan but are not normally found in the everyday language of the plan's anticipated readership.

References
It's generally considered a good practice to include a summary list of all the other documents that are referenced, explicitly or implicitly, by the test plan (such as the project schedule or requirements documentation). Placing the list in its own section towards the end of the test plan will improve its readability and hence improve the chances that it is actually used.

Approvals
A test plan should identify two groups of approvers. The first group will be made up of those individuals who will decide whether or not the proposed test plan is acceptable and meets the security-testing needs of the organization, whereas the second group (which may be composed of the same individuals as the first group) will decide whether or not the deliverables specified in the test plan and subsequently produced and delivered by the testing team (for example, the test summary report) are acceptable.

Being asked to approve something is not the same as being kept informed about it.
There \nmay be additional interested parties (stakeholders) who need to be kept informed about the \nongoing and ultimate status of the testing project, but do not really have the organizational \npower to approve or disapprove any of the deliverables listed in the test plan. For example, \nthe configuration management librarian may need to know what the testing team needs in \nterms of configuration management support, but it is unlikely to be able to veto a particular \nset of tests. Rather than listing these individuals as approvers, it may be more accurate to \nidentify them as stakeholders and indicate that their acceptance of the test plan merely \nindicates that they believe they have been adequately informed of the testing project's \nplans. \n \nMaster Test Plan (MTP) \nFor small simple testing projects, a single, short test plan may be all that is needed to \nsufficiently convey the intended activities of the testing team to other interested parties. \nHowever, for larger projects, where the work may be divided across several teams working \nat separate locations for different managers and at different points in the system's \ndevelopment, it may be easier to create several focused test plans rather than one large all-\nencompassing plan. For example, one plan may focus on testing the physical security of \nthe computer facilities that the Web site will be housed in, another may describe the \npenetration testing that will be performed by an outsourced security-testing firm, and a third \nmay concentrate on the unit-level tests that the Web application development team is \nexpected to do. \nIf multiple test plans will be used, the activities within each plan need to be coordinated. For \ninstance, it does not make sense for all of the test plans to schedule the creation of a \ncommon test environment; instead, the first plan that will need this capability should include \nthis information. The higher the number of test plans, the easier it is to manage each \nindividual plan, but the harder it becomes to coordinate all of these distributed activities, \nespecially if the organization's culture does not lend itself to nonhierarchical lines of \norganizational communication. \nOne solution to the problem of multiple test plan coordination that many practitioners \nchoose to utilize is the master test plan (MTP). The MTP is a test plan that provides a high-\nlevel summary of all the other test plans, thereby coordinating and documenting how the \nentire security-testing effort has been divided up into smaller, more manageable units of \nwork (as depicted in Figure 2.4). \n \n \n" }, { "page_number": 42, "text": " \n35 \n \nFigure 2.4: MTP. \nA by-product of defining multiple test plans is that such an approach may facilitate several \nsecurity-testing teams working in parallel, which provides a significant advantage when \nworking in Web time. Additionally, having several documented and well-scoped groups of \ntests makes outsourcing some or all of the testing effort much more controllable. \nAs is the case with each individual test plan, it is often the process of developing an MTP \nrather than the actual end product that is of greater help to the testing team. Creating the \nMTP should facilitate discussions on what testing objectives should be assigned to each \nindividual test plan as part of an overall scheme, rather than relying on the recognizance of \nhead-down individuals working in isolation on separate test plans. Craig et al. 
(2002) and \nGerrard (2002) provide additional information on the concept of a master test plan. \n \nSummary \nWhether a person chooses to use a test plan format based on an industry standard, an \ninternal template, or a unique layout customized for this specific project, the test plan and \nits associated documents should be reviewed to make sure that it adequately addresses \nthe test-planning considerations summarized in Table 2.7. \n \n \nTable 2.7: Test-Planning Consideration Checklist \nYES \nNO \nDESCRIPTION \n□ \n□ \nHave the system's security requirements been clarified and \nunambiguously documented? \n" }, { "page_number": 43, "text": " \n36 \nTable 2.7: Test-Planning Consideration Checklist \nYES \nNO \nDESCRIPTION \n□ \n□ \nHas the goal (and therefore scope) of the testing effort been clearly \ndefined? \n□ \n□ \nHave all the items (and their versions) that need to be tested been \nidentified? \n□ \n□ \nHave any significant items that will not be tested been listed? \n□ \n□ \nHas a change control process for the project been defined and have \nall the individuals who will approve changes to the scope of the \ntesting been identified? \n□ \n□ \nHave all the features that need to be tested been identified? \n□ \n□ \nHave any significant features that will not be tested been listed? \n□ \n□ \nHas the testing approach (strategy) been documented? \n□ \n□ \nHave the criteria (if any) by which the system will be deemed to have \npassed security testing been documented? \n□ \n□ \nHave the criteria (if any) for halting (and resuming) the testing effort \nbeen documented? \n□ \n□ \nHave the deliverables that the testing effort is expected to produce \nbeen documented? \n□ \n□ \nHave all the environmental needs of the testing effort been \nresearched and documented? \n□ \n□ \nHas a configuration management strategy for the items that are to be \ntested been documented? \n□ \n□ \nHas a configuration management strategy for the test scripts and test \ndata (testware) been documented? \n□ \n \nHave responsibilities for all the testing activities been assigned? \n□ \n□ \nHave responsibilities for all the activities that the testing effort is \ndependent upon been assigned? \n□ \n□ \nHave staffing needs been identified and resourced? \n□ \n□ \nHave any training needs been identified and resourced? \n□ \n□ \nHas a test schedule been created? \n□ \n□ \nHave the steps necessary to bring the project to a graceful closure \nbeen considered? \n□ \n□ \nHave the most significant planning risks been identified? \n□ \n□ \nHave contingency plans for the most significant planning risks been \ndevised and approved? \n□ \n□ \nHave all issues, assumptions, constraints, and dependencies been \n" }, { "page_number": 44, "text": " \n37 \nTable 2.7: Test-Planning Consideration Checklist \nYES \nNO \nDESCRIPTION \ndocumented? \n□ \n□ \nHave any unusual acronyms and terms been defined? \n□ \n□ \nHave any supporting documents been identified and cross-\nreferenced? \n□ \n□ \nHave those individuals responsible for approving the test plans been \nidentified? \n□ \n□ \nHave those individuals responsible for accepting the results of the \ntesting effort been identified? \n□ \n□ \nHave those individuals who need to be kept informed of the testing \neffort's plans been identified? 
\n \n" }, { "page_number": 45, "text": " \n38 \nPart III: Test Design \nChapter List \nChapter 3: Network Security \nChapter 4: System Software Security \nChapter 5: Client-Side Application Security \nChapter 6: Server-Side Application Security \nChapter 7: Sneak Attacks: Guarding Against the Less-Thought-of Security Threats \nChapter 8: Intruder Confusion, Detection, and Response \n" }, { "page_number": 46, "text": " \n39 \nChapter 3: Network Security \nOverview \nWhen asked to assess the security of a Web site, the first question that needs to be \nanswered is, \"What is the scope of the assessment?\" Often the answer is not as obvious as \nit would seem. Should the assessment include just the servers that are dedicated to hosting \nthe Web site? Or should the assessment be expanded to include other machines that \nreside on the organization's network? What about the routers that reside upstream at the \nWeb site's Internet service provider (ISP), or even the machines running legacy applications \nthat interface to one of the Web applications running on the Web site? Therefore, one of the \nfirst tasks the testing team should accomplish when starting a security assessment is to \ndefine the scope of the testing effort and get approval for the scope that they have \nproposed. \nThis chapter discusses how a security assessment effort can be scoped by referencing a \nset of network segments. The network devices attached to these segments collectively form \nthe network under test. Adding the physical locations used to house these devices, the \nbusiness process aimed at ensuring their security, and the system software and \napplications that run on any of these devices may then form the collection of test items that \nwill ultimately comprise the system that will be tested by the security-testing effort. Figure \n3.1 graphically depicts this relationship. \n \nFigure 3.1: Scope of security testing. \nThe subsequent sections of this chapter explains an approach that may be used by the \ntesting team to ensure that the network defined by the scoping effort has been designed \n" }, { "page_number": 47, "text": " \n40 \nand implemented in a manner that minimizes the probability of a security vulnerability being \nexploited by an attacker (summarized in Figure 3.5). \nMany of the networking terms used in this chapter may not be readily familiar to some \nreaders of this book. Appendix A provides a basic explanation of what each of the network \ndevices referenced in this chapter does and gives an overview of the networking protocols \nused by these components to communicate to each other across the network. Readers who \nare unfamiliar with what a firewall does or what the Transmission Control Protocol/Internet \nProtocol (TCP/IP) is should review this appendix first before continuing with the rest of this \nchapter. For those readers looking for a more detailed explanation of networking concepts, \nthe following books provide a good introduction to this topic: Brooks 2001, Nguyen 2000, \nand Skoudis 2001. \n \nScoping Approach \nA security assessment can be scoped in several ways. The assessment can focus entirely \non the software components that compromise a single Web application or restrict itself to \nonly testing the devices that are dedicated to supporting the Web site. 
The problem with \nthese and other similar approaches is that they ignore the fact that no matter how secure a \nsingle component is (software or physical device), if the component's neighbor is vulnerable \nto attack and the neighbor is able to communicate to the allegedly secure component \nunfettered, then each component is only as secure as the most insecure member of the \ngroup. To use an analogy, suppose two parents are hoping to spare the youngest of their \nthree children from a nasty stomach bug that's currently going around school. The parents \nwould be deluding themselves if they thought that their youngest child would be protected \nfrom this threat if he were kept home from school and his older sisters still went to school. If \nthe elder siblings were eventually infected, there would be little to stop them from passing it \non to their younger brother. \nThe approach this book uses to define the scope of a security assessment is based on \nidentifying an appropriate set of network segments. Because the term network segment \nmay be interpreted differently by readers with different backgrounds, for the purposes of \ndefining the testing scope, this book defines a network segment as a collection of \nnetworked components that have the capability to freely communicate with each other-such \nas several servers that are connected to a single network hub. \nMany organizations choose to use network components such as firewalls, gateways, proxy \nservers, and routers to restrict network communications. For the purposes of defining a \ntesting scope, these components can be considered to represent candidate boundaries for \neach of the network segments. They can be considered this way because these devices \ngive an organization the opportunity to partition a large network into smaller segments that \ncan be insulated from one another (as depicted in Figure 3.2), potentially isolating (or \ndelaying) a successful intrusion. \n \n" }, { "page_number": 48, "text": " \n41 \n \nFigure 3.2: Example network segments. \nThe scope of a security assessment can therefore be documented by referencing a \ncollection of one or more network segments. For some small Web sites, the scope could be \nas simple as a single multipurpose server and a companion firewall appliance. For larger \norganizations, the scope could encompass dozens (or even hundreds) of servers and \nappliances scattered across multiple network segments in several different physical \nlocations. \nDepending upon how the network engineers and local area network (LAN) administrators \nhave (or propose to) physically constructed the network, these network segments may be \neasily referenced (for example, stating that the scope of the testing effort will be restricted \nto the network segments ebiz.tampa, crm.tampa, and dmz.tampa) or each of the physical \ncomponents that compromise the network segments may have to be individually itemized. \nAs previously discussed, including only a portion of a network segment within the scope \nshould be avoided because establishing the security of only a portion of a segment is of \nlittle value unless the remaining portion has already been (or will be) tested by another \ntesting effort. 
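Where the segment names are already known, the testing team may find it helpful to capture the agreed scope in a small, machine-readable form that the test plan can reference and that later verification scripts can reuse. The following Python sketch is purely illustrative: the segment names reuse the hypothetical ebiz.tampa, crm.tampa, and dmz.tampa examples mentioned above, and the address ranges and device lists are placeholders that would be replaced with values from the organization's own records.

    # Hypothetical, machine-readable record of the agreed security-testing scope.
    # Segment names echo the examples above; address ranges and devices are placeholders.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class NetworkSegment:
        name: str                    # e.g., "dmz.tampa"
        address_range: str           # CIDR block the segment is expected to use
        in_scope: bool               # record exclusions as deliberately as inclusions
        devices: List[str] = field(default_factory=list)   # hostnames or inventory IDs
        notes: str = ""              # e.g., who tests an excluded segment, and when

    SCOPE = [
        NetworkSegment("dmz.tampa", "192.0.2.0/26", True, ["web1", "web2", "ftp1"]),
        NetworkSegment("ebiz.tampa", "192.0.2.64/26", True, ["weblogic1", "sybase1"]),
        NetworkSegment("crm.tampa", "192.0.2.128/26", True),
        NetworkSegment("corp.tampa", "198.51.100.0/24", False,
                       notes="Covered by a separate internal security assessment"),
    ]

    if __name__ == "__main__":
        for segment in SCOPE:
            status = "IN SCOPE" if segment.in_scope else "OUT OF SCOPE"
            print(f"{segment.name:12} {segment.address_range:18} {status:12} {segment.notes}")

Listing the out-of-scope segments alongside the in-scope ones reinforces the point made above: a segment should only be excluded deliberately, with a note recording which other testing effort (if any) is responsible for it.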
\nOnce the network segments that comprise the scope of the testing effort have been \ndefined, the scope can be further annotated to specify the hardware devices that make up \nthese network segments, the physical facilities that are used to house this hardware, and \nany system software or application software that resides on this hardware, and finally any \nbusiness processes (such as an intruder response process) that are intended to protect the \nsecurity of the devices. \n \nScoping Examples \nThe specific approach used to identify the scope of the testing effort is very dependent on \nthe size of the task and the culture of the organization, as the following scenarios illustrate. \nHotel Chain \nA small hotel chain has decided to place its Web site (the originally stated target of the \ntesting effort) on a handful of servers, which they own and administer, at an off-site facility \nowned and managed by their ISP. On occasion, the Web application running at this site \nneeds to upload new reservations and download revised pricing structures from the \norganization's legacy reservation processing system that resides on the corporate network. \nThe Web application and legacy reservation system communicates via the Internet. Access \nto the corporate network is via the same firewall-protected Internet connection used by the \n" }, { "page_number": 49, "text": " \n42 \nhotel chain for several other Internet services (such as employee emails and Internet \nbrowsing). Figure 3.3 illustrates this configuration. \n \nFigure 3.3: Hotel chain configuration. \nCommunication between the Web site and corporate network is via a firewall. Therefore, it \nwould not be unreasonable to restrict the scope of the Web site security-testing effort to that \nof the two network segments that the hotel chain administers at the ISP facility (a \ndemilitarized zone [DMZ] and back-end Web application). On the other hand, had the \ncommunication to the legacy system been via an unfiltered direct network connection to the \ncorporate network, it would have been hard to justify not including the corporate network in \nthe scope (unless it was covered by another testing project). A security breach at the \ncorporate network could easily provide a back-door method of entry to the Web site, \ncircumventing any front-door precautions that may have been implemented between the \nWeb application and the Internet. \nFurniture Manufacturer \nA medium-sized furniture manufacturer has decided to develop its own Web application in \nhouse using contracted resources. Its entire Web site, however, will be hosted at an ISP's \nfacility that offers low-cost Web hosting-so low cost that parts of the Web application \n(specifically the database) will be installed on a server shared with several other clients of \nthe ISP. Assume that the ISP is unwilling (or perhaps unable) to provide the furniture \nmanufacturer with the schematic of its network infrastructure and that it would not \nappreciate any of its clients conducting their own, unsolicited security assessments. The \nfurniture manufacturer should restrict its security-testing activities to testing the in-house-\ndeveloped Web application using its own test lab. The risk of the production version being \nattacked physically or via a system software vulnerability would be mitigated by requiring \nthe ISP to produce evidence that it has already tested its infrastructure to ensure that it is \nwell defended. 
Ideally, some form of guarantee or insurance policy should back up this \nassurance. \nAccounting Firm \nA small accounting firm, whose senior partner doubles as the firm's system administrator, \nhas hosted its Web site on the partnership's one-and-only file and print server. (This is a \nquestionable decision that the security assessment process should highlight.) This server is \naccessible indirectly from the Internet via a cheap firewall appliance and directly from any \none of the dozen PCs used by the firm's employees. Figure 3.4 illustrates this configuration. \n \n" }, { "page_number": 50, "text": " \n43 \n \nFigure 3.4: Accounting firm configuration. \nBecause of the lack of any interior firewall or other devices to prohibit accessing the Web \nsite from a desktop machine, the Web site is only as safe as the least secure PC. (Such a \nPC would be one that, unbeknownst to the rest of the firm, has a remote-access software \npackage installed upon it, so the PC's owner can transfer files back and forth from his or her \nhome over the weekend.) Such a situation would merit including all the PCs in the scope of \nthe security assessment or suspending the security testing until an alternate network \nconfiguration is devised. \nSearch Engine \nA large Internet search engine Web site uses several identical clusters of servers scattered \nacross multiple geographic locations in order to provide its visitors with comprehensive and \nfast search results. The Web site's LAN administrator is able to provide the testing team \nwith a list of all the network segments used by the distributed Web site, and the devices \nconnect to these different segments. \nBecause of the size of this security assessment, the testing team may decide to break the \nassessment up into two projects. The first project would concentrate on testing a single \ncluster of servers for vulnerabilities. Once any vulnerabilities identified by the first project \nhave been fixed, the second phase focuses on ensuring that this previously assessed \nconfiguration has now been implemented identically on all the other clusters. \nThe Test Lab \nAn area that is often overlooked when deciding upon what should be covered by a security-\ntesting effort is the devices and network segment(s) used by the testing team itself to test a \nnonproduction version of the system. Typically, these environments are referred to as test \nlabs. If they are connected to the production environment or to the outside world (for \ninstance, via the Internet), they might pose a potential security threat to the organization \nunless they are included in the security assessment. \nTest labs are notorious for having weak security. They are therefore often the target of \nattackers trying to gain access to another network segment using the testing lab as a \nstepping stone to their ultimate goal. The following are just two of the scenarios that have \ncontributed to this reputation. Test lab machines are often reconfigured or reinstalled so \nfrequently that normal security policies and access controls are often disregarded for the \nsake of convenience. For example, very simple or even blank administrator passwords \nmight be used because the machines are constantly being reformatted, or protective \nsoftware such as antivirus programs are not installed because they generate too many false \nalarms during functional testing and potentially skew the test results obtained during \nperformance testing. 
Secondly, minimum access controls are used in order to make \nautomated test scripts more robust and less likely to fail midway through a test because the \ntesting tool did not have sufficient privileges. \n" }, { "page_number": 51, "text": " \n44 \nThe scope of the security assessment should therefore always explicitly state whether or \nnot a system's associated test lab is included in the testing effort and, if not, why it has been \nexcluded. All too often, these test labs' only line of defense is the assumption that no \nattacker knows of their existence; solely relying on a security-by-obscurity strategy is a \ndangerous thing to do. \nSuspension Criteria \nIf, during the scoping of a security assessment, clearly defining a testing scope proves to be \ncompletely impossible, then it might be wise to temporarily suspend the testing effort until \nthis issue can be resolved. Such a situation could have come about because the \ninformation needed to make an informed decision about the network topology could not be \nobtained. This could also occur because the topology that has actually been implemented \nappears to allow such liberal access between multiple network segments that the size of \nsecurity assessment needed to ensure the security of the network would become too vast, \nor if it is restricted to a single segment, it could not ensure the security of the segment \nbecause of the uncertainly associated with other adjacent segments. \nAlternatively, a huge disclaimer could be added to the security assessment report stating \nthat someone else has presumably already thoroughly tested (or will soon test) these \nadjacent segments using the same stringent security policies that this testing project uses. \nThis so-called solution ultimately presents a lot of opportunity for mis-communication and \npotential finger-pointing at a later date, but may also provide the impetus for convincing \nmanagement to devote more time and energy toward remedying the situation. \n \nDevice Inventory \nOnce the network segments that will form the subject of the security assessment have been \nidentified, the next step is to identify the devices that are connected to these segments. A \ndevice inventory is a collection of network devices, together with some pertinent information \nabout each device, that are recorded in a document. \nA device can be referenced in a number of ways, such as its physical location or by any \none of several different network protocol addresses (such as its hostname, IP address, or \nEthernet media access control [MAC] address). Therefore, the inventory should be \ncomprehensive enough to record these different means of identification. The security-\ntesting team will need this information later in order to verify that the network has been \nconstructed as designed. In addition, if a test lab is to be built, much of this information will \nbe needed in order to make the test lab as realistic as possible. Table 3.1 depicts the \npertinent information that the testing team should consider acquiring for each of the devices \nin the device inventory. Appendix A provides background information on the differences \nbetween these different network protocol addresses. 
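Where the team expects to cross-check the inventory with scripts later in the assessment, the same information can also be captured in a machine-readable form. The following Python sketch is illustrative only: the field names mirror the columns of Table 3.1, and the example identifiers, hostnames, and addresses are placeholders rather than entries from any real network.

    # Minimal sketch of a device-inventory record mirroring the columns of Table 3.1.
    # All example values are placeholders.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DeviceRecord:
        device_id: str                      # OEM serial or internal tracking number
        description: str                    # e.g., "Web server #1"
        physical_location: str              # e.g., "Main server room"
        network_accessibility: str = ""     # e.g., public, protected, or private
        hostnames: List[str] = field(default_factory=list)
        ip_addresses: List[str] = field(default_factory=list)
        mac_addresses: List[str] = field(default_factory=list)

    inventory = [
        DeviceRecord("0004", "Web server #1", "Main server room", "public",
                     hostnames=["web1.dmz.miami"],
                     ip_addresses=["192.0.2.131"],
                     mac_addresses=["aa:bb:cc:dd:bb:aa"]),
        DeviceRecord("0008", "DMZ firewall", "Main server room", "protected",
                     ip_addresses=["192.0.2.135"]),
    ]

Keeping the inventory in a form such as this makes it easier to reuse the same records when verifying hostnames, IP addresses, and MAC addresses later in the assessment.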
\n" }, { "page_number": 52, "text": " \n45 \n \nTable 3.1: Example Layout Structure for a Device Inventory \nDEVICE ID \nDEVICE \nDESCRIPTIO\nN \nPHYSICAL LOCATION \nNETWORK \nACCESSIBILITY \nHOSTNAME(S) \nIP ADDRESS(ES) \nMAC ADDRESS(ES) \n1 \nISP router \nISP facility \n \n \n123.456.789.123 \n \n2 \nInternal router \nTelecom room \n \n \n123.456.789.124 \n \n3 \nPerimeter \nfirewall \nTelecom room \n \n \n123.456.789.130 \n \n4 \nWeb server \n#1 \nMain server room \n \nweb1.dmz.miami \n123.456.789.131 \naa.bb.cc.dd.bb.aa \n5 \nWeb server \n#2 \nMain server room \n \nweb2.dmz.miami \n123.456.789.132 \naa.bb.cc.dd.bb.bb \n6 \nFTP server \nMain server room \n \nftp1.dmz.miami \n123.456.789.133 \naa.bb.cc.dd.bb.cc \n7 \nLoad \nbalancer \nMain server room \n \n \n123.456.789.134 \n \n8 \nDMZ firewall \nMain server room \n \n \n123.456.789.135 \n \n9 \nInternal \nswitch \nMain server room \n \n \n123.456.789.136 \n \n10 \nApplication \nserver \nMain server room \n \nweblogic1.main.miami \n123.456.789.140 \naa.bb.cc.dd.cc.aa \n11 \nDatabase \nserver \nMain server room \n \nsybase1.main.miami \n123.456.789.141 \naa.bb.cc.dd.cc.bb \n12 \nComputer \nroom PC # 1 \nMain server room \n \njoshua.main.miami \n123.456.789.150 \naa.bb.cc.dd.aa.aa \n13 \nComputer \nroom PC #2 \nMain server room \n \ndavid.main.miami \n123.456.789.151 \naa.bb.cc.dd.aa.bb \n14 \nComputer \nroom PC #3 \nMain server room \n \nmatthew.main.miami \n123.456.789.152 \naa.bb.cc.dd.aa.cc \n" }, { "page_number": 53, "text": " \n46 \nTable 3.1: Example Layout Structure for a Device Inventory \nDEVICE ID \nDEVICE \nDESCRIPTIO\nN \nPHYSICAL LOCATION \nNETWORK \nACCESSIBILITY \nHOSTNAME(S) \nIP ADDRESS(ES) \nMAC ADDRESS(ES) \nNote that instead of assigning a new inventory ID to each device, the testing team may find it more convenient to use the original equipment manufacturer (OEM) \nserial number or the inventory tracking number (sometimes represented as a barcode) internally assigned by the owning organization. \n \n" }, { "page_number": 54, "text": " \n47 \n \nAn organization's LAN administrator should be able to provide the testing team with all the \ninformation needed to construct a device inventory by either automatically running an \nalready-installed network auditing tool (such as those listed in Table 3.6) or by referencing \nthe network specification used to construct the network (or if it is not already built, the \nnetwork design that will be used to build the network). If such documentation is not \nimmediately available or if it can't be recreated in a timely manner, this may be a symptom \nof an unmanaged network and/or LAN administrators who are too busy administrating the \nnetwork to keep track of its structure. Either way, such an undesirable scenario is likely to \nbe indicative of a system prone to security vulnerabilities due to an undocumented structure \nand overtaxed administrators. \nIf the information needed to build a device inventory cannot be provided to the testing team, \nthe testing team must decide whether the testing effort should be suspended until the \nsituation can be remedied or attempt to acquire the raw information needed to construct the \ndevice inventory themselves (possible using one or more of the techniques described later \nin the verifying device inventory section of this chapter). Table 3.2 provides a checklist for \nbuilding a device inventory. 
\n \nTable 3.2: Network Device Inventory Checklist \nYES \nNO \nDESCRIPTION \n□ \n□ \nHas a list been acquired of all the network devices (servers, routers, \nswitches, hubs, and so on) attached to any of the network segments \nthat form the scope of the security assessment? \n□ \n□ \nHas the physical location of each network device been identified? \n□ \n□ \nHave the network addresses used to reference each network device \nbeen identified? \n \nWIRELESS SEGMENTS \nWhen documenting a network that utilizes wireless communications (such as Bluetooth or \nIEEE 802.11), the desired effective range of the communication should also be recorded \nand the method of encryption (if any) that will be used. \nAlthough a wireless standard may stipulate that the broadcasting device only have a short \nrange, in practice this range may be significantly larger, giving potential intruders the \nopportunity to attach their wireless-enabled laptops to a network by parking their cars \nacross the road from the targeted organizations. \n \nNetwork Topology \nFor simplicity, the LAN administrator may have connected (or if the network is yet to be \nbuilt, may be intending to connect) all the devices that comprise the network under test to a \nsingle network hub, switch, or router, enabling any device to directly communicate to any \nother device on the network. A more compartmentalized approach would be to break the \nnetwork up into several separate network segments. Such an approach should, if \nconfigured properly, improve security (and possibly network performance) by keeping \nnetwork traffic localized at the cost of increased network complexity. If a compartmentalized \napproach is adopted, the network's intended topological configuration should be \n" }, { "page_number": 55, "text": " \n48 \ndocumented and subsequently verified to ensure that the network has actually been \nconfigured as desired. That would be advisable because an improper implementation may \nprovide not only suboptimal performance, but it also may give attackers more options for \ncircumventing network security precautions. \nFor a small network, this information can be displayed in a neat, concise diagram, which is \nsimilar to the ones depicted in Figures 3.3 and 3.4. This illustration might look nice on the \nLAN administrator's office wall, but unfortunately it would also pose a security leak. \nFor larger networks, a two-dimensional matrix may prove to be a more appropriate method \nof documenting links. Either way, this information should be kept under lock and key and \nonly distributed to authorized personnel as an up-to-date network map would save an \nintruder an awful lot of time and effort. \nDevice Accessibility \nBy restricting the number of devices that can be seen (directly communicated to) by a \nmachine located outside of the organization network, the network offers an external intruder \nfewer potential targets. Anyone looking for security vulnerabilities would be thwarted; \nhence, the network as a whole would be made more secure. The same is true when \nconsidering the visibility of a device to other internal networks: The fewer devices that can \nbe accessed from other internal networks, the fewer opportunities an internal attacker has \nto compromise the network. \nDevice accessibility (or visibility) is an additional attribute that can be added to the device \ninventory. 
This attribute can be documented in detail, explaining under what specific \ncircumstances a device may be accessed (for example, the database server should only be \naccessible to the application server and the network's management and backup devices) or \nmay be defined in more general terms. For example, each device can be characterized in \nthree ways: as (1) a public device that is visible to the outside world (for instance, the \nInternet), (2) a protected device that can be seen from other internal networks but not \nexternally, or (3) a private device that can only be accessed by devices on the same \nnetwork segment. \nOnce the network designers have specified under what circumstances a device may be \naccessed by another device (and subsequently documented as a network security policy), \nnetwork traffic-filtering devices such as firewalls and routers can be added to the network \ndesign to restrict any undesired communication. Barman (2001), Peltier (2001), and Wood \n(2001) provide additional information on the process of defining security policies. The lack \nof a documented network security policy (and a process for updating this document) is often \nan indicator that different policies are likely to have been implemented on different filtering \ndevices and can cause a potential process issue when the existing staff leaves. \n \nEXAMPLE NETWORK SECURITY POLICIES \nNetwork security polices can be as straightforward as \"only permit access to IP address \n123.456.789.123 via port 80\" or as smart as \"don't permit any incoming network traffic that \nhas a source IP address that matches the IP address of an internal machine.\" Such a \nscenario should not occur in the legitimate world, but an external attacker might alter \n(spoof) his or her originating IP address in an effort to fool an internal machine into thinking \nthat it was communicating to another friendly internal machine. \n \nBLOCK AND THEN OPEN VERSUS OPEN AND THEN BLOCK \n" }, { "page_number": 56, "text": " \n49 \nSome LAN administrators first deploy a filtering device with all accesses permitted and then \nselectively block/filter potentially harmful ones. A more secure strategy is to block all \naccesses and then only open up the required ones, as an implementation error using the \nsecond approach is likely to eventually be spotted by a legitimate user being denied access \nto a resource. However, an implementation error in the former approach may go undetected \nuntil an intruder has successfully breached the filter device, been detected, and \nsubsequently been traced back to his or her entry point, which is a much more dire scenario \nand one that should therefore be checked when inspecting the network filtering rules used \nby a filtering device. \nDocumenting the conditions (or rules) under which a device may be accessed in an \norganization's network security policy is one thing; however, implementing these rules is \nmore difficult because each network security policy must be converted into an access-\ncontrol rule that can be programmed into the appropriate network-filtering device. Zwicky, et \nal. (2000) provide information on how to implement network-filtering rules using a firewall. \nTable 3.3 summarizes the network topology decisions that should ideally be documented \nbefore a network design (proposed or implemented) can be validated effectively. 
\n \nTable 3.3: Network Topology Checklist \nYES \nNO \nDESCRIPTION \n□ \n□ \nHas a diagram (or alternate documentation) depicting the network's \ntopology been acquired? \n□ \n□ \nHave the effective ranges of any wireless network segments been \ndefined? \n□ \n□ \nHas the network accessibility of each device been defined? \n□ \n□ \nHave the network security polices needed to restrict undesired \nnetwork traffic been defined? \n□ \n□ \nIs there a documented policy in place that describes how these \nnetwork security policies will be updated? \n \n \nValidating Network Design \nOnce the network's proposed design (if the network is yet to be built) or implemented \ndesign (if the assessment is occurring after the network has been built) has been accurately \ndocumented, it is possible to review the design to ascertain how prone the design is to a \nsecurity breach. Often the goals of security run counter to the goals of easy maintenance. \n(A network that is easy for a LAN administrator to administer may also be easy for an \nintruder to navigate.) In some cases, stricter security may improve network performance; in \nothers, it may reduce the capacity of the network. For these reasons, it's unlikely that a \nnetwork design can be categorized simply as right or wrong (blatant errors aside); instead, \nit can be said to have been optimized for one of the following attributes: performance, \ncapacity, security, availability, robustness/fault tolerance, scalability, maintainability, cost, \nor, more likely, a balancing act among all of these attributes. Therefore, before starting any \ndesign review, the respective priorities of each of these design attributes should be \ndetermined and approved by the network's owner. \n" }, { "page_number": 57, "text": " \n50 \nNetwork Design Reviews \nApplication developers commonly get together to review application code, but much less \ncommonly does this technique actually get applied to network designs. Perhaps this \nhappens because the number of people within an organization qualified to conduct a \nnetwork review is smaller than the number able to review an application. Maybe it's \nbecause the prerequisite of documenting and circulating the item to be discussed is much \neasier when the object of the review is a standalone application program rather than a (as \nof yet undocumented) network design. Or maybe it's simply a question of the organization's \nculture: \"We've never done one before, so why start now?\" Whatever the reason for not \ndoing them, reviews have been found by those organizations that do perform them to be \none of the most cost-effective methods of identifying defects. A network design review \nshould therefore always be considered for incorporation into a network's construction or, if \nalready built, its assessment. \nThe first step in conducting a network design review is to identify the potential participants. \nObviously, the LAN administrator, the network designer, and a representative of the \nsecurity-testing team should always be included, if possible. Other candidates include the \nnetwork's owner, internal or external peers of the LAN administrator, network security \nconsultants, and end users. (At times, a knowledgeable end user can assist with giving the \nuser's perspective on network design priorities as they are the ones that will primarily be \nusing the network.) 
\nOnce the participants have been identified, they should be sent a copy of the network \ntopology and device inventory in advance of the review plus any networking requirements \nthat have been explicitly specified. (It goes without saying that these review packages \nshould be treated with the same confidentiality as the master network documentation \nbecause an intruder would find these copies as useful as the originals.) Once the \nparticipants have had a chance to review the network design, a meeting can be scheduled \nfor everybody to discuss their findings. Oppenheimer (1999) and Dimarzio (2001) provide \ninformation on network design concepts and implementations. \nNetwork Design Inspections \nAn inspection differs from a review in that an inspection compares the network design \nagainst a predefined checklist. The checklist could be based on anything from an industry \nstandard defined by an external organization to a homegrown set of best-practice \nguidelines. Table 3.4 lists some questions that may be used as part of a network design \ninspection when evaluating the network from a security perspective. \n \nTable 3.4: Network Design Security Inspection Checklist \nYES \nNO \nDESCRIPTION \n□ \n□ \nIs the number of network segments appropriate? For example, \nshould a network be segmented or an existing network segment \nfurther divided? or should one or more segments be merged? \n□ \n□ \nIs each network device connected to the most appropriate network \nsegment(s)? \n□ \n□ \nAre the most appropriate types of equipment being used to connect \nthe devices on a network segment together, such as switches, hubs, \ndirect connections, and so on? \n" }, { "page_number": 58, "text": " \n51 \nTable 3.4: Network Design Security Inspection Checklist \nYES \nNO \nDESCRIPTION \n□ \n□ \nAre the most appropriate types of equipment being used to connect \ndifferent segments together such as bridges, routers, gateways, and \nso on? \n□ \n□ \nIs each connection between the network segments absolutely \nnecessary? \n□ \n□ \nDoes each network device have an appropriate number of network \nconnections (network interface cards [NICs] and IP addresses)? \n□ \n□ \nHas each device been made accessible/visible to only the \nappropriate network segments? \n□ \n□ \nWill the network security policies that have been defined ensure that \nonly the appropriate devices are accessible? \n \n \nVerifying Device Inventory \nOnce the network design has been reviewed and any changes have been agreed upon and \nimplemented, the testing team can use the revised device inventory to verify that any \nchanges that were identified as part of the network review process have been implemented \nby the networking team correctly. \nPhysical Location \nFor small networks, the task of confirming the precise physical location of each network \ndevice inventory may be as simple as going into the server room and counting two boxes. \nFor larger, more complex network infrastructures, this process is not as straightforward and \nthe answers are not as obvious. For instance, the device inventory may have specified that \nthe router that connects an organization's Web site to its ISP be housed in the secure \nserver room, but in reality this device is located in the telephone switching room and \nprotected by a door that is rarely locked. 
\nEven if a room's head count matches the number of devices that were expected to be found \nat this location (and the OEM serial numbers or internal inventory tracking numbers are not \navailable), it is not guaranteed that two or more entries in the device inventory were not \ntransposed. For instance, according to the device inventory, Web server 1 (with a hostname \nof web1.corp) is supposed to be located at the downtown facility, whereas Web server 5 \n(with a hostname of web5.corp) is at the midtown facility. However, in reality, web1 is at the \nmidtown facility and web5 is at the downtown facility. \nVerifying that devices that have a built-in user interface (such as a general-purpose server) \nare physically located where they are expected to be can be as simple as logging into each \ndevice in a room and confirming that its hostname matches the one specified in the device \ninventory. For example, on a Windows-based machine, this could be done via the control \npanel, or on a Unix system, via the hostname command. For devices such as printers and \nnetwork routers that don't necessarily have built-in user interfaces, their true identity will \nneed to be ascertained by probing them from a more user-friendly device directly attached \nto them. \nThe physical inventory is intended to do two things: confirm that all of the devices listed in \nthe device inventory are where they are supposed to be and identify any extra devices \n" }, { "page_number": 59, "text": " \n52 \nthat are currently residing in a restricted area such as a server room. For instance, an old \npowered-down server residing in the back corner of a server room still poses a security risk. \nIt should be either added to the device inventory (and hence added to the scope of the \ntesting effort) or removed from the secure area, because an intruder who was able to gain \nphysical access to such a device could easily load a set of hacking tools onto the machine \nand temporarily add it to the network via a spare port on a nearby network device. \n \nSTICKY LABELS \nAdding an easily viewable sticky label to the outside of each device, indicating its inventory \nidentifier, should help speed up the process of confirming the physical location of each \nnetwork device, should this task need to be repeated in the near future. \nIf sticky labels are used, care should be taken to ensure that the selected identifier does not \nprovide a potential attacker with any useful information. For example, using an \norganization's internal inventory tracking number (possibly in the form of a barcode) would \nbe more secure than displaying a device's IP address(es) in plain view. \nUnauthorized Devices \nAside from physically walking around looking for devices that should not be connected to \nthe network, another approach to discovering unwanted network devices is to perform an IP \naddress sweep on each network segment included in the security-testing effort's scope. \nThe network protocol of choice for conducting an IP address sweep is the Internet Control \nMessage Protocol (ICMP). (Appendix A provides more details on network protocols \ncommonly used by Web applications.) ICMP is also known as ping; hence, the term ping \nsweep is often used to describe the activity of identifying all the IP addresses active on a \ngiven network segment. 
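For teams that prefer to script this step rather than rely on one of the packaged tools discussed below, a basic sweep can be approximated by driving the operating system's own ping command from a short program. The following Python sketch is illustrative only: it assumes a hypothetical 192.0.2.0/24 segment (the 123.456.789.x addresses used elsewhere in this chapter are placeholders), and, like any ICMP-based sweep, it will miss devices that have been configured not to answer ping requests.

    # Minimal ICMP (ping) sweep sketch: calls the operating system's ping command
    # for every host address on a hypothetical /24 segment and reports the
    # addresses that answer. The segment shown is a placeholder.
    import platform
    import subprocess

    SEGMENT = "192.0.2"      # substitute the segment under test
    COUNT_FLAG = "-n" if platform.system() == "Windows" else "-c"

    def responds_to_ping(ip_address: str) -> bool:
        """Return True if the address answers a single ping request."""
        try:
            result = subprocess.run(
                ["ping", COUNT_FLAG, "1", ip_address],
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
                timeout=5,
            )
        except subprocess.TimeoutExpired:
            return False
        return result.returncode == 0

    if __name__ == "__main__":
        live_hosts = [f"{SEGMENT}.{host}" for host in range(1, 255)
                      if responds_to_ping(f"{SEGMENT}.{host}")]
        print(f"{len(live_hosts)} device(s) answered on {SEGMENT}.0/24:")
        for ip_address in live_hosts:
            print(" ", ip_address)

Sweeping one address at a time keeps the sketch simple but slow; the packaged tools listed later in this section parallelize the same idea. The single-address exchange that such a script repeats for each candidate address is shown next.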
In the following Windows command-line example, the IP address \n123.456.789.123 successfully acknowledges the ping request four times, taking on average \n130 milliseconds: \n C:\\>ping 123.456.789.123 \n \n Pinging 123.456.789.123 with 32 bytes of data: \n \n Reply from 123.456.789.123: bytes=32 time=97ms TTL=110 \n Reply from 123.456.789.123: bytes=32 time=82ms TTL=110 \n Reply from 123.456.789.123: bytes=32 time=151ms TTL=110 \n Reply from 123.456.789.123: bytes=32 time=193ms TTL=110 \n \n Ping statistics for 123.456.789.123: \n Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), \n Approximate round trip times in milliseconds: \n Minimum = 82ms, Maximum = 193ms, Average = 130ms \nUnfortunately (from an IP address sweeping perspective), some LAN administrators may \nhave configured one or more of the network devices not to respond to an ICMP (ping) \nprotocol request. If this is the case, it may still be possible to conduct an IP address sweep \nusing another network protocol such as TCP or the User Datagram Protocol (UDP). \n" }, { "page_number": 60, "text": " \n53 \n \nIP ADDRESS SWEEP \nOne way to determine what devices are active on the network is to direct a \"Hello is anyone \nthere?\" message at every IP address that might be used by a device connected to the \nnetwork. A \"Yes, I'm here\" reply would indicate that the IP address is being used by a \ndevice. Since any active device needs an IP address to communicate to other network \ndevices (a few passive devices may not use an IP address), the sum total of positive replies \nshould comprise all of the powered-up devices connected to the network segment being \nswept. \nThese sweeps may prove to be quite time consuming if the entire range of potential IP \naddresses needs to be scanned manually. Fortunately, numerous IP-address-sweeping \ntools exist that can be used to automate this often tedious task (see Table 3.5). Klevinsky \n(2002) and Scambray (2001) both provide more information on how to conduct a ping \nsweep. \n \nTable 3.5: Sample List of IP-Address-Sweeping Tools and Services \nNAME \nASSOCIATED WEB SITE \nFping \nwww.deter.com \nHping \nwww.kyuzz.org/antirez \nIcmpenum & Pinger \nwww.nmrc.org \nNetScanTools \nwww.nwpsw.com \nNmap \nwww.insecure.org \nNmapNT \nwww.eeye.com \nPing \nwww.procheckup.com and www.trulan.com \nPing Sweep/SolarWinds \nwww.solarwinds.net \nWS_Ping Pro Pack \nwww.ipswitch.com \nAn IP address sweep is often the first indication of an intruder sniffing around, as an \nexternal intruder typically does not know what IP addresses are valid and active, and is \ntherefore often forced to grope in the dark hoping to illuminate (or enumerate) IP addresses \nthat the LAN administrator has not restricted access to. The testing team should therefore \nmake sure that it informs any interested parties before it conducts their (often blatant) IP \naddress sweeps, so as not to be confused with a real attack, especially if an organization \nhas an intruder detection system that is sensitive to this kind of reconnaissance work. \nNetwork Addresses \nSeveral different techniques can be used to verify that the networking team has assigned \neach network device the network addresses specified by the device inventory. In addition, \nthese techniques can also be used to confirm that no unauthorized network addresses have \nbeen assigned to legitimate (or unauthorize) devices. 
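One lightweight spot check, sketched below in Python, is simply to resolve each hostname recorded in the device inventory and compare the answer with the IP address the inventory says that device should be using; the hostnames and addresses shown are placeholders. A mismatch does not prove that anything malicious has happened, but it does indicate that the inventory, the name service, or the device itself deserves a closer look.

    # Minimal sketch: resolve each inventoried hostname and flag any entry whose
    # answer differs from the IP address recorded in the device inventory.
    # The hostnames and addresses below are placeholders.
    import socket

    EXPECTED = {
        "web1.dmz.miami": "192.0.2.131",
        "ftp1.dmz.miami": "192.0.2.133",
    }

    def check_addresses(expected: dict) -> None:
        for hostname, expected_ip in expected.items():
            try:
                _, _, resolved_ips = socket.gethostbyname_ex(hostname)
            except socket.gaierror as error:
                print(f"{hostname}: could not be resolved ({error})")
                continue
            if expected_ip in resolved_ips:
                print(f"{hostname}: OK ({expected_ip})")
            else:
                print(f"{hostname}: MISMATCH - inventory says {expected_ip}, "
                      f"name service returned {resolved_ips}")

    if __name__ == "__main__":
        check_addresses(EXPECTED)

The reverse check (socket.gethostbyaddr) can be used in the same way to confirm that each inventoried IP address still maps back to the expected hostname.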
Appendix A provides additional \ninformation on network addresses and on some of the network-addressing scenarios that a \ntesting team may encounter (and that could possibly cause them confusion) while trying to \nverify or build an inventory of network addresses used by a Web site. \n" }, { "page_number": 61, "text": " \n54 \nCommercial Tools \nThe organization may have already invested in a commercial network-auditing or \nmanagement tool (such as those listed in Table 3.6) that has the capability to produce a \nreport documenting all the network addresses that each network device currently has \nassigned. Because some of these tools may require a software component to be installed \non each network device, some of these tools may not be particularly capable of detecting \ndevices that are not supposed to be attached to the network. Care should also be taken to \nmake sure than when these tools are used, all the devices that are to be audited are \npowered up, as powered-down machines could easily be omitted. \n \n \nTable 3.6: Sample List of Network-Auditing Tools \nNAME \nASSOCIATED WEB SITE \nDiscovery \nwww.centennial.co.uk \nLANauditor \nwww.lanauditor.com \nLan-Inspector \nwww.vislogic.org \nNetwork Software Audit \nwww.mfxr.com \nOpenManage \nwww.dell.com \nPC Audit Plus \nwww.eurotek.co.uk \nSystems Management Server (SMS) \nwww.microsoft.com \nTivoli \nwww.ibm.com \nToptools and OpenView \nwww.hp.com \nTrackBird \nwww.trackbird.com \nUnicenter \nwww.ca.com \nZAC Suite \nwww.nai.com/magicsolutions.com \nDomain Name System (DNS) Zone Transfers \nAn extremely convenient way of obtaining information on all the network addresses used by \na network is to request the common device used by the network to resolve network address \ntranslations to transfer en masse these details to the device making the request. A request \nthat can be accomplished using a technique called a domain name system (DNS) zone \ntransfer. DNS transfers have legitimate uses, such as when a LAN administrator is setting \nup a new LAN at a branch location and does not want to manually reenter all the network \naddresses used by the corporate network. Unfortunately, this capability is open to abuse \nand if it is made available locally, it may still be blocked by any network-filtering device such \nas a perimeter firewall. \nA DNS zone transfer can either be initiated from a built-in operating system command such \nas nslookup or via a tool such as the ones listed in Table 3.7. Scambray (2001) and \nSkoudis (2001) both provide more information on how to attempt a DNS zone transfer. \n \n" }, { "page_number": 62, "text": " \n55 \nTable 3.7: Sample List of DNS Zone Transfer Tools and Services \nNAME \nASSOCIATED WEB SITE \nDig \nftp.cerias.purdue.edu \nDNS Audit/Solarwinds \nsolarwinds.net \nDnscan \nftp.technotronic.com \nDnswalk \nvisi.com/~barr \nDomtools \ndomtools.com \nHost \nftp.uu.net/networking/ip/dns \nSam Spade \nsamspade.org \nManual \nIf an automated approach cannot be used because of powered-down machines or \nsuspicions that stealthy hidden devices have been connected to the network, then while \nconfirming the physical location of each network device, the testing team may also want to \nmanually confirm the network addresses assigned to each device. For Windows- or Unix-\nbased devices, the network addresses can be determined using one or more of the \ncommands (such as those listed in Table 3.8) that are built into the operating system. 
\nNetwork devices that do not offer built-in commands to support these kind of inquires (such \nas a network printer) may require probing from a more user-friendly device directly \nconnected to it. \n \nTable 3.8: Sample List of Built-In Operating System Commands \nOPERATING SYSTEM \nCOMMANDS \nUnix \nhostname, ifconfig, or nslookup \nWindows \narp, ipconfig, net, nslookup, route, winipcfg, or wntipcfg \nNote that not all commands have been implemented on every version of the \noperating system. \nTable 3.9 summarizes the checks that can be used to verify that each device \nimplementation matches its corresponding device inventory entry. \n \nTable 3.9: Verifying Network Device Inventory Implementation Checklist \nYES \nNO \nDESCRIPTION \n□ \n□ \nAre all of the devices listed in the device inventory physically located \nwhere they should be? \n□ \n□ \nHave unauthorized devices been checked for? \n□ \n□ \nHave all of the devices listed in the device inventory been assigned \ntheir appropriate network addresses? \n \n" }, { "page_number": 63, "text": " \n56 \nVerifying Network Topology \nOnce the testing team has verified that each individual device has been configured \ncorrectly, the next step is to confirm that the physical connections between these devices \nhave been implemented exactly as specified and that any network-filtering rules have been \napplied correctly. \nNetwork Connections \nFor most small networks, a visual inspection may be all that is needed to confirm that each \ndevice has been physically connected to its network peers. For larger, more complicated \nand/or dispersed networks, verifying that all of the network connections have been \nimplemented correctly and no unneeded connections established is probably most \nproductively done electronically. \nThe network's topological map (or matrix) can be manually verified by logging into each \ndevice on the network and using built-in operating system commands such as tracert \n(Windows) or traceroute (Unix). These commands show the path taken by an ICMP request \nas it traverses the network (hopping from device to device) to its ultimate destination. In the \nfollowing Windows command-line example, it appears that the initiating device (IP address \n123.456.789.123) is directly connected to the device with IP address 123.456.789.124 and \nindirectly connected to the target of the request (IP address 123.456.789.125). \n C::>tracert web1.tampa \n Tracing route to web1.tampa [123.456.789.125] \n over a maximum of 30 hops: \n \n 1 69 ms 27 ms 14 ms bigboy.tampa \n[123.456.789.123] \n 2 28 ms <10 ms 14 ms 123.456.789.124 \n 3 41 ms 27 ms 14 ms web1.tampa \n[123.456.789.125] \n \n Trace complete. \nTable 3.10 lists some tools and services that provide more advanced and user-friendly \nversions of these trace route commands. In addition, some of the network auditng tools \nlisted in Table 3.6 provide features for constructing a topological map of the network they \nare managing. 
\n \nTable 3.10: Sample List of Trace Route Tools and Services \nNAME \nASSOCIATED WEB SITE \nCheops \nwww.marko.net \nNeoTrace \nwww.mcafee.com/neoworx.com \nQcheck \nwww.netiq.com \nSolarWinds \nwww.solarwinds.net \nTrace \nwww.network-tools.com \n" }, { "page_number": 64, "text": " \n57 \nTable 3.10: Sample List of Trace Route Tools and Services \nNAME \nASSOCIATED WEB SITE \nTraceRoute \nwww.procheckup.com \nTracert \nwww.trulan.com \nTracerX \nwww.packetfactory.net \nVisualRoute \nwww.visualroute.com \nDevice Accessibility \nContrary to popular belief, once a network-traffic-filtering device such as a firewall is \nconnected to a network, it will not immediately start protecting a network; rather, the filtering \ndevice must first be configured to permit only the network traffic to pass through that is \nappropriate for the specific network it has just been attached to. Although the \nmanufacturer's default configuration may be a good starting point, relying on default \nsettings is a risky business. For instance, default outgoing network traffic policies are often \ntoo liberal, perhaps due to the mindset of an external intruder who does not consider the \npossibility of an attack being initiated from within an organization or that of an external \nintruder wanting to send confidential information out of an organization. \nTherefore, each traffic-filtering device should be checked to make sure that it has been \nconfigured according to the filtering rules defined by the network's security policies and that \nonly the network devices that should be visible to devices on other network segments are \nactually accessible. These network security policies are often implemented as a series of \nrules in a firewall's access control list (ACL). These rules can either be manually inspected \nvia a peer-level review and/or checked by executing a series of tests designed to confirm \nthat each rule has indeed been implemented correctly. \nA filtering device can be tested by using a device directly connected to the outward-facing \nside of a network-filtering device. The testing should try to communicate to each of the \ndevices located on the network segment(s) that the filtering device is intended to protect. \nAlthough many organizations may test their network-filtering defenses by trying to break \ninto a network, fewer organizations run tests designed to make sure that unauthorized \nnetwork traffic cannot break out of their network. The testing team should therefore \nconsider reversing the testing situation and attempt to communicate to devices located in \nthe outside world from each device located on the network segment(s) being protected by \nthe filtering device. Allen (2001) and Nguyen (2000) both provide additional information on \nhow to test a firewall. \nTesting Network-Filtering Devices \nAlthough in many cases filtering implementations are too lax, in others a filter may be too \nstrict, which restricts legitimate traffic. Therefore, tests should also be considered to make \nsure that all approved traffic can pass unfettered by the filter. \nIf multiple filters are to be used—such as a DMZ configuration that uses two firewalls (a \nscenario expended upon in Appendix A)—each filter should be tested to ensure that it has \nbeen correctly configured. 
This procedure is recommended as it is unwise to only check a \nnetwork's perimeter firewall and assume that just because it is configured correctly (and \nhence blocks all inappropriate traffic that it sees), every other filter is also configured \ncorrectly. This is particularly true when internal firewalls that block communication between \ntwo internal network segments are considered. The most restrictive perimeter firewall does \n" }, { "page_number": 65, "text": " \n58 \nnothing to prohibit a network intrusion initiated from within an organization (which is a more \ncommon scenario than an externally launched attack). \nIf some of the filter's rules are dependent not only on the destination address of the network \ntraffic, but also on the source address, then in addition to requesting access to permitted \nand restricted devices, it may also be necessary to vary (spoof) the source address in order \nto create authorized and unauthorized network traffic for both inbound and outbound tests. \nFirewalls often have the capability to log inbound and/or outbound requests. This feature \ncan be useful if evidence is needed to prosecute an offender or the network security team is \ninterested in receiving an early warning that someone is attempting to gain unauthorized \nentry. If logging is enabled but the logs are left unmonitored, then aside from slowing down \nthe firewall (another case of a performance optimization conflicting with a security \nconsideration), the logs may grow to a point where the firewall's functional integrity is \ncompromised. Endurance tests should therefore be considered to make sure that any \nlogging activity does not interfere with the firewall's filtering capabilities over time. \n \nSPOOFING \nSpoofing refers to the technique of changing the originating network address of a network \nmessage, typically from an untrusted address to one that is trusted by a firewall (such as \nthe address of the firewall itself). Of course, one of the side effects of changing the \noriginating address is that the target machine will now reply to the spoofed address and not \nthe original address. Although it might be possible for an intruder to alter a network's \nconfiguration to set up bidirectional communication (or source routing), spoofing will \ntypically result in the intruder only having unidirectional communication (or blind spoofing). \nUnfortunately, unidirectional communication is all an intruder needs if he or she is only \ninterested in executing system commands (such as creating a new user account) and is not \ninterested in (or does not need to be) receiving any confirmation responses. \nSome filtering devices (such as proxy servers) have a harder time deciding what should \nand should not be filtered as the load placed on them increases. At high load levels, the \ndevice may be so stressed that it starts to miss data packets that it should be blocking. \nRunning stress tests against the filtering device should be considered to ascertain whether \nor not it exhibits this behavior. If it does, consider inserting a network load governor to \nensure that such a heavy load will not be placed on the susceptible device in a production \nenvironment. \nA firewall only works if it's enabled. 
Forgetting to change or disable any default user IDs and \npasswords or removing any remote login capability that might have been enabled by the \nmanufacturer may allow an intruder to disable the firewall or selectively remove a pesky \nfiltering rule that is preventing him or her from accessing the network behind the firewall. \nTherefore, checks should be considered to make sure that neither of these potential \noversights makes it into production (see Table 3.11). \n \nTable 3.11: Network Topology Verification Checklist \nYES \nNO \nDESCRIPTION \n□ \n□ \nDoes the implemented network topology match the topology specified \nby the approved network topology design? \n□ \n□ \nHave default configuration settings for each network-traffic-filtering \ndevice been reviewed and, if necessary, changed (for example, \n" }, { "page_number": 66, "text": " \n59 \nTable 3.11: Network Topology Verification Checklist \nYES \nNO \nDESCRIPTION \nassigning new and different user IDs and passwords)? \n□ \n□ \nHave all the inbound network-traffic-filtering rules been implemented \ncorrectly on every filtering device? \n□ \n□ \nHave all the outbound network-traffic-filtering rules been implemented \ncorrectly on every filtering device? \n□ \n□ \nDo all of the filtering devices still work correctly when exposed to \nheavy network loads? \n□ \n□ \nIf network-traffic-filtering logs are being used, are the logs being \nmonitored for signs of an intruder at work, lack of free disk space, or \nother noteworthy events? \n \nSupplemental Network Security \nIn addition to the basic security measures described in the preceding sections, some \norganizations implement additional network security measures to make it harder for any \nattacker to compromise the security of the network. Unfortunately, these measures come at \na price, typically one of additional network complexity, which in turn means additional \nnetwork administration—an overhead that may not be justified for every network. However, \nif such additional measures are deemed desirable, then the security-testing team should \nconsider running tests to check that these extra precautions have indeed been \nimplemented correctly. \nNetwork Address Corruption \nTo facilitate more compatible networks, most network protocols utilize several different \nnetwork addresses (for instance, the Web sites typically use three addresses: a \nhost/domain name, an IP address, and a MAC address). Each time a data packet is passed \nfrom one network device to another, the sending device typically must convert an address \nfrom one address format to another (for example, the domain name wiley.com must first be \nconverted to the IP address 123.456.789.123 before the data can be passed across the \nInternet). Ordinarily, this translation process occurs without incident, each device \nremembering (caching) the translations that it repeatedly has to perform and occasionally \nrefreshing this information or requesting a new network address mapping for a network \naddress it has not had to translate before (or recently) from a network controller. \nUnfortunately, if intruders are able to gain access to one or more devices on a network, \nthey may be able to corrupt these mappings (a technique often referred to as spoofing or \npoisoning). They may misdirect network traffic to alternate devices (often a device that is \nbeing used by an intruder to eavesdrop on the misdirected network traffic). 
To reduce the \npossibility of this form of subversion, some LAN administrators permanently set critical \nnetwork address mappings on some devices (such as the network address of the Web \nserver on a firewall), making these network address translations static. Permanent (static) \nentries are much less prone to manipulation than entries that are dynamically resolved (and \nmay even improve network performance ever so slightly, as fewer translation lookups need \nto be performed). However, manually setting network address mappings can be time \nconsuming and is therefore not typically implemented for every probable translation. \n" }, { "page_number": 67, "text": " \n60 \nHostname-to-IP-Address Corruption \nA device needing to resolve a hostname-to-IP-address translation typically calls a local \nDNS server on an as-needed basis (dynamically). To make sure that erroneous or \nunauthorized entries are not present, DNS mappings can be checked using a built-in \noperating system command such as nslookup or using a tool such as the ones listed in \nTable 3.7 . Alternatively, a LAN administrator may have hardcoded some critical DNS \nmappings using a device's hosts file (note that this file has no file-type extension), thereby \nremoving the need for the device to use a DNS server and mitigating the possibility of this \nlookup being corrupted (improving network performance ever so slightly). The following is a \nsample layout of a hosts file that might be found on a Windows-based device: \n # This is a HOSTS file used by Microsoft TCP/IP for Windows. \n # This file contains the mappings of IP addresses to host names. \nEach \n # entry should be kept on an individual line. The IP address \nshould \n # be placed in the first column followed by the corresponding \nhost name. \n # The IP address and the host name should be separated by at \nleast one \n # space. \n # Additionally, comments (such as these) may be inserted on \nindividual \n # lines or following the machine name denoted by a '#' symbol. \n \n 127.0.0.1 localhost \n \n 123.456.789.123 wiley.com \nThe static hostname-to-IP-address mappings on each device can be tested by either a \nvisual inspection of the hosts file in which the static mappings contained in this file can be \nviewed (or edited) using a simple text-based editor such as Notepad (Windows) or vi (Unix), \nor by using a simple networking utility that must resolve the mapping before it is able to \nperform its designated task. (For example, entering ping wiley.com from a command-line \nprompt requires the host device to convert wiley.com to an IP address before being able to \nping its intended target.) \nIP Address Forwarding Corruption \nInstead of corrupting a network address mapping, an intruder may attempt to misdirect \nnetwork traffic by modifying the routing tables used by network devices to forward network \ntraffic to their ultimate destination. To confirm that a network device such as a router has \nnot had its IP routing tables misconfigured or altered by an intruder, these tables can either \nbe manually inspected (typically using the utility originally used to configure these routing \ntables) or verified by sending network traffic destined for all probable network destinations \nvia the device being tested and then monitoring the IP address that the device actually \nforwards the test network traffic to. 
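The following is a minimal sketch (again in Python, an assumption of convenience) of how the hostname-to-IP-address checks described above could be automated. The host names and addresses are hypothetical placeholders; on most configurations the resolver consults the device's hosts file before falling back to a DNS server, so the same comparison covers both static and dynamically resolved mappings:

    import socket

    # Documented, approved mappings for critical hosts (placeholder values).
    EXPECTED_MAPPINGS = {
        "www.example.com": "192.0.2.80",
        "mail.example.com": "192.0.2.25",
    }

    for hostname, expected_ip in EXPECTED_MAPPINGS.items():
        try:
            resolved = socket.gethostbyname(hostname)
        except socket.gaierror as err:
            print(hostname, "lookup failed:", err)
            continue
        if resolved == expected_ip:
            print(hostname, "OK", resolved)
        else:
            print(hostname, "SUSPECT - resolved to", resolved, "expected", expected_ip)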
\n" }, { "page_number": 68, "text": " \n61 \nIP-Address-to-MAC-Address Corruption \nThe Address Resolution Protocol (ARP) is the protocol used to convert IP addresses to \nphysical network addresses. For Ethernet-based LANs, the physical network address is \nknown as a MAC address. \nAs with hostname-to-IP-address mappings, a LAN administrator may choose to selectively \nuse static mappings for IP-address-to-MAC-address mappings. If static ARP entries are \nsupposed to have been implemented, each device should be checked to see which ARP \nentries are static and which are dynamic. This can be done by using a tool such as \narpwatch (www.ee.lbl.gov) or manually visiting every device and using a built-in operating \nsystem command such as arp. In the following Windows command-line example, only one \nof the ARP entries has been set permanently (statically): \n C:\\>arp -a \n Interface: 123.456.789.123 on Interface 0x2 \n Internet Address Physical Address Type \n 123.456.789.124 aa.bb.cc.dd.ee.ff static \n 123.456.789.125 aa.bb.cc.dd.ee.aa dynamic \nTable 3.12 provides a checklist for verifying that static network addresses have been \nimplemented correctly. \n \nTable 3.12: Network Address Corruption Checklist \nYES \nNO \nDESCRIPTION \n□ \n□ \nIs there a documented policy in place that describes which devices \nare to use static network addresses and which specific addresses are \nto be statically defined? \n□ \n□ \nAre all of the devices that should be using static network addresses \nactually using static addressing? \nSecure LAN Communications \nUnless encrypted, any data transmitted across a LAN can potentially be eavesdropped \n(sniffed) by either installing a sniffer application onto a compromised device or attaching a \nsniffing appliance to the cabling that makes up the LAN (an exercise that is made a lot \neasier if the network uses wireless connections). \nTo protect against internal sniffing, sensitive data (such as application user IDs and \npasswords) transmitted between these internal devices should be encrypted and/or \ntransmitted only over physically secured cabling. For example, a direct connection between \ntwo servers locked behind a secure door would require an intruder to first compromise one \nof the servers before he or she could listen to any of the communications. \nTo check for sensitive data being transmitted across a LAN in cleartext (unencrypted), a \nnetwork- or host-based network-sniffing device (such as one of the tools listed in Table \n3.13) can be placed on different network segments and devices to sniff for insecure data \ntransmissions. Due to the large amount of background traffic (for example, ARP requests) \nthat typically occurs on larger LANs, the sniffing tool should be configured to filter out this \nnoise, making the analysis of the data communication much easier. 
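The following is a minimal sketch of such a host-based cleartext check, written in Python with the third-party Scapy library (an assumption; Scapy is not one of the tools listed in Table 3.13). While a tester submits a known marker value, such as a test password, through the application, the script inspects a limited number of packets and reports any that carry the marker unencrypted. The interface name and marker are placeholders, and packet capture normally requires administrative privileges:

    from scapy.all import sniff, Raw, IP   # pip install scapy

    MARKER = b"TestPassword123"   # value deliberately sent during the test session

    def check_packet(pkt):
        # Only look at packets that actually carry an application payload.
        if pkt.haslayer(Raw) and MARKER in bytes(pkt[Raw].load):
            source = pkt[IP].src if pkt.haslayer(IP) else "unknown"
            print("Cleartext marker observed from", source,
                  "- this traffic is not being encrypted")

    # Capture 500 TCP packets on the segment under test; the capture filter removes
    # much of the background noise (such as ARP requests) mentioned above.
    sniff(iface="eth0", filter="tcp", prn=check_packet, count=500)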
\n" }, { "page_number": 69, "text": " \n62 \n \nTable 3.13: Sample List of Network-Sniffing Tools \nNAME \nASSOCIATED WEB SITE \nAgilent Advisor \nwww.onenetworks.comms.agilent.com \nDragonware (Carnivore) \nwww.fbi.gov \nCommView \nwww.tamos.com \nDistinct Network Monitor \nwww.distinct.com \nEsniff/Linsniff/Solsniff \nwww.rootshell.com \nEthereal \nwww.zing.org \nEthertest \nwww.fte.com \nIris \nwww.eeye.com \nNetBoy \nwww.ndgssoftware.com \nNetMon & Windows Network \nMonitor \nwww.microsoft.com \nSniff'em \nwww.sniff-em.com \nSniffer \nwww.sniffer.com \nTCPDump \nwww.ee.lbl.gov and www.tcpdump.org \nWinDump \nwww.netgroup-serv.polito.it \nNote that some of the functional testing tools listed in Table 6.12 can also be used to \nsniff network traffic entering or leaving the device they are installed on. \nA more rigorous test would be to input selectively sniffed data into a decryption tool (such \nas the ones listed in Table 4.16) to ascertain whether or not the data was sufficiently \nencrypted and not easily decipherable. Table 3.14 is a sample checklist for testing the \nsafety of LAN network communications. \n \nTable 3.14: Secure LAN Communication Checklist \nYES \nNO \nDESCRIPTION \n□ \n□ \nIs there a documented policy in place that describes how sensitive \ndata should be transmitted within a LAN? \n□ \n□ \nWhen using a sniffer application on each network segment (or \ndevice), is sensitive data being transmitted or received by any service \nrunning on the network in cleartext or in a format that can be easily \ndeciphered? \n□ \n□ \nAre the physical cables and sockets used to connect each of the \ncomponents on the network protected from an inside intruder directly \nattaching a network-sniffing device to the network? \n" }, { "page_number": 70, "text": " \n63 \nWireless Segments \nIt is generally considered good practice to encrypt any nonpublic network traffic that may be \ntransmitted via wireless communications. However, due to the performance degradation \ncaused by using strong encryption and the possibility of a vulnerability existing within the \nencryption protocol itself, it would be prudent not to broadcast any wireless communication \nfurther than it absolutely needs to be. If wireless communications will be used anywhere on \nthe network under test, then the network's supporting documentation should specify the \nmaximum distance that these signals should be receivable. The larger the distance, the \nmore mobile-friendly the network will be, but the greater the risk that an eavesdropper may \nalso be able to listen to any communications. \nAlthough the wireless standard may specify certain distances that wireless devices should \nbe effective over, each individual implementation varies in the actual reception coverage. \nThe reasons why this coverage varies from network to network include the following: \nƒ \nTransmitter. The more power a transmitter devotes to broadcasting its signal, the \nfarther the signal is propagated. \nƒ \nReceiver. By using specialized (gain-enhancing and/or directional) antennas, a \nreceiving device can extend its effective range. \nƒ \nHeight. The higher the broadcasting (and receiving) device, the farther the signal \ncan travel. For example, a wireless router located on the third floor has a larger radius \nof coverage than one located in the basement. \nƒ \nBuilding composition. 
The construction materials and building design used to build \nthe facility where the broadcasting device is located will impede the signal's strength \nto varying degrees. For example, steel girders can create a dead zone in one \ndirection, while at the same time enhancing the signal in another direction. In addition, \na building's electrical wiring may inadvertently carrier a signal into other adjacent \nbuildings. \nƒ \nBackground noise. Electrical transmission pylons or other wireless networks \nlocated in the neighborhood generate background noise and thereby reduce the \neffective range of the broadcasting device. \nƒ \nWeather. Rain droplets on a facility's windows or moisture in the air can reduce the \neffective range of a broadcasting device. \nThe actual effective wireless range of wireless network segments should therefore be \nchecked to ensure that a particular wireless network implementation is not significantly \ngreater than the coverage called for by the network's design. \nDenial-of-Service (DoS) Attacks \nTechnically speaking, a denial-of-service (DoS) attack means the loss of any critical \nresource. Some examples of this attack include putting superglue into the server room's \ndoor lock, uploading to a Web server a Common Gateway Interface (CGI) script that's \ndesigned to run forever seeking the absolute value of π (slowing down the CPU), blocking \naccess to the Web site's credit-card service bureau (blocking new orders), or by creating \nhuge dummy data files (denying system log files free disk space, causing the system to \nhang). However, the most common DoS attack is an attempt to deny legitimate clients \naccess to a Web site by soaking up all the Web site's available network bandwidth or \nnetwork connections, typically by creating an inordinate number of phony Web site \n" }, { "page_number": 71, "text": " \n64 \nrequests. Kelvinsky (2002) provides an extensive review of some of the most common \ntechniques and tools used to launch DoS attacks. \nA variation of a DoS attack is a distributed denial-of-service (DDoS) attack. Unlike a DoS \nattack, which is originated from a single source, a DDoS attack is launched from multiple \nsources (although it may still be orchestrated from a single point). This enables the amount \nof network traffic focused at the target network to be many times greater. \nMachiavellian DoS Attacks \nAn attacker might choose to employ a DoS attack for other less obvious reasons. DoS \nattacks are therefore not always what they seem. Here are some examples: \nƒ \nAn attacker could launch a small-scale DoS attack, but instead of using a bogus \nsource network address, he or she could use the network address of an important \nrouter/server on the Web site that is being attacked. A Web site that has already been \nattacked in a similar fashion may have installed an automated defense mechanism \nthat blocks all communication from the source of a DoS attack. Of course, if the \nnetwork address is the Web site's upstream router, the Web site may inadvertently \ncut itself off from the rest of the world! \nƒ \nDepending upon how a Web site is configured, a large DoS attack might actually \ncause a device to pause or starve to death some (or all) of the background processes \nthat are supposed to monitor and/or block intruder attacks. 
For instance, under \nnormal loads, a network-based intrusion detection system (IDS) may be able to detect \nemails containing viruses, but at higher network loads, the IDS may be unable to \nmonitor a sufficient number of network data packets to correctly match the virus \nagainst its virus signature database, enabling a virus to slip through unnoticed. \nƒ \nAn intruder may even use a DoS attack as a diversion measure, launching an \nobvious attack against one entry point while quietly (and hopefully unnoticed) \nattacking another entry point. Even if it is detected by an IDS, the IDS's \nwarnings/alarms may be ignored or lost due to the chaos being caused by the blatant \nDoS attack that is occurring simultaneously. \nƒ \nAn intruder that has successfully accessed a server may need to reboot the server \nbefore he or she can gain higher privileges. One way to trick a LAN administrator into \nrebooting the server (which is what the intruder wants) is to launch a DoS attack \nagainst the compromised server. (Therefore, it always pays to check that a server's \nstartup procedures have not been altered before rebooting a server, especially when \nrecovering from a DoS attack.) \nDoS Attack Countermeasures \nUnfortunately, many organizations are completely unprepared for a DoS attack, relying on \nthe get-lucky defense strategy. It may be infeasible to design a network to withstand every \npossible form of DoS attack. However, because many forms of DoS attack do have \ncorresponding countermeasures that can be put in place to avoid or reduce the severity of a \nDoS attack, it may therefore make sense to ensure that a network and its critical services \nare able to withstand the most common DoS attacks that intruders are currently using. \nDoS Attack Detection \nA DoS countermeasure may only work if the DoS attack can actually be detected. Some \nattackers may try to disguise their initial onslaught (for instance, by using multiple source \nnetwork addresses) so that the DoS attack either goes completely unnoticed by the on-duty \nsecurity staff or the deployment of any countermeasure is delayed. \n \n" }, { "page_number": 72, "text": " \n65 \nEXAMPLE DOS COUNTERMEASURE \nAn example of a countermeasure that can be employed against an ICMP (ping) DoS attack \non a Web site is to have the Web site's ISP(s) throttle back the level of ICMP requests, \nreducing the amount of phony traffic that actually reaches the target Web server(s). Many \nhigh-end routers have throttling capabilities built into them. Therefore, an organization may \nwant to check with its ISP to see if the provider has this capability. If so, the organization \nshould find out what the procedures are for deploying this feature should a Web site \nbecome the subject of an ICMP DoS attack. \nTo help detect unusual rises in system utilizations (which are often the first observable \nsigns of a DoS attack), some organizations create a resource utilization baseline during a \nperiod of normal activity. Significant deviations from this norm (baseline) can be used to \nalert the system's support staff that a DoS attack may be occurring. \nWhatever DoS attack detection mechanisms have been deployed, they should be tested to \nensure that they are effective and that the on-duty security staff is promptly alerted when a \nDoS attack is initiated, especially a stealthy one. 
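The following is a minimal sketch of the baseline-deviation idea, written in Python with the third-party psutil library (an assumption rather than a tool named in this chapter). The baseline figures and alert threshold are placeholders that would be derived from measurements recorded during a period of normal activity:

    import time
    import psutil   # pip install psutil

    # Placeholder baseline captured during normal operation (mean and standard deviation).
    BASELINE = {
        "cpu_percent":     {"mean": 20.0,  "std": 8.0},
        "tcp_connections": {"mean": 150.0, "std": 40.0},
    }
    SIGMA_LIMIT = 3.0   # alert when a sample is more than 3 standard deviations high

    def sample():
        return {
            "cpu_percent": psutil.cpu_percent(interval=1),
            "tcp_connections": float(len(psutil.net_connections(kind="tcp"))),
        }

    while True:
        for metric, value in sample().items():
            base = BASELINE[metric]
            if value > base["mean"] + SIGMA_LIMIT * base["std"]:
                print("ALERT:", metric, "=", value,
                      "is well above the recorded baseline - possible DoS attack")
        time.sleep(30)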
\nDoS Attack Emulation \nAlthough small-scale DoS attacks can be mimicked by simply running a DoS program from \na single machine connected to the target network, larger-scale tests that seek to mimic a \nDDoS attack may need to utilize many machines and large amounts of network bandwidth, \nand may therefore prove to be quite time consuming and resource intensive to set up and \nrun. As an alternative to using many generic servers to generate a DDoS attack, hardware \nappliances such as those listed in Table 3.15 can be used to create huge volumes of \nnetwork traffic (more than tens of thousands of network connection requests per second \nand millions of concurrent network connections) and even come with attack modules (which \nare updateable) designed to emulate the most common DoS attacks. \n \nTable 3.15: Sample List of DoS Emulation Tools and Services \nNAME \nASSOCIATED WEB SITE \nFirewallStressor \nwww.antara.net \nExodus \nwww.exodus.com \nMercury Interactive \nwww.mercuryinteractive.com \nSmartBits \nwww.spirentcom.com \nWebAvalanche \nwww.caw.com \nRather than having to set up an expensive test environment for only a few short DDoS \nattack tests (that will hopefully not have to be repeatable), another option is to use an online \nservice. (Table 3.15 lists some sample vendors that offer this service.) For a relatively small \nfee, these online vendors use their own site to generate the huge volumes of network traffic \nthat typically characterize a DDoS attack and direct this network traffic over the Internet to \nthe target network—an approach that can be much more cost effective than an organization \ntrying to build its own large-scale load generators. \nIn addition, some of the traditional load-testing tools listed in Table 3.16 can also be utilized \nto simulate DoS attacks that necessitate creating large volumes of Web site requests. \n" }, { "page_number": 73, "text": " \n66 \n \nTable 3.16: Sample List of Traditional Load-Testing Tools \nNAME \nASSOCIATED WEB SITE \nAstra LoadTest and LoadRunner \nwww.mercuryinteractive.com \ne-Load \nwww.empirix.com \nOpenSTA \nwww.sourceforge.net \nPortent \nwww.loadtesting.com \nQALoad \nwww.compuware.com \nRemoteCog \nwww.fiveninesolutions.com \nSilkPerformer \nwww.segue.com \nTestStudio \nwww.rational.com \nVeloMeter \nwww.velometer.com \nWeb Application Stress Tool (WAST-\"Homer\") and \nWeb Capacity Analysis Tool (WCAT) \nwww.microsoft.com \nWebLoad \nwww.radview.com \nWeb Performance Trainer \nwww.webperfcenter.com \nWebSizr \nwww.technovations.com \nWebSpray \nwww.redhillnetworks.com \nTable 3.17 summarizes the checks that the testing team can perform to help evaluate how \nprepared an organization is against a DoS (or DDoS) attack. \n \nTable 3.17: DoS Attack Checklist \nYES \nNO \nDESCRIPTION \n□ \n□ \nHas a documented strategy been developed to defend against DoS \nattacks? \n□ \n□ \nIs there a documented inventory of the specific DoS attacks for which \ncountermeasures have been put in place? \n□ \n□ \nHave the procedures that the on-duty security staff should follow \nwhen the network is under a DoS attack been documented? \n□ \n□ \nWhen emulating each of the defended DoS attacks, are the attacks \ndetected by the on-duty security staff? \n□ \n□ \nDoes the on-duty security staff always follow the procedures \ndocumented in the DoS policy documentation? 
\n□ \n□ \nIs the degradation suffered by the network and/or the services \nrunning on the network still acceptable while the network is \nexperiencing \na \nDoS \nattack \nthat \nhas \nan \nimplemented \ncountermeasure? \n \n" }, { "page_number": 74, "text": " \n67 \nSummary \nThe network infrastructure that each network application resides on represents the \nelectronic foundation of the application. No matter how good an application's security \nprocedures are, the application can be undermined by vulnerabilities in the underlying \nnetwork that the application depends on for its network connectivity. This chapter has \noutlined a series of steps and techniques (summarized in Figure 3.5) that a security-testing \nteam can follow (or customize to their unique situation) to first define the scope and then \nconduct a network security-testing effort. \n \nFigure 3.5: Network security-testing approach summary. \nOne final point is worth emphasizing: In order for the testing effort to be as comprehensive \nand systematic as possible, the security-testing team must be granted access to highly \nsensitive documentation such as a diagram depicting the network's topology. It goes \nwithout saying that the testing team should take every feasible precaution to prevent this \nsensitive information from being leaked to a potential attacker (external or internal) and that \nany test results (regardless of whether or not they demonstrate the existence of a security \nvulnerability) must be kept under lock and key and only distributed on a need-to-know \nbasis. Finding any of these artifacts could save an intruder a considerable amount of time \nand effort, and increase the possibility of a successful penetration. Additionally, the testing \nteam should be careful to clean up after themselves, making sure that once the testing is \ncomplete, any testing tools that could be utilized by an attacker are removed from all of the \ndevices they were installed upon. \n" }, { "page_number": 75, "text": " \n68 \nChapter 4: System Software Security \nOverview \nThis book uses the term system software to refer to the group of commercial and open-\nsource software products that are developed and distributed by an external organization. \nThese include operating systems, database management systems, Java 2 Platform \nEnterprise Edition (J2EE) implementations, file-sharing utilities, and communication tools. \nTable 4.1 lists some specific examples of such products. \n \nTable 4.1: Sample List of System Software Products \nNAME \nASSOCIATED WEB SITE \nApache \nwww.apache.org \nLinux \nwww.redhat.com \nNotes \nwww.lotus.com \nMS SQL Server \nwww.microsoft.com \npcAnywhere \nwww.symantec.com \nWebLogic \nwww.bea.com \nTypically, whatever Web application that an organization has deployed or will want to \ndeploy will depend upon a group of system software products. Before developing any \napplication software, an organization would be well advised to evaluate any system \nsoftware that the application is expected to utilize. Such an evaluation would ensure that \nthe planned system software and the specific installation configuration do not have any \nsignificant security issues. Determining security flaws or weaknesses early is important, as \ntrying to retrofit a set of patches or workarounds to mitigate these system software security \nvulnerabilities can cause significant reworking. 
For example, applications might have to be \nreconfigured, as the original ones were developed using different, often default, system \nsoftware configurations. Or perhaps, worse still, the applications would need to be ported to \na new platform, because the original platform was found to be inherently unsafe. \nThis chapter looks at the tests that should be considered to ensure that any system \nsoftware that is going to be deployed has been configured to remove or minimize any \nsecurity vulnerabilities associated with this group of software products and thereby provide \na firm foundation on which Web applications can be built. \n \nSecurity Certifications \nAlthough virtually every system software vendor will claim its product is secure, some \nproducts are designed to be more secure than others. For instance, some products \ndifferentiate the tasks that need to be performed by an administrator from those that are \ntypically only needed by a user of the system, thereby denying most users of the system \naccess to the more security-sensitive administrative functions. This is just one of the ways \nWindows NT/2000 is architected differently than Windows 9.x. \nWhen evaluating system software products for use on a Web site, an organization would \nideally want to review each proposed product's architecture to ensure that it has been \nsufficiently secure. Unfortunately, for all but the largest organizations (typically \n" }, { "page_number": 76, "text": " \n69 \ngovernments), such an undertaking is likely to be cost prohibitive. To mitigate this problem, \nthe security industry has developed a common criteria for evaluating the inherent security of \nsoftware products. The goal of the common criteria is to allow certified testing labs to \nindependently evaluate, or certify, software products against an industry-standard criteria, \nthereby allowing potential users of the software to determine the level of security a product \nprovides, without each user having to individually evaluate each tool. \n \nSECURITY CERTIFICATION HISTORY \nCirca 1985, the U.S. Department of Defense (www.defenselink.mil) defined seven levels of \nsystem software security, A1, B1, B2, B3, C1, C2, and D1 (A1 being the highest level of \nsecurity), for the purpose of providing a common set of guidelines for evaluating the \nsecurity of software products from different vendors. The exact criteria used to assign \nproducts to different levels were spelled out in a document commonly referred to as the \nOrange book. During the early 1990s several European governments jointly developed a \nEuropean equivalent of the Orange book. These guidelines were called the European \nInformation Technology Security Evaluation and Certification Scheme, or ITsec (see \nwww.cesg.gov.uk/assurance/iacs/itsec/). \nBoth sets of guidelines have been superseded by a new set of general concepts and \nprinciples developed under the auspices of the International Organization for \nStandardization (www.iso.ch). This new standard (ISO 15408) is being referred to as the \nCommon Criteria for Information Technology Security Evaluation, or Common Criteria (CC) \nfor short. \nAlthough comparatively few products have currently completed the evaluation process, the \ncommon criteria is becoming more widely recognized, which in turn should lead to more \nproducts being submitted for testing. 
Additional information about the common criteria and \nthe status of the products that have been or are in the process of being evaluated can be \nfound at www.commoncriteria.org. \n \nPatching \nThe early versions of system software products used to support Web sites often contained \nobscure security holes that could potentially be exploited by a knowledgeable attacker. \nThanks to an army of testers that knowingly (or unknowingly) tested each new version of \nsoftware before and after its release, the later releases of these products have become \nmuch more secure. Unfortunately, more is a relative term; many of these products still have \nwell-documented security vulnerabilities that, if not patched, could be exploited by attackers \nwho have done their homework. \nIf a security issue with a particular version of a system software product exists, typically the \nproduct's end-users can't do much about it until the developers of the product (vendor or \nopen-source collaborators/distributor) are able to develop a patch or workaround. \nFortunately, most high-profile system software vendors are particularly sensitive to any \npotential security hole that their software might contain and typically develop a fix (patch) or \nworkaround very quickly. \nSystem software patches are only useful if they are actually installed. Therefore, rather than \nactually testing the product itself for as-yet-undiscovered security holes, the security-testing \nteam would be much better advised to review the work of others to determine what known \nsecurity issues relate to the products used on the Web site under testing. A common, \nmanual approach to researching known security issues is to view entries on online bug-\ntracking forums or incident response centers such as those listed in Table 4.2. In addition, \n" }, { "page_number": 77, "text": " \n70 \nmany vendors also post up-to-date information on the status of known security defects \n(sometimes referred to by vendors as features) and what the appropriate fix or workaround \nis on their own Web site. \nTable 4.2: Web Sites of Bug- and Incident-Tracking Centers \nWEB SITE NAME \nWEB ADDRESS \nCERT® Coordination Center \nwww.cert.org \nComputer Incident Advisory Capability (CIAC) \nwww.ciac.org \nComputer Security Resource Center (CSRC) \nhttp://csrc.nist.gov \nCommon Vulnerabilities and Exposures (CVE) \nwww.cve.mitre.org \nFederal Computer Incident Response Center \n(FedCIRC) \nwww.fedcirc.gov \nInformation System Security \nwww.infosyssec.com \nInternet Security Systems™ (ISS) \nwww.iss.net \nNational \nInfrastructure \nProtection \nCenter \n(NIPC) \nwww.nipc.gov \nNTBugtraq \nwww.ntbugtraq.com \nPacket Storm \nhttp://packetstorm.decepticons.org \nSystem \nAdministration, \nNetworking, \nand \nSecurity (SANS) \nwww.sans.org \nSecurity Bugware \nwww.securitybugware.org \nSecurityFocus (bugtraq) \nwww.securityfocus.com \nSecurityTracker \nwww.securitytracker.com \nVmyths.com \nwww.vmyths.com \nWhitehats \nwww.whitehats.com \nWindows and .NET Magazine Network \nwww.ntsecurity.net \nIn addition, many vendors also post up-to-date information on the status of known security \ndefects (sometimes referred to by vendors as features) and what the appropriate fix or \nworkaround is on their Web site. \n \nOPEN-SOURCE VERSUS CLOSED-SOURCE DEBATE \nA debate still exists as to whether a proprietary (closed-source) product such as Windows is \nmore or less secure than an open-source product such as Linux or OpenBSD. 
Open-source \nadvocates claim that a product is much less likely to contain security holes when the source \ncode is tested and reviewed by hundreds of individuals from diverse backgrounds. \nHowever, proponents of the proprietary approach reason that an attacker is much less likely \nto find any security holes that might exist if they do not have access to the product's source \ncode. \n \nHOT FIXES, PATCHES, SERVICE PACKS, POINT RELEASES, AND BETA VERSIONS \n" }, { "page_number": 78, "text": " \n71 \nDifferent vendors use different terms to describe their software upgrades. Often these \nnames are used to infer that different degrees of regression testing have been performed \nprior to the upgrade being released. The situation isn't helped when the vendor offers such \ngeneric advice as \"this upgrade should only be installed if necessary.\" Therefore, before \ninstalling any upgrade, try to determine what level of regression testing the vendor has \nperformed. An organization should consider running its own tests to verify that upgrading \nthe latest version will not cause more problems than it fixes. For example, an upgrade fixes \na minor security hole, but it impacts an application's performance and functionality. \nInstead of manually researching all the security holes and nuances of a system software \nproduct, the security-testing team could utilize an automated security assessment tool or \nonline service. Such a tool or service can be used to probe an individual machine or group \nof machines to determine what known security issues are present and remain unpatched. \nTable 4.3 lists some of the tools available for this task. \n \nTable 4.3: Sample List of System Software Assessment Tools \nNAME \nASSOCIATED WEB SITE \nHFNetChk and Personal Security Advisor \nwww.microsoft.com \nHotfix Reporter \nwww.maximized.com \nInternet Scanner \nwww.iss.net \nNessus \nwww.nessus.org \nQuickinspector \nwww.shavlik.com \nSecurity Analyzer \nwww.netiq.com \nTitan \nwww.fish.com \nWhichever approach is used, the goal is typically not to find new system software defects, \nbut to ascertain what (if any) security patches or workarounds need to be implemented in \norder to mitigate existing problems. \nFor some organizations, installing patches every couple of weeks on every machine in the \norganization may consume an unacceptable amount of resources. For instance, a risk-\nadverse organization may want to run a full set of regression tests to ensure that the \nfunctionality of any existing application isn't altered by the workaround, or that the Web \nsite's performance isn't noticeably degraded by a new patch. In some instances, a patch \nmay even turn on features that have previously been disabled or removed, or it may alter \nexisting security settings. Security policies should therefore be reviewed to ensure that they \ndescribe under what circumstances a patch or workaround should be implemented and \nwhat regression tests should be performed to ensure that any newly installed patch has not \nunknowingly changed any security configuration setting or otherwise depreciated the \ncapabilities of the Web site. \nTable 4.4 lists a series of checks that could be utilized to evaluate how well security \npatches are being implemented. 
\n \nTable 4.4: System Software Patching Checklist \nYES \nNO \nDESCRIPTION \n" }, { "page_number": 79, "text": " \n72 \nTable 4.4: System Software Patching Checklist \nYES \nNO \nDESCRIPTION \n□ \n□ \nIs there a documented policy in place that describes under what \ncircumstances and how a security patch should be implemented? \n(This is especially important when multiple patches are to be applied, \nas the installation order may be critical.) \n□ \n□ \nHave all the known security issues for each system software product \nthat is or will be used by the Web site been researched and \ndocumented? (The research should include evaluating any \nconsequences of installing the patch.) \n□ \n□ \nHave all the security patches deemed necessary by the documented \npolicy been obtained from a legitimate source? (It's not unheard of for \na supposed security patch to actually contain a Trojan horse.) \n□ \n□ \nHave tests been designed that can demonstrate the existence of the \nsecurity hole(s) that needs to be patched? (This is necessary if \nconfirmation is needed that the security hole has indeed been fixed \nby the correct application of the patch.) \n□ \n□ \nHave all the security patches and workarounds deemed necessary by \nthe policy been implemented on every affected machine? \n□ \n□ \nIn the event an issue is discovered with a newly installed patch, is a \nprocess in place that would enable the patch to be rolled back \n(uninstalled)? \n□ \n□ \nIs the person(s) responsible for monitoring new security issues aware \nof his or her responsibility and does he or she have the resources to \naccomplish this task? \n \nHardening \nHardening is a term used to describe a series of software configuration customizations that \ntypically remove functionality and/or reduce privileges, thereby making it harder for an \nintruder to compromise the security of a system. Unfortunately, the default installation \noptions for many system software products are usually not selected based on security \nconsiderations, but rather on ease of use, thus necessitating hardening customizations. \nTherefore, in addition to checking for known security holes, it also makes sense to \nsimultaneously check for known features that, if left unaltered, could be exploited by an \nintruder. \n \nROCKING THE BOAT \nOne of the reasons that system administrators delay (or do not apply) system software patches \nis that they are afraid of \"rocking the boat.\" Because a system administrator may not \nhave time to thoroughly regression test a new patch, he or she probably fears that the new \npatch may destabilize the system. Holding off implementing the new patch until the next \nscheduled system upgrade (when the system will be rigorously tested) will allow an \norganization to find any unexpected consequences of installing the patch. Unfortunately, \nduring this period of time, the organization may be vulnerable to an attack through any of \nthe security holes that could have been filled by the uninstalled patch. \n" }, { "page_number": 80, "text": " \n73 \n \nLOCKING DOWN THE OPERATING SYSTEM \nRather than requiring a system administrator to manually harden an operating system, \na number of vendors now offer products such as those listed in Table 4.5 that attempt to provide \nan extra level of protection to the operating system. These products often work by disabling \nor locking down all administrative-level services, which can then only be accessed using a \nsecure password administered by the protecting product. 
\n \nTable 4.5: Sample List of Operating System Protection Tools \nNAME \nASSOCIATED WEB SITE \nBastille Linux \nwww.bastille-linux.org \nEnGarde \nwww.engardelinux.org \nIISLockdown \nwww.microsoft.com \nImmunix \nwww.immunix.org \nServerLock \nwww.watchguard.com \nTable 4.6 is a generic list of tests that can be used to form the basis of a system software-\nhardening checklist, while Allen (2001) outlines processes for hardening several different \nplatforms. \n \nTable 4.6: System Software Hardening Checklist \nYES \nNO \nDESCRIPTION \n□ \n□ \nHave vendor- and industry-recommended hardening customizations \nbeen researched and documented for each system software product \nthat is or will be used by the Web site? \n□ \n□ \nHave all the procedures used to harden each system software \nproduct been documented? \n□ \n□ \nHave all the documented system software hardening procedures \nbeen implemented on every affected machine? \n \nMasking \nThe more information an intruder can obtain about the brand, version, and installation \noptions of any system software product installed upon the Web site (such as what brand \nand version of operating system is being used by the Web server), the easier it will be for \nthe intruder to exploit any known security holes for this particular version of the product. \nFor instance, buffer overflow attacks (discussed in more detail in Chapter 6) are typically \noperating system and architecture specific (for example, a buffer overflow attack that works \non an NT/Alpha platform is unlikely to work on a NT/Intel or UNIX/Alpha platform). \nTherefore, to exploit this kind of attack, the operating system and hardware platform must \nfirst be deduced or guessed. Additionally, when designing new exploits, authors often need \nto recreate an equivalent system software and hardware architecture environment in order \nto compile and/or test their newly discovered exploit(s). \n" }, { "page_number": 81, "text": " \n74 \nGiven the usefulness of knowing this kind of information, it makes sense that an \norganization would want to minimize this knowledge. Unfortunately, many products give up \nthis kind of information all too easily. For instance, much of this information can be obtained \nvia hello (banner) or error messages that the product sends by default when somebody \ntries to initiate a connection with it. Intruders trying to deduce the brand and version of a \nproduct will often use a technique called banner grabbing to trick a machine into sending \ninformation that uniquely identifies the brand and version of the products being used by the \nWeb site. To reduce this information leakage, many organizations choose to mask their \nWeb sites, replacing these helpful default messages with legal warnings, blank or \nuninformative messages, or false banners that match the default response from a \ncompletely different brand of system software and therefore hopefully cause an intruder to \nwaste his time using an ineffective set of exploits. \nA security tester shouldn't have to rely on manual efforts (such as the ones illustrated in the \n\"Banner Grabbing\" sidebar). Rather, several tools now exist that will attempt to identify \n(fingerprint) a target by running a series of probes. Some even offer features designed to \nmake this activity less likely to be noticed by any intrusion-detection system (IDS) that might \nbe installed on the target Web site and tip off an organization that its Web site was being \nfingerprinted (also known as enumerated). 
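The following is a minimal sketch (in Python, an assumption of convenience) of how a testing team might automate the banner checks needed to confirm that this kind of masking has actually been applied. The hosts, ports, and give-away strings are hypothetical placeholders:

    import socket

    TARGETS = [("www.example.com", 25), ("www.example.com", 80)]
    GIVEAWAYS = [b"Microsoft", b"Apache", b"Netscape", b"IIS", b"Post.Office"]

    def grab_banner(host, port, timeout=5.0):
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            if port == 80:
                # HTTP servers normally stay silent until they receive a request.
                s.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
            try:
                return s.recv(1024)
            except socket.timeout:
                return b""

    for host, port in TARGETS:
        banner = grab_banner(host, port)
        leaks = [g.decode() for g in GIVEAWAYS if g in banner]
        if leaks:
            print(host, port, "still reveals", leaks)
        else:
            print(host, port, "banner appears to be masked")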
Table 4.7 lists some sample fingerprinting tools \nand services, while Scambray 2001 provides more detailed information on the techniques \nused by intruders to fingerprint a target, and Klevinsky (2002) provides guidance on how to \nuse many of the fingerprinting tools used by penetration testers and intruders alike. \n \nTable 4.7: Sample List of Fingerprinting Tools and Services \nNAME \nASSOCIATED WEB SITE \nCerberus Internet Scanner \nwww.cerberus-infosec.co.uk \nCheops \nwww.marko.net \nHackerShield \nwww.bindview.com \nNetcat \nwww.atstake.com \nNmap \nwww.insecure.org \nSuper Scan/Fscan \nwww.foundstone.com \nWhat's That Site Running? \nwww.netcraft.com \nUnfortunately, it's not always possible to completely mask the identify of an operating \nsystem due to the Transmission Control Protocol/Internet Protocol (TCP/IP) being \nimplemented slightly differently by various operating system vendors. For example, TCP \nfeatures and packet sequence numbering may differ among vendors. However, many \nnovice attackers and some fingerprinting tools may still be fooled or at least delayed by a \nfalse banner. \nIf an organization has decided to use a nondescriptive legal warning or false banners on \nsome or all of their machines, then these machines should be checked to ensure that this \nrequirement has been implemented correctly. In the case of false banners designed to \ndeceive an intruder's fingerprinting effort, an assessment of the effectiveness of the \ndeception can be made by using several of the automated probing tools to see if they can \nsuccessfully see through this ploy. Table 4.8 summarizes these checks. \n \n" }, { "page_number": 82, "text": " \n75 \nTable 4.8: System Software Masking Checklist \nYES \nNO \nDESCRIPTION \n□ \n□ \nIs there a documented policy in place that describes under what \ncircumstances a default banner should be replaced with a blank legal \nwarning or false banner? \n□ \n□ \nIf a legal warning is to be used, has it been approved by the legal \ndepartment? \n□ \n□ \nHave all the banner modifications deemed necessary by the policy \nbeen implemented on every affected machine? \n□ \n□ \nIf false banners are to be used, are they deceptive enough to trick a \nsignificant number of automated probing tools? \n \nBANNER GRABBING \nIt is very easy to identify the version of system software being used by a Web site and \nthereby hone in on known bugs or features with this specific product version. For example, \nthe following error messages were generated when an attempt was made to start a Telnet \nsession to two different Web sites using a port number not normally used by the Telnet \napplication (if installed, the Telnet application is normally configured to communicate on \nport 23). \nFrom a command-line prompt, enter telnet www.wiley.com 80 (80 is the port number used \nby HTTP) or telnet www.wileyeurope.com 25 (25 is the port number used by SMTP). \nExample 1 \nC:\\telnet www.wiley.com 80 \n \nHTTP/1.1 400 Bad Request \nServer: Netscape Enterprize/3.6 SP3 \n \nYour browser sent a message this server could not understand. 
\n \nExample 2 \nC:\\telnet www.wileyeurope.com 25 \n \n220 xysw31.hosting.wileyeurope.com ESMTP server (Post.Office v3.5.3 \nrelease 223 ID# 0-83542U500L100S0V35) ready Tue, 16 Jan 2002 \n17:58:18 - \n0500 \nBoth Telnet connections should subsequently fail (it may be necessary to hit Enter a few \ntimes), because Telnet is not configured to work on either of the requested port numbers, \nbut not before the target machine sends an error message that identifies the brand and \nversion of system software that is currently running. \n \n" }, { "page_number": 83, "text": " \n76 \nServices \nThis book will use the term service to describe all the system software services, processes, \nor daemons (in UNIX terminology) installed on a machine that can communicate with the \nnetwork it is attached to. Before a service can communicate over the network, it must first \nbe bound to one or more network interface cards (NICs) and communication channels \n(ports). \nWhenever a service is started on a machine, the operating system will typically grant the \nservice the same security privileges the user account that initiated the service had. \nUnfortunately, if a service were to be tricked into executing malicious commands, these \nundesirable instructions would be executed using the same privileges that the service \ninherited from the account that owns this service. For example, if the Web server service \nwas running with administrative (or in UNIX lingo, root) privileges, an intruder could be able \nto trick the Web server into emailing the intruder the operating system's password file (a file \nthat normally only the administrator can access). Had the Web server service been running \nwith a lower privileged account, then chances are that the Web server service itself would \nhave been refused access to this system file by the operating system. It is therefore \nimportant to check that any service running on a machine is only granted the minimum \nprivileges needed to perform its legitimate functions and any unneeded services are \ndisabled (or ideally uninstalled). \nGenerally speaking, common network services such as the Hypertext Transfer Protocol \n(HTTP), Finger, or the Simple Mail Transfer Protocol (SMTP) use predefined (or well-\nknown) port numbers. These numbers are assigned by the Internet Assigned Numbers \nAuthority (IANA, www.iana.org), an independent organization with the aim of minimizing \nnetwork conflicts among different products and vendors. Table 4.9 lists some sample \nservices and the port numbers that the IANA has reserved for them. 
\n \nTable 4.9: Sample IP Services and Their Assigned Port Numbers \nPORT NUMBER \nSERVICE \n7 \nEcho \n13 \nDayTime \n17 \nQuote of the Day (QOTD) \n20 and 21 \nFile Transfer Protocol (FTP) \n22 \nSecure Socket Shell (SSH) \n23 \nTelnet \n25 \nSMTP \n53 \nDomain Name System (DNS) \n63 \nWhois \n66 \nSQL*net (Oracle) \n70 \nGopher \n79 \nFinger \n80 \nHTTP \n" }, { "page_number": 84, "text": " \n77 \nTable 4.9: Sample IP Services and Their Assigned Port Numbers \nPORT NUMBER \nSERVICE \n88 \nKerberos \n101 \nHost Name Server \n109 \nPost Office Protocol 2 (POP2) \n110 \nPost Office Protocol 3 (POP3) \n113 \nIDENT \n115 \nSimple File Transfer Protocol (SFTP) \n137, \n138, \nand \n139 \nNetBIOS \n143 \nInternet Message Access Protocol (IMAP) \n161 and 162 \nSimple Network Management Protocol (SNMP) \n194 \nInternet Relay Chat (IRC) \n443 \nHypertext Transfer Protocol over Secure Socket Layer \n(HTTPS) \nFor an intruder to communicate with and/or try to compromise a machine via a network \nconnection, the intruder must utilize at least one port. Obviously, the fewer ports that are \nmade available to an intruder, the more likely it is that the intruder is going to be detected. \nIn just the same way, the fewer the number of doors and windows a bank has, the easier it \nis for the bank to monitor all of its entrances and the less likely it is that an intruder would be \nable to enter the bank unnoticed. Unfortunately, a single NIC could have up to 131,072 \ndifferent ports for a single IP address, 65,536 for TCP/IP, and another 65,536 for User \nDatagram Protocol (UDP)/IP. (Appendix A describes IP, TCP, and UDP in more detail.) \n \n \nPORT NUMBER ASSIGNMENTS \nPort numbers 0 through 1023 are typically only available to network services started by a \nuser with administrator-level privileges, and are therefore sometimes referred to as \nprivileged ports. An intruder who has only been able to acquire a nonadministrator account \non a machine may therefore be forced to utilize a nonprivileged port (1024 or higher) when \ntrying to communicate to the compromised machine. \nThe set of nonprivileged port numbers (1024 to 65535) has been divided into two groups: \nthe registered group (1024 to 49151) and the private (dynamic) group (49152 to 65535). \nThe registered ports differ from the well-known ports in that they are typically used by \nnetwork services that are being executed using nonadministrator level accounts. The \nprivate group of ports is unassigned and is often used by a network service that does not \nhave a registered (or well-known) port assigned to it, or by a registered network service that \ntemporarily needs additional ports to improve communication. In such circumstances the \nnetwork service must first listen on the candidate private port and determine if it is already \nin use by another network service. If the port is free, then the network service will \ntemporally acquire (dynamically assign) this port. \n \n" }, { "page_number": 85, "text": " \n78 \nOnce a port is closed, any request made to a machine via the closed port will result in a \n\"this port is closed\" acknowledgment from the machine. A better defensive strategy is to \nmake the closed port a stealth port; a request to a stealth port will not generate any kind of \nacknowledgement from the target machine. This lack of acknowledgement will typically \ncause the requesting (attacker's) machine to have to wait until its own internal time-out \nmechanism gives up waiting for a reply. 
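The following is a minimal sketch (a simple Python TCP connect, not the more sophisticated probing techniques used by dedicated port scanners) of how the difference between an open, a closed, and a stealth port appears to the machine doing the probing. The target address is a hypothetical placeholder:

    import socket

    TARGET = "192.0.2.10"   # placeholder address of the machine being tested

    def probe(host, port, timeout=5.0):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            return "open"
        except ConnectionRefusedError:
            return "closed (the target acknowledged that nothing is listening)"
        except socket.timeout:
            return "stealth/filtered (no answer before the time-out expired)"
        except OSError as err:
            return "unreachable (" + str(err) + ")"
        finally:
            s.close()

    for port in (22, 23, 80, 8080):
        print(TARGET, port, "->", probe(TARGET, port))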
The advantage of a stealth port over a closed port \nis that the intruder's probing efforts are going to be slowed, possibly generating frustration, \nand potentially causing the intruder to go and look elsewhere for more accommodating \ntargets. \nWhile checking to see whether ports are open can be performed manually by logging on to \neach machine and reviewing the services running (as depicted in Figure 4.1), it's not always \nclear what some of the services are or which ports (if any) they are using, especially if more \nthan one NIC is installed. \n \nFigure 4.1: List of active processes running. \nFortunately, a number of easy-to-use tools can automate this test; these tools are referred \nto as port scanners and often come as part of a suite of security-testing tools. Chirillo \n(2001) goes into considerable depth on securing some of the most commonly used ports \nand services, while Klevinsky (2002) provides an overview of many of the tools that can be \nused to automated a port scan. \n \n \nNote \nFoundstone (www.foundstone.com) provides a utility Fport, which can be \nused to map running services to the ports that they are actually using. \n \nREMAPPING SERVICES \n" }, { "page_number": 86, "text": " \n79 \nIf a potentially dangerous service such as Telnet (port 23) absolutely needs to be made \navailable to external sources, the local area network (LAN) administrator may decide to try \nand hide the service by remapping it to a nonprivileged port (that is a port number above \n1023), where it is less likely (but still possible) for an intruder to discover this useful service. \nIf the LAN administrator is using such a technique, make sure that any port-scanning tool \nused for testing is able to detect these remapped services. \n \nDUAL NICS \nA machine that has two NICs can potentially have different services running on each card. \nFor instance, a LAN administrator may have enabled the NetBIOS service on the inward-\nfacing side of a Web server to make file uploads easier for the Webmaster, but disabled it \non the outward-facing side to stop intruders from uploading their graffiti or hacking tool of \nchoice. Although increasing the complexity of a network (which increases the risk of human \nerror), using multiple NICs for some of the servers can potentially improve security and \nperformance, and add additional configuration flexibility. \n \nEXAMPLE PORT SCAN \nIt appears from the following sample nmapNT port scan report that this machine is running \nquite a few services, several of which could potentially be used to compromise this \nmachine. nmap is a UNIX-based tool originally written by Fyodor. nmapNT is a version of \nnmap that was ported to the Windows platform by eEye Digital Security (www.eeye.com). \nInteresting ports on www.wiley.com (123.456.789.123): \n(The 1508 ports scanned but not shown below are in state: closed) \nPort State Service \n21/tcp open ftp \n25/tcp open smtp \n79/tcp open finger \n80/tcp open http \n81/tcp open hosts2-ns \n106/tcp open pop3pw \n110/tcp open pop-3 \n135/tcp open loc-srv \n280/tcp open http-mgmt \n443/tcp open https \n1058/tcp open nim \n1083/tcp open ansoft-lm-1 \n1433/tcp open ms-sql-s \n4444/tcp open krb524 \n5631/tcp open pcanywheredata \nTables 4.10 and 4.11 list some sample port-scanning tools and services. 
\n \nTable 4.10: Sample List of Port-Scanning Tools \n" }, { "page_number": 87, "text": " \n80 \nNAME \nASSOCIATED WEB SITE \nCerberus Internet Scanner (formerly NTInfoScan or \nNTIS) \nwww.cerberus-\ninfosec.co.uk \nCyperCop Scanner \nwww.nai.com \nFirewalk \nwww.packetfactory.net \nHackerShield \nwww.bindview.com \nHostscan \nwww.savant-software.com \nInternet Scanner \nwww.iss.net \nIpEye/WUPS \nwww.ntsecurity.nu \nNessus \nwww.nessus.org \nNetcat \nwww.atstake.com \nNetcop \nwww.cotse.com \nNetScan Tools \nwww.nwpsw.com \nNmap \nwww.insecure.org \nNmapNT \nwww.eeye.com \nSAINT/SATAN \nwww.wwdsi.com \nSARA \nwww.www-arc.com \nScanport \nwww.dataset.fr \nStrobe \nwww.freebsd.org \nSuper Scan/Fscan \nwww.foundstone.com \nTwwwscan \nwww.search.iland.co.kr \nWhisker \nwww.wiretrip.net \nWinscan \nwww.prosolve.com \n \nTable 4.11: Sample List of Port-Scanning Services \nNAME \nASSOCIATED WEB SITE \nShields Up \nwww.grc.com \nMIS CDS \nwww.mis-cds.com \nSecurityMetrics \nwww.securitymetrics.com \nSymantec \nwww.norton.com \nA port scan should be performed against each machine that forms part of the Web site. \nIdeally, this scan should be initiated from a machine on the inside of any installed firewall, \nsince an external port scan won't be able to tell if a service is disabled or if an intermediate \nfirewall is blocking the request. The results of the port scan should be compared to the \nservices that are absolutely required to be running on this machine and any \nadditional/unexpected services should be either disabled (and, if possible, uninstalled) or \njustified. \n" }, { "page_number": 88, "text": " \n81 \nA more stringent check would be to ensure that any unused ports were not only closed, but \nalso configured to be stealthy and not provide any information on their status. \nUnfortunately, stealthy ports, while potentially slowing down an attacker's scan, may also \nslow down the testing team as they attempt to scan the machine for unnecessary services. \nTable 4.12 summarizes these checks. \n \nTable 4.12: System Software Services Checklist \nYES \nNO \nDESCRIPTION \n□ \n□ \nHas each machine been reviewed and have any unnecessary \nservices been stopped and, if possible, uninstalled? \n□ \n□ \nAre the services that are active run under the lowest privileged \naccount possible? \n□ \n□ \nHas each machine been configured to respond in a stealthy manner \nto requests directed at a closed port? \n \nREMOTE-ACCESS SERVICES \nPerhaps one of the most popular side-entry doors available to attackers, remote-access \nutilities (such as Symantec's PCAnywhere, Windows's RAS, or UNIX's SSH/RSH) provide \nlegitimate users with easy access to corporate resources from geographically remote \nlocations. (The system administrator no longer has to make 3 A.M. trips into the office to \nreboot a server.) \nUnfortunately, these convenient applications have two fundamental vulnerabilities. The first \nrisk is that the client machine (for instance, a traveling salesperson's laptop) could be \nstolen; depending upon how the client is authenticated, the thief could potentially use this \nstolen machine to gain direct access to the corporate network by simply turning the \nmachine on and double-clicking an icon on the desktop. The second risk is that the server-\nside component of the service running on the corporate network does a poor job of \nauthenticating the client. 
Here the requester of the service is wrongly identified as a \nlegitimate user and not an intruder trying to gain unauthorized access, especially when \naccess is via an unsanctioned modem installed on a machine behind the corporate firewall. \n \nDirectories and Files \nEach individual directory (or folder in Windows terminology) and file on a machine can have \ndifferent access privileges assigned to different user accounts. In theory, this means each \nuser can be assigned the minimum access privileges they need in order to perform their \nlegitimate tasks. Unfortunately, because maintaining Draconian directory and file privileges \nis often labor intensive, many machines typically enable users (and the services that they \nhave spawned) more generous access than they actually need. \nTo reduce the likelihood of human error when assigning directory and file access privileges, \nsome LAN administrators will group files together based on their access privilege needs. \nFor example, programs or scripts that only need to be granted execute access could be \nplaced in a directory that restricts write access, thereby inhibiting an intruder's ability to \nupload a file into a directory with execute privileges. \nMany products will by default install files that are not absolutely necessary; vendor demos \nand training examples are typical. Since some of these unneeded files may contain \n" }, { "page_number": 89, "text": " \n82 \ncapabilities or security holes that could be exploited by an intruder, the safest approach is \nto either not install them or promptly remove them after the installation is complete. \nIntruders are particularly interested in directories that can be written to. Gaining write and/or \nexecute access to a directory on a target machine, even a temp directory that does not \ncontain any sensitive data, can be extremely useful. An intruder looking to escalate his or \nher limited privileges will often need such a resource to upload hacking tools (rootkits, \nTrojan horses, or backdoors) on to the target machine and then execute them. \nFor an intruder to gain access to a directory, two things must happen. First, the intruder \nmust determine the name and directory path of a legitimate directory, and second, the \nintruder must determine the password used to protect the directory (the topic of the next \nsection of this chapter). In order to reference the target directory, the intruder must figure \nout or guess the name and directory path of a candidate directory. This is an easy step if \ndefault directory names and structures are used, or the intruder is able to run a find utility \nagainst the target machine. \nIn an effort to supplement the built-in file security offered by an operating system, several \nproducts now provide an additional level of authorization security. Typically, these products \nprovide directory and file access using their own proprietary access mechanisms, thereby \npotentially mitigating any security hole or omission that could be exploited in the underlying \noperating system. Table 4.13 lists some of these products. 
\n \nTable 4.13: Sample List of Directory and File Access Control Tools \nNAME \nASSOCIATED WEB SITE \nArmoredServer \nwww.armoredserver.com \nAppLock \nwww.watchguard.com \nAppShield \nwww.sanctuminc.com \nAuthoriszor 2000 \nwww.authoriszor.com \nEntercept \nwww.entercept.com \nInterDo \nwww.kavado.com \nPitBull LX \nwww.argus-systems.com \nSecureEXE/SecureStack \nwww.securewave.com \nStormWatch \nwww.okena.com \nVirtualvault \nwww.hp.com \n \nFILE-SEARCHING TECHNIQUE EXAMPLE \nA complete listing of all the files present in a Web server's directory can be easily obtained \nif the webmaster has not disabled a Web server's automatic directory service or redirected \nsuch requests to a default resource (such as a Web page named index.html). Simply \nadding a trailing forward slash (/) to the end of a URL entered via a browser's URL entry \nline (such as http://www.wiley.com/cgi-bin/) would display the entire contents of the \ndirectory. \n \nDIRECTORY-MAPPING TECHNIQUE EXAMPLE \n" }, { "page_number": 90, "text": " \n83 \nThe following is a simple exploit intended to show how easy it could be for an Intruder to \nmap out a directory structure. A feature with early versions of Microsoft IIS can display the \ndirectory \nstructure \nof \na \nWeb \nsite \nas \npart \nof \nan \nerror \nmessage. \nEntering \nwww.wiley.com/index.idc from a browser's URL entry line would result in a response such \nas this one: \nError Performing Query \nThe query file F:\\wwwroot\\primary\\wiley\\index.idc could not be \nopened. \nThe file may not exist or you may have insufficient permission to \nopen \nthe file. \nRegardless of any third-party access control products that an organization may have \ndeployed, the directory and file permissions for critical machines should still be checked to \nensure that the permissions are no more liberal than they have to be. Fortunately, this \npotentially tedious manual task can to a large degree be automated using either \ncommercial tools designed to find permission omissions or tools originally designed for \nattackers with other intentions. Table 4.14 lists some sample file-share-checking tools and \nTable 4.15 offers a checklist for checking the security of directories and files. \n \nTable 4.14: Sample List of File-Share-Scanning Tools \nNAME \nASSOCIATED WEB SITE \nLegion \nwww.hackersclub.com \nNbtdump \nwww.atstake.com \nNetBIOS Auditing Tool (NAT) \nwww.nmrc.org \nIP Network Browser \nwww.solarwinds.net \nWinfo \nwww.ntsecurity.nu \n \nTable 4.15: System Software Directories and Files Checklist \nYES \nNO \nDESCRIPTION \n□ \n□ \nIs there a documented policy in place that describes how directory \nand file privileges are assigned? \n□ \n□ \nIs the directory structure on each machine appropriate? For example, \nare so many directories being used that administrative human errors \nare likely? Have read-only files been separated from execute-only \nfiles? \n□ \n□ \nHave only the minimum access privileges been assigned to each \nuser account? \n□ \n□ \nHave services running on each machine been reviewed to make sure \nthat any features that might give away information on the machine's \ndirectory structure have been disabled? An example would be a Web \nserver's automatic directory service. \n□ \n□ \nUsing a file share scanner located inside the firewall, can any \ninappropriate file shares be detected? 
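The trailing-slash check described in the file-searching example above is easy to automate across a list of candidate directories. The following is a minimal sketch (written in Python purely for illustration; the URLs and the listing markers are placeholder assumptions that would need to be tailored to the site and Web server actually being tested) that requests each directory and flags any response that looks like an automatic directory listing rather than a default page.

import urllib.request

# Hypothetical directory URLs to probe; substitute paths that exist on the site under test.
candidate_urls = [
    "http://www.example.com/cgi-bin/",
    "http://www.example.com/images/",
    "http://www.example.com/scripts/",
]

# Markers often found in automatically generated listings; adjust for the Web server in use.
listing_markers = ("Index of", "Parent Directory", "[To Parent Directory]")

for url in candidate_urls:
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            body = response.read().decode("utf-8", errors="replace")
    except Exception as error:
        print(f"{url}  request failed: {error}")
        continue
    if any(marker in body for marker in listing_markers):
        print(f"{url}  automatic directory listing appears to be ENABLED")
    else:
        print(f"{url}  no obvious directory listing returned")

Any directory flagged by a check of this kind should either have its automatic listing disabled or be redirected to a default resource, as described above.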
\n" }, { "page_number": 91, "text": " \n84 \n \nUserIDs and Passwords \nIn most organizations, the assets on a network that could be of any use to an intruder are \ntypically protected using combinations of userIDs and passwords. An intruder is therefore \ngoing to be forced to try and guess or deduce a useful combination. However, not all \nuserID/password combinations are created equal; some are more useful than others. An \nintruder would ideally like to capture the account of an administrator, allowing him or her to \ndo pretty much anything he or she wants on any machine that the captured account has \nadministrative rights on. If the administrator's account proves to be unobtainable but a lower \nprivileged account is susceptible, an experienced attacker may still be able to manipulate \nan improperly configured operating system into granting administrative privileges, a \ntechnique often referred to as account escalation. Care should therefore be taken to ensure \nthat not only are the administrators' userIDs and passwords sufficiently protected, but also \nany lower-level accounts. \nWEAK PASSWORD PROTECTION EXAMPLE \nSome Web servers provide the Webmaster with the ability to require client authentication \n(such as the .htaccess and .htgroup files that can be placed inside an Apache Web server \ndirectory) before displaying the contents of a directory. Upon requesting a protected \ndirectory, the Web server sends a request to the browser, which results in the browser \ndisplaying a simple userID/password pop-up window. Unfortunately, the data sent back to \nthe server using this method isn't encrypted (it uses a base 64 conversion algorithm to \nencode the data) and is therefore extremely easy for an eavesdropper to decode. Web \nserver client authentication should therefore not be relied upon to protect the contents of a \ndirectory. \nSome system software products use weak or no encryption to store and/or transmit their \nuserIDs and passwords from the client to the server component of the product, affording an \neavesdropper with the chance to capture unencrypted or easily decipherable passwords. If \nthe same password used for this service is the same as the password used for an \nadministrative-level account, learning these weak passwords may not only allow an intruder \nto manipulate the service, but also compromise additional resources. \nAlthough an attacker would like to compromise a single machine, compromising several \nmachines is definitely more desirable. This can happen relatively quickly if other machines \n(or even entire networks) have previously been configured to trust the compromised \nadministrator account. Alternatively, the LAN administrator may have used the same \npassword for several administrative accounts, thereby making network administration \neasier, but also increasing the probability that if the password on one machine is deduced, \nthe entire network may be compromised. \nEven if different userIDs and passwords are used for each local administrative account and \nthese accounts are granted limited trusts, an entire network may still be compromised if an \nintruder can get past the security of the machine used by the network for network security \nauthentication, that is, the network controller. Capturing the network controller (or backup \ncontroller) allows an intruder complete access to the entire network and possibly any other \nnetwork that trusts any of the accounts on the compromised network. 
Given the risk \nattached with compromising an administrator account on a network controller, many LAN \nadministrators choose to use exceptionally long userIDs and passwords for these critical \naccounts. \nOne of the leading causes of network compromises is the use of easily guessable or \ndecipherable passwords. It is therefore extremely important that an organization defines \n" }, { "page_number": 92, "text": " \n85 \nand (where possible) enforces a password policy. When defining such a policy, an \norganization should consider the trade-off between the relative increase in security of using \na hard-to-crack password with the probable increase in inconvenience. Policies that are \ndifficult to follow can actually end up reducing security. For example, requiring users to use \na long, cryptic password may result in users writing down their passwords (sometimes even \nputting it on a post-it note on their monitor), making it readily available to a potential \nattacker walking by. Requiring users to frequently change their password may result in \nsome users using unimaginative (and therefore easily predictable) password sequences, \nsuch as password1, password2, and password3. Even if the access control system is smart \nenough to deduce blatant sequences, users may still be able to craft a sequence that is \neasy for them to remember but still acceptable to the access control system, such as \npassjanword, passfebword, and passmarword. As the following sections demonstrate, an \nintruder could acquire a userID/password combination in several distinct ways. \n \nSINGLE SIGN-ON \nSingle sign-on (SSO) is a user authentication technique that permits a user to access \nmultiple resources using only one name and password. Although SSO is certainly more \nuser-friendly than requiring users to remember multiple userID's and passwords, from a \nsecurity perspective it is a double-edged sword. Requiring users to only remember one \nuserID and password should mean that they are more willing to use a stronger and hence \nmore secure (albeit harder to remember) password, but on the other hand, should this \nsingle password be cracked, an intruder would have complete access to all the resources \navailable to the compromised user account, a potentially disastrous scenario for a highly \nprivileged account. \nManual Guessing of UserIDs and Passwords \nTypically easy to attempt, attackers simply guess at userID/password combinations until \nthey either get lucky or give up. This approach can be made much more successful if the \nintruder is able to first deduce a legitimate userID. \nObtaining a legitimate userID may not be as hard as you might think. When constructing a \nuserID, many organizations use a variation of an employee's first name and last name. A \nsample userID format can be obtained, for example, by viewing an email address posted on \nthe organization's Web site. Discovering the real name of a LAN administrator may be all \nthat is needed to construct a valid userID, and that information is easily obtained by \nacquiring a copy of the organization's internal telephone directory. Or perhaps an intruder \ncould look up the technical support contact posted by a domain name registrar to find a \ndomain name owned by the organization. \nMany system software products are initially configured with default userIDs and passwords; \nit goes without saying that these commonly known combinations should be changed \nimmediately (www.securityparadigm.com even maintains a database of such accounts). 
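Verifying that well-known default accounts have actually been changed is another check that can be scripted. The sketch below (Python, for illustration only; the target URL and the short list of default combinations are invented, and it assumes the administrative page is protected by HTTP basic authentication, which will not be true of every product) simply reports any default combination that the service still accepts.

import urllib.request
import urllib.error

# Hypothetical administrative page protected by HTTP basic authentication.
target_url = "http://192.168.1.1/admin"

# A few commonly documented vendor defaults; a real test would use a much fuller list.
default_accounts = [("admin", "admin"), ("admin", "password"), ("root", ""), ("guest", "guest")]

for userid, password in default_accounts:
    password_manager = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    password_manager.add_password(None, target_url, userid, password)
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(password_manager))
    try:
        opener.open(target_url, timeout=10)
        # Note: if the page is not actually protected, every attempt will appear to succeed.
        print(f"Default account accepted: {userid}/{password}")
    except urllib.error.HTTPError as error:
        if error.code == 401:
            print(f"Rejected: {userid}/{password}")
        else:
            print(f"{userid}/{password}: unexpected response {error.code}")
    except urllib.error.URLError as error:
        print(f"Request failed: {error}")
        break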
\nWhat is less well known is that some vendors include userIDs and passwords designed to \nenable the vendor to log in remotely in order to perform routine maintenance, or in some \ncases the organization's own testing team may have created test accounts intended to help \ndiagnose problems remotely. If any of the products installed at an organization, such as \nfirewalls, payroll packages, customer relationship management systems, and so on, use \nthis feature, the organization should consider whether or not this remote access feature \nshould be disabled, or at the very least the remote access password changed. \n \nEXAMPLE PASSWORD POLICY GUIDELINES \n" }, { "page_number": 93, "text": " \n86 \nEnforceable Guidelines \nƒ Minimum length of x characters \nƒ Must not contain any part of userID or user's name \nƒ Must contain at least one character from at least four of the following categories: \no Uppercase \no Lowercase \no Numeral \no Nonalphanumeric, such as !@#$%^&*() \no Nonvisual, such as control characters like carriage return \nƒ Must be changed every X number of weeks \nƒ Must not be the same as a password used in the last X generations \nƒ Account is locked out for X minutes after Y failed password attempts within Z period \nof time \nHard-to-Enforce Guidelines \nƒ Do not use words found in an English (or another language) dictionary \nƒ Do not use names of family, friends, or pets (information often known by coworkers) \nƒ Do not use easy-to-obtain personal information such as parts of a \no Mailing address \no Social security number \no Telephone number \no Driving license number \no Car license plate number \no Cubical number \n \nUSERID ADVERTISEMENTS \nIn an effort to improve customer relations, many organizations have started to advertise the \nemail address of a senior employee, such as \"Please email the branch manager, Jon \nMedlin, at with any complaints or suggestions for improvement.\" \nAlthough providing direct access to senior management may help improve customer \ncommunications, if the format used for the email address is the same as the format used for \nthe user's ID, the organization may also be inadvertently providing a potential attacker with \nuserIDs for accounts with significant privileges. \n \nIf an intruder is truly just guessing at passwords, then perhaps the easiest way to thwart this \napproach is to configure the system to lock out the account under attack after a small \nnumber of failed login attempts. Typically, lockout periods range from 30 minutes to several \nhours, or in some cases even require the password to be reset. \n \nTHE NULL PASSWORD \nSome organizations have adopted an easy-to-administer policy of not assigning a password \nto a user account until the first time it is used, the password being assigned by the first \nperson to log in to the account. \nObviously, accounts that have no (or null) password are going to be extremely easy for an \nattacker to guess and should therefore be discouraged. \n" }, { "page_number": 94, "text": " \n87 \nAutomated Guessing of UserIDs and Passwords \nSeveral tools now exist that can be used to systematically guess passwords; these tools \ntypically employ one (or both) of two basic guessing strategies. The quickest strategy is to \nsimply try a list of commonly used passwords. Most of the tools come with lists that can be \nadded to or replaced (particularly useful if the passwords are expected to be non-English \nwords). 
Hackersclub (www.hackersclub.com) maintains a directory of alternative wordlists. \nThe second approach is to use a brute-force strategy. As the name implies, a brute-force \napproach does not try to get lucky by only trying a comparative handful of passwords; \ninstead, it attempts every single possible combination of permissible characters until it \ncracks the password. The biggest drawback with a brute-force approach is time. The better \nthe password, the longer it will take a brute-force algorithm to crack the password. Table \n4.16 lists some sample password-cracking tools. \n \nTable 4.16: Sample List of Password-Deciphering/Guessing/Assessment Tools \nNAME \nASSOCIATED WEB SITE \nBrutus \nwww.antifork.org/hoobie.net \nCerberus Internet Scanner \nwww.cerberus-infosec.co.uk \nCrack \nwww.users.dircon.co.uk/~crypto \nCyberCop Scanner[a] \nwww.nai.com \n(dot)Security \nwww.bindview.com \nInactive Account Scanner \nwww.waveset.com \nLegion and NetBIOS Auditing Tool (NAT) \nwww.hackersclub.com \nLOphtcrack \nwww.securitysoftwaretech.com \nJohn the Ripper, SAMDump, PWDump, \nPWDump2, PWDump3 \nwww.nmrc.org \nSecurityAnalyst \nwww.intrusion.com \nTeeNet \nwww.phenoelit.de \nWebCrack \nwww.packetstorm.decepticons.org \n[a]Effective July 1, 2002, Network Associates has transitioned the CyberCop product line \ninto maintenance mode. \nSuppose the intruder intends to use an automated password tool remotely—and this \napproach is thwarted by locking out the account after a small number of failed attempts. To \nget in, the intruder would need to obtain a copy of the password file. But once that was in \nhand, the intruder could then run a brute-force attack against the file. Using only modest \nhardware resources, some tools can crack weak passwords within a few hours, while \nstronger passwords will take much longer. Skoudis (2001) provides additional details on \nhow to attack password files for the purpose of determining just how secure sets of \npasswords are. \nIn theory, no matter how strong a password is, if the file that contains the password can be \nacquired, it can eventually be cracked by a brute-force attack. However, in practice, using \nlong passwords that utilize a wide variety of characters can require an intruder to spend \n" }, { "page_number": 95, "text": " \n88 \nseveral weeks (or even months) trying to crack a file, a fruitless effort, if the passwords are \nroutinely changed every week. \nOne approach to evaluating the effectiveness of the passwords being selected is to ask the \nLAN administrator to provide you with legitimate copies of all the password files being used \non the production system. From a standalone PC (that is not left unattended), an attempt is \nmade to crack each of these passwords using a password-guessing tool that uses as input \na list of commonly used words (this file is obviously language specific). Any accounts that \nare guessed should be immediately changed. \nA second time-consuming test would be to run a brute-force attack against each of these \nfiles, since in theory any password could be deciphered given enough time; this test should \nbe time-boxed. For example, any password that can be cracked within 24 hours using \nmodest hardware resources should be deemed unacceptable. \n \nPASSWORD FILE NAMES \nPassword files on UNIX systems are generally named after some variation of the word \npassword, such as passwd. The Windows NT/2000 family of systems name their password \nfiles SAM (short for Service Account Manager). 
Particular care should be taken to destroy all traces of the copied password files, temporary files, and generated reports once the testing is complete, lest these files fall into the wrong hands.

Even if only strong passwords are used, it still makes sense to try and ensure that these password files are not readily available to an intruder. An organization should therefore consider designing a test to see if an unauthorized person can acquire a copy of these files. Although security may be quite tight on the production version of these files, it's quite possible that backup files, either located on the machine itself or offsite, are quite easily accessible. For example, the file used by Windows NT/2000 to store its passwords is protected by the operating system while it is running. However, the operating system also automatically creates a backup copy of this file, which may be accessible. A simple search of a machine's hard drive using *sam*.* should locate the production and backup version(s) of this file.

A less obvious place to find clues to valid passwords is in the log files that some system software products use to store failed login attempts. For example, knowing that user Tim Walker failed to log in with a password of margarey may be all an intruder needs to know in order to deduce that the valid password is margaret. Such log files, if used, should therefore be checked to ensure that the failed password is also encrypted to stop an intruder from viewing these useful clues.

Gaining Information via Social Engineering

Covered in more detail in Chapter 7, social engineering refers to techniques used by intruders to trick unsuspecting individuals into divulging useful information. A classic example is that of an intruder calling an organization's help desk and asking them to reset the password of the employee the intruder is pretending to be.

Disgruntled Employees Committing Illicit Acts

Although many organizations seriously consider the risk of a trusted employee taking advantage of his or her privileged position to commit (or attempt to commit) an illicit act (a topic covered in more detail in Chapter 7), others choose to ignore this possibility. Obvious precautions include ensuring that employees are only granted access to resources they absolutely need, and that accounts used by former employees are deactivated as soon as (or before) they leave.

Table 4.17 summarizes some of the checks that should be considered when evaluating the protection afforded to a system's userIDs and passwords.

Table 4.17: System Software UserIDs and Passwords Checklist

YES  NO  DESCRIPTION
□  □  Is there a documented policy in place that describes how userIDs and passwords are assigned, maintained, and removed?
□  □  When an employee leaves (voluntarily or involuntarily), are his or her personal user accounts deactivated, and are the passwords changed in a timely manner for any shared accounts that he or she had knowledge of?
□  □  Are the procedures for handling forgotten and compromised passwords always followed?
□  □  Are security access logs monitored for failed logins? For instance, how long or how many tries does it take before someone responds to a legitimate account using invalid passwords?
□  □  Does the system lock out an account for X minutes after Y failed password attempts within Z period of time?
□  □  Are different administrative userIDs and/or passwords used for each machine?
□  □  Have all default accounts been removed, disabled, or renamed, especially any guest accounts?
□  □  Have all remote access accounts been disabled? Or at least have their passwords been changed?
□  □  Are variations of people's names not used when assigning userIDs?
□  □  Do none of the critical accounts use common (and therefore easily guessable) words for passwords?
□  □  Are hard-to-guess or decipher passwords (as defined by the organization's password policy) used for all critical accounts?
□  □  Are the details of any failed login attempts sufficiently protected from unauthorized access?

User Groups

Most system software products support the concept of user groups. Instead of (or in addition to) assigning system privileges to individual user accounts, privileges are assigned to user groups; each user account is then made a member of one (or more) user groups and thereby inherits all the privileges that have been bestowed upon the user group. Using user groups can make security administration much easier, as a whole group of user accounts can be granted a new permission by simply adding the new privilege to a user group that they are already a member of.

SHADOWED PASSWORD FILES

Some operating systems store their passwords in files that are hidden from all but administrative-level accounts; such files are typically referred to as shadowed password files and obviously afford greater protection than leaving the password file(s) in plain sight of an attacker with nonadministrative privileges.

The danger with user groups is that sometimes, rather than creating a new user group, a system administrator will add a user account to an existing user group that has all the needed privileges, plus a few unneeded ones. Thus, the system administrator grants the user account (and any services running under this account) greater powers than it actually needs. Of course, creating user groups that are so well defined that each user group only has a single member defeats the whole purpose of defining security privileges by groups instead of individual accounts.

SEPARATION OF DUTIES

A common practice for system administrators who need to access a machine as an administrator, but would also like to access the machine (or initiate services) with nonadministrative privileges, is to create two accounts. The administrator would thus only log into the administrator-level account to perform administrative-level tasks and use the less privileged account for all other work, reducing the possibility that he or she inadvertently initiates a service with administrative privileges.

Table 4.18 summarizes some of the checks that should be considered when evaluating the appropriateness of user-group memberships.

Table 4.18: System Software User-Group Checklist

YES  NO  DESCRIPTION
□  □  Is there a documented policy in place that describes how user groups are created, maintained, and removed?
□  □  Do the user groups appear to have privileges that are too general, resulting in some user accounts being granted excessive privileges?
□  □  Do the user groups appear to have privileges that are too specific, resulting in so many user groups that the system administrator is more likely to make an error while assigning privileges?
\n□ \n□ \nHas each user account be assigned to the appropriate user group(s)? \n \nSummary \nIt is not enough to review and test a Web site's network topology and configuration (the \nsubject of Chapter 3), since a poorly configured or unplugged security hole in a system \nsoftware product installed upon the Web site could provide an attacker with an easy entry \npoint. \n" }, { "page_number": 98, "text": " \n91 \nAlthough few organizations have the resources to test another company's system software \nproducts for new vulnerabilities, it's not particularly desirable to discover that a known \nsecurity patch or workaround in a system software product has not been applied until after \nthe Web applications that will utilize this system software have been written, possibly \nnecessitating an unscheduled enhancement to the Web application. System software \nproducts should therefore always be evaluated from a security perspective before being \npressed into service. \n \n" }, { "page_number": 99, "text": " \n92 \nChapter 5: Client-Side Application Security \nThink how convenient it would be if, once the security of the underlying infrastructure of a \nWeb site—the network devices and system software—had been tested and found to be \nsecure, anyone could host any Web application on this site and be confident that it would \nalso be secure. Unfortunately, this is an unrealistic scenario, as each Web application \nbrings with it the potential to introduce a whole new set of security vulnerabilities. For \nexample, the seemingly most secure of Web servers, with a perfectly configured firewall, \nwould provide no protection from an attacker who had been able to capture the userID and \npassword of a legitimate user of a Web application, possibly by simply sneaking a look at a \ncookie stored on the user's hard drive (see the Cookies section that follows for a more \ndetailed explanation of this potential vulnerability). Therefore, in addition to any testing done \nto ensure that a Web site's infrastructure is secure (the subject of Chapters 3 and 4), each \nand every Web application that is to be deployed on this infrastructure also needs to be \nchecked to make sure that the Web application does not contain any security \nvulnerabilities. \nApplication Attack Points \nMost Web applications are built on a variation of the multitier client-server model depicted in \nFigure 5.1. Unlike a standalone PC application, which runs entirely on a single machine, a \nWeb application is typically composed of numerous chunks of code (HTML, JavaScript, \nactive server page [ASP], stored procedures, and so on) that are distributed to many \nmachines for execution. \n \n \nFigure 5.1: Multitier client/server design. \nEssentially, an attacker can try to compromise a Web application from three distinct entry \npoints. This chapter will focus on the first two potential attack points: the machine used to \nrun the client component of the application, and the communication that takes place \nbetween the client-side component of the application and its associated server-side \ncomponent(s). The next chapter will focus on the third potential entry point, the server-side \nportion of a Web application (for example, the application and database tiers of the \napplication). Ghosh (1998) discusses numerous e-commerce vulnerabilities, which he \ngroups into client, transmission, and server attack categories. 
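Of these three categories, the transmission category is easy to underestimate. As noted in the weak password protection example in Chapter 4, the credentials collected by a browser's basic authentication pop-up are merely base64-encoded rather than encrypted, so any eavesdropper who captures the HTTP Authorization header can recover them instantly. The fragment below (Python, shown purely to illustrate the point; the header value is an invented example) demonstrates just how little effort is involved.

import base64

# An invented Authorization header of the kind a browser sends with HTTP basic authentication.
captured_header = "Authorization: Basic d2VibWFzdGVyOnNlY3JldDEyMw=="

encoded = captured_header.split("Basic ", 1)[1]
userid, password = base64.b64decode(encoded).decode("ascii").split(":", 1)
print(f"Recovered userID: {userid}  password: {password}")
# Prints: Recovered userID: webmaster  password: secret123

This is one reason the sections that follow treat the authentication mechanism itself, and not just the strength of the secret, as something that needs to be tested.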
\n \nClient Identification and Authentication \nFor many Web applications, users are first required to identify themselves before being \nallowed to use some (or any) of the features available via the application. For some \nsystems, this identification may be as simple as asking for an email address or a \npseudonym by which the user wishes to be known. In such circumstances, the Web \napplication may make no attempt to verify the identity of the user. For many Web \napplications, however, this laissez-faire level of identification is unacceptable, such \n" }, { "page_number": 100, "text": " \n93 \napplications may not only ask users to identify themselves, but also to authenticate that the \nindividuals are who they claim to be. \nPerhaps one of the most challenging aspects of a Web application's design is devising a \nuser authentication mechanism that yields a high degree of accuracy while at the same \ntime not significantly impacting the usability of the application or its performance. Given that \nin the case of an Internet-based Web application, users could be thousands of miles away \nin foreign jurisdictions and using all manner of client-side hardware and system software, \nthis is far from a trivial task. Consequently, this is an area that will warrant comprehensive \ntesting. \nMost Web applications rely on one of three different techniques for establishing the \nauthenticity of a user: relying upon something users know (such as passwords), something \nthey have (like physical keys), or some physical attribute of themselves (for instance, \nfingerprints). These three strategies can all be implemented in a number of different ways, \neach with a different trade-off between cost, ease of use, and accuracy. Whichever \napproach is selected by the application's designers, it should be tested to ensure that the \napproach provides the anticipated level of accuracy. \nThe accuracy of an authentication mechanism can be measured in two ways: (1) the \npercentage of legitimate users who attempt to authenticate themselves but are rejected by \nthe system (sometimes referred to as the false rejection rate) and (2) the percentage of \nunauthorized users who are able to dupe the system into wrongly thinking they are \nauthorized to use the system (sometimes referred to as the false acceptance rate). \nUnfortunately, obtaining a low false acceptance rate (bad guys have a hard time getting in) \ntypically results in a high false rejection rate (good guys are frequently turned away). For \nexample, increasing the number of characters that users must use for their passwords may \nmake it harder for an attacker to guess (or crack) any, but it may also increase the \nfrequency with which legitimate users forget this information. \nThe risks associated with a false acceptance, compared to a false rejection, are quite \ndifferent. A false acceptance may allow intruders to run amok, exploiting whatever \napplication privileges the system had mistakenly granted them. Although a single false \nrejection may not be very significant, over time a large volume of false rejections can have \na noticeable effect on an application. For instance, a bank that requires multiple, hard-to-\nremember passwords may make its Web application so user-unfriendly that its adoption \nrate among its clients is much lower than its competitor's. 
This would result in a larger \npercentage of its clients visiting physical branches of the bank or using its telephone \nbanking system (both of which are more costly services to provide) and consequently giving \nits competitor a cost-saving advantage. \nFrom a testing perspective, the testing team should attempt to determine if a Web \napplication's false acceptance and false rejection rates are within the limits originally \nenvisaged by the application's designers (and users). Additionally, because there is no \nguarantee that a particular authentication technique has been implemented correctly, the \nmethod by which the authentication takes place should be evaluated to ensure the \nauthentication process itself can't be compromised (such as an attacker eavesdropping on \nan unencrypted application password being sent over the network). To this end, the \nfollowing sections outline some of the techniques that might be implemented in order to \nauthenticate a user of a Web application. Krutz (2001) and Smith (2001) provide additional \ninformation on user authentication strategies. \n" }, { "page_number": 101, "text": " \n94 \nRelying upon What the User Knows: The Knows-Something Approach \nThe authenticity of the user is established by asking the user to provide some item of \ninformation that only the real user would be expected to know. The classic userID and \npassword combination is by far the most common implementation of this authentication \nstrategy. Variations of this method of authentication include asking the user to answer a \nsecret question (such as \"What's your mother's maiden name?\") or provide a valid credit \ncard number, together with corresponding billing address information. \nAlthough the issues associated with application-level userIDs and passwords are similar to \nthose affecting system software userIDs and passwords (see Chapter 4 for a discussion on \nthis topic), an organization that has developed its own applications may have more flexibility \nin its userID and password implementations. Specifically, the organization may choose to \nenforce more or less rigorous standards than those implemented in the products developed \nby outside vendors. For instance, the organization may check that any new password \nselected by a user is not to be found in a dictionary, or the organization may enforce a one-\nuser, one-machine policy (that is, no user can be logged on to more than one machine, and \nno machine can simultaneously support more than one user). \nTherefore, in addition to the userID checklist in Table 4.17, the testing team may also want \nto consider including some or all of the checks in Table 5.1, depending upon what was \nspecified in the applications security specifications. \n \nTable 5.1: Application UserID and Password Checklist \nYES \nNO \nDESCRIPTION \n□ \n□ \nDoes the application prohibit users from choosing common (and \ntherefore easily guessable) words for passwords? \n□ \n□ \nDoes the application prohibit users from choosing weak (and \ntherefore easily deciphered) passwords, such as ones that are only \nfour characters long? \n□ \n□ \nIf users are required to change their passwords every X number of \nweeks, does the application enforce this rule? \n□ \n□ \nIf users are required to use different passwords for Y generations of \npasswords, does the application enforce this rule? 
\n□ \n□ \nIs the application capable of allowing more than one user on the \nsame client machine to access the server-side component of the \napplication at the same time? \n□ \n□ \nIs the application capable of allowing one user to access the server-\nside component of the application from two or more client machines \nat the same time? \n□ \n□ \nIs the authentication method's false rejection rate acceptable? Is it \nmeasured by the number of calls made to the help desk for forgotten \npasswords? \n□ \n□ \nIs the authentication method's false acceptance rate acceptable? For \nexample, assuming no additional information, an attacker has a 1 in \n10,000 chance of correctly guessing a 4-digit numerical password. \n" }, { "page_number": 102, "text": " \n95 \nRelying upon What the User Has: The Has-Something Approach \nInstead of relying on a user's memory, the Web application could require that users actually \nhave in their possession some artifact or token that is not easily reproducible. The token \ncould be physical (such as a key or smart card), software based (a personal certificate), or \nassigned to a piece of hardware by software (for instance, a media access control [MAC] \naddress could be assigned to an Ethernet card, a telephone number to a telephone, or an \nIP address to a network device). \nIf this has-something approach is used to authenticate a user to a Web application, the \ntesting team will need to test the enforcement of the technique. For instance, if the \nauthorized user Lee Copeland has a home telephone number of (123) 456-7890, then the \norganization may decide to allow any device to access the organization's intranet \napplications if accessed from this number. The testing team could verify that this \nauthentication method has been implemented correctly by first attempting to gain access to \nthe applications from Lee's home and then attempting access from an unauthorized \ntelephone number such as Lee's next-door neighbor's. \nSole reliance on an authentication method such as the telephone number or network \naddress of the requesting machine may not make for a very secure defense. For instance, \nLee's kids could access the organization's application while watching TV, or a \nknowledgeable intruder may be able to trick a system into thinking he is using an authorized \nnetwork address when in fact he isn't (a technique commonly referred to as spoofing). \nScenarios such as these illustrate why many of these has-something authentication \nmethods are used in conjunction with a knows-something method. Two independent \nauthentication methods provide more security (but perhaps less usability) than one. So in \naddition to the telephone number requirement, Lee would still need to authenticate himself \nwith his userID and password combination. \nThe following section describes some of the most common has-something authentication \ntechniques. \nPersonal Certificates \nA personal certificate is a small data file, typically obtained from an independent certification \nauthority (CA) for a fee; however, organizations or individuals can manufacture their own \ncertificates using a third-party product. (For more information on the use of open-source \nproducts to generate certificates, go to www.openca.org.) This data file, once loaded into a \nmachine, uses an encrypted ID embedded in the data file to allow the owner of the \ncertificate to send encrypted and digitally signed information over a network. 
Therefore, the \nrecipient of the information is assured that the message has not been forged or tampered \nwith en route. \nPersonal certificates can potentially be a more secure form of user authentication than the \nusual userID and password combination. However, personal certificates to date have not \nproven to be popular with the general public (perhaps because of privacy issues and their \nassociated costs). So keep in mind that any Web application aimed at this group of users \nand requiring the use of a personal certificate may find few people willing to participate. In \nthe case of an extranet and intranet application, where an organization may have more \npolitical leverage with the application's intended user base, personal certificates may be an \nacceptable method of authentication. Table 5.2 lists some firms that offer personal \ncertificates and related services (and therefore provide more detailed information on how \npersonal certificates work). \n \n" }, { "page_number": 103, "text": " \n96 \nTable 5.2: Sample Providers of Personal Certificates and Related Services \nNAME \nASSOCIATED WEB SITE \nBT Ignite \nwww.btignite.com \nEntrust \nwww.entrust.com \nGlobalSign \nwww.globalsign.net \nSCM Microsystems \nwww.scmmicro.com \nThawte Consulting \nwww.thawte.com \nVeriSign \nwww.verisign.com \nBrands (2000), Feghhi (1998), and Tiwana (1999) provide additional information on digital \ncertificates. \nSmart Cards \nSmart cards are physical devices that contain a unique ID embedded within them. With this \ndevice and its personal identification number (PIN), the identity of the person using the \ndevice can be inferred (although not all cards require a PIN). \nA SecurID smart card is an advanced smart card that provides continuous authentication by \nusing encryption technology to randomly generate new passwords every minute. It provides \nan extremely robust authorization mechanism between the smart card and synchronized \nserver. Table 5.3 lists some firms that offer smart cards and related services. Hendry \n(2001), Rankl (2001), and Wilson (2001) provide additional information on smart cards. \n \nTable 5.3: Sample Providers of Smart Cards and Related Services \nNAME \nASSOCIATED WEB SITE \nActivcard \nwww.activcard.com \nDallas Semiconductor Corporation \nwww.ibutton.com \nDatakey \nwww.datakey.com \nLabcal Technologies \nwww.labcal.com \nMotus Technologies \nwww.motus.com \nRSA Security \nwww.rsasecurity.com \nSignify Solutions \nwww.signify.net \nVASCO \nwww.vasco.com \nMAC Addresses \nThe MAC address is intended to be a globally unique, 48-bit serial number (for instance, \n2A-53-04-5C-00-7C) and is embedded into every Ethernet network interface card (NIC) that \nhas ever been made. An Ethernet NIC used by a machine to connect it to a network can be \nprobed by another machine connected to the network directly or indirectly (for example, via \nthe Internet) to discover this unique number. It is therefore possible for a Web server to \nidentify the MAC address of any visitor using an Ethernet card to connect to the Web site. \n" }, { "page_number": 104, "text": " \n97 \nFor example, the Windows commands winipcfg and ipconfig, or the UNIX command ifconfig \ncan be used to locally view an Ethernet card's MAC address. \nPrivacy concerns aside, the MAC address can be used to help authenticate the physical \ncomputer being used to communicate to a Web server. 
But because some Ethernet cards \nenable their MAC addresses to be altered, this authentication technique cannot be \nguaranteed to uniquely identify a client machine. Therefore, it should not be relied upon as \nthe sole means of authentication. \nIP Addresses \nEvery data packet that travels the Internet carries with it the IP network address of the \nmachine that originated the request. By examining this network source address, a receiver \nshould be able (in theory) to authenticate the sender of the data. Unfortunately, this form of \nauthentication suffers from two major problems; proxy servers hide the network address of \nthe sender, replacing the original sender's network address with their own. This means that \nwhen proxy servers are involved, IP address verification is typically only going to be able to \nverify the organization or the Internet service provider (ISP) that owns the proxy server, not \nthe individual machine that originated the request. The second problem with IP address \nauthentication is that the source IP address is relatively easy to alter, or spoof, and it \ntherefore should not be relied upon as the sole means of identifying a client. \nTelephone Numbers \nUsing a telephone number for authentication is a technique used by many credit card \nissuers to confirm delivery of a credit card to the card's legitimate owner. The credit card is \nactivated once a confirmation call has been received from the owner's home telephone \nnumber. \nRequiring users to access an organization's remote access server (RAS) from a specific \ntelephone number (often authenticated via a callback mechanism) is a common way of \nrestricting access to applications on an intranet or extranet. Unfortunately, an attacker can \nsubvert a callback mechanism by using call forwarding to forward the callback to an \nunintended destination. This would make this form of authentication undependable, if used \nas the sole method of authentication. \nTable 5.4 lists some tests that should be considered if the application uses some form of \ntoken to determine who a user is. \n \nTable 5.4: Secure Token Checklist \nYES \nNO \nDESCRIPTION \n□ \n□ \nIs there a documented policy in place that describes how application \nsecurity tokens are assigned and removed from circulation? \n□ \n□ \nAre the procedures for handling lost tokens adequate and always \nfollowed? \n□ \n□ \nCan the token be counterfeited? If technically feasible, how likely is it \nthat this would actually take place? \n□ \n□ \nDoes the application enable the same token to be simultaneously \nused more than once? \n□ \n□ \nIs the authentication method's false rejection rate acceptable? \n" }, { "page_number": 105, "text": " \n98 \nTable 5.4: Secure Token Checklist \nYES \nNO \nDESCRIPTION \n \n□ \nIs the authentication method's false acceptance rate acceptable? \nRelying upon What the User Is: The Biometrics Approach \nUndoubtedly the most secure of the three authentication approaches, a biometric device \nmeasures some unique physical property of a user that cannot easily be forged or altered, \nthereby providing an extremely accurate method of identifying an individual user. Biometric \nauthentication methods include fingerprints, hand geometry, face geometry, eye patterns \n(iris and/or retina), signature dynamics (speed, pressure, and outline), keyboard typing \npatterns, and voice. Ideally, a combination of two or more of these methods should be \nused, as advances in technology have made some of these measurements easier to fake. 
\nThis approach also has the most resistance to adoption by the general population. \nHowever, in situations where the use of a biometric device can be deployed without \nadoption concerns (such as in the military), it is often the method of choice for Web \napplications that need unequivocal confirmation of who the user is. One drawback of a \nbiometric measurement is what happens after an ID has been compromised. For example, \nsuppose the input data sent from the scanner has been compromised because of \neavesdropping or with the assistance of an infamous evil twin. Unfortunately, there is \ntypically no way to issue the user a new identifying characteristic. (Eye surgery would seem \na little drastic.) An additional drawback is the fact that some of the measurements are more \nsusceptible than others to false acceptances and rejections. Nanavati et al. (2002) provide \nadditional information on biometrics. \nTable 5.5 lists some firms that offer biometric devices and related services. (For more \ndetailed information on how they work, go to the individual Web sites listed.) \n \nTable 5.5: Sample Providers of Biometric Devices and Related Services \nNAME \nASSOCIATED WEB SITE \nActivCard \nwww.activcard.com \nCyber-SIGN \nwww.cybersign.com \nDigitalPersona \nwww.digitalpersona.com \nIdentix \nwww.identix.com \nInterlink Electronics \nwww.interlinkelec.com \nIridian Technologies \nwww.iridiantech.com \nKeyware \nwww.keyware.com \nSAFLINK \nwww.saflink.com \nSecuGen \nwww.secugen.com \nVisionics \nwww.visionics.com \nTable 5.6 lists some tests that should be considered if the application is to use a biometric \ndevice to authenticate a user's identity. \n" }, { "page_number": 106, "text": " \n99 \n \nTable 5.6: Biometric Device Checklist \nYES \nNO \nDESCRIPTION \n \n \nIs there a documented policy in place that describes how biometric \nmeasurements are originally captured and authenticated? \n□ \n□ \nAre the procedures for handling compromised measurements \nadequate and always followed? \n□ \n□ \nCan the biometric measurement be faked? If technically feasible, how \nlikely is it that this would actually take place? \n□ \n□ \nIs the false rejection rate too high? For example, the measuring \ndevice could be too sensitive. \n□ \n□ \nIs the false acceptance rate too high? For example, the measuring \ndevice could not be sensitive enough. \n \nUser Permissions \nIt would be convenient if all legitimate users of an application were granted the same \npermissions. Alas, this situation rarely occurs. (For example, many Web-based applications \noffer more extensive information to subscription-paying members than they do to \nnonpayers.) Permissions can be allocated to users in many ways, but generally speaking, \nrestrictions to privileges take one of three forms: functional restrictions, data restrictions, \nand cross-related restrictions. Barman (2001), Peltier (2001), and Wood (2001) all provide \nguidance on developing user security permissions. \nFunctional Restrictions \nUsers can be granted or denied access to an application's various functional capabilities. \nFor example, any registered user of a stock-trading Web application may get a free 15-\nminute-delayed stock quote, but only users who have opened a trading account with the \nstockbroker are granted access to the real-time stock quotes. 
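Restrictions of this kind lend themselves to a simple table-driven test. The sketch below (Python, purely illustrative; the user categories, features, and expected outcomes are invented stand-ins for whatever the application's security specification actually defines) compares the access observed during testing against the access each category of user should have, and reports any mismatch in either direction.

# Expected permissions taken from a hypothetical security specification:
# (user category, feature) -> should access be granted?
expected = {
    ("registered_user", "delayed_quote"): True,
    ("registered_user", "realtime_quote"): False,
    ("account_holder", "delayed_quote"): True,
    ("account_holder", "realtime_quote"): True,
}

# Access observed while exercising the application; in practice this would be filled in
# by a test harness that logs in as each category of user and attempts each feature.
observed = {
    ("registered_user", "delayed_quote"): True,
    ("registered_user", "realtime_quote"): True,   # deliberately wrong, to show a finding
    ("account_holder", "delayed_quote"): True,
    ("account_holder", "realtime_quote"): True,
}

for key, should_allow in expected.items():
    was_allowed = observed[key]
    if was_allowed != should_allow:
        category, feature = key
        verdict = "granted but should be denied" if was_allowed else "denied but should be granted"
        print(f"Permission error: {category} / {feature} is {verdict}")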
One of the usability decisions a Web application designer has to make is whether or not features that a user is restricted from using should be displayed on any Web page the user can access. For instance, in the previous example, should an ineligible user be able to see the real-time stock quote menu option, even though selecting this option will only result in some sort of "Access denied" message being displayed? The following are some of the arguments for displaying restricted features:

•  Seeing that the feature exists, users may be enticed into upgrading their status (through legitimate methods).
•  In the case of an intranet application, when an employee is promoted to a more privileged position, the amount of additional training that he or she needs in order to use these new privileges may be reduced. The employee would already have a great degree of familiarity with the additional capabilities that he or she has just been granted.
•  Having only one version of a user interface should reduce the effort needed to build an online or paper-based user manual. The manual would certainly be easy to follow, as any screen captures or directions in the manual should exactly match the screens that each user will see, regardless of any restrictions.
•  Having only one version of a user interface will probably reduce the amount and complexity of coding needed to build the application.

Some drawbacks also exist, however, among which are the following:

•  One of the simplest forms of security is based on the need-to-know principle. Users are only granted access to the minimum amount of information they need to know in order to perform their expected tasks. Merely knowing that additional features exist may be more information than they need, as it could entice them into trying to acquire this capability through illegal channels.
•  Some legitimate users may find the error messages generated when they try to access forbidden areas frustrating. They might think, "If I can't access this feature, why offer me the option?" or even assume the application is broken.
•  Too many inaccessible options may overcomplicate a user interface, thereby increasing a user's learning curve and generating additional work for the help desk.

Whichever approach is taken, the application's user interface should be tested to ensure that the same style of interface is used consistently across the entire application, reducing the probability of errors while at the same time improving usability.

One recurring problem with function-only restrictions is that these controls may be circumvented if the user is able to get direct access to the data files or database. (Unfortunately, this is an all-too-common occurrence with the advent of easy-to-use reporting tools.) The database and data files should be checked to ensure that a knowledgeable insider couldn't circumvent an application's functional security measures by accessing the data directly, for instance, via an Open Database Connectivity (ODBC) connection from an Excel spreadsheet.

Data Restrictions

Instead of restricting access to data indirectly by denying a user access to some of an application's functionality, the application could directly block access to any data the user is not authorized to manipulate or even view.
For example, sales representatives may be allowed to view any of the orders for their territory but not the orders for their peers. Correspondingly, a regional sales manager may be able to run reports on any or all of the reps that report to him or her, but not for any other rep.

Functional and Data Cross-Related Restrictions

Many applications use a combination of functional and data restrictions. For instance, in addition to only being able to see orders in their own territory, a rep may not be allowed to close out a sales quarter, an action that can only be performed by the vice president of sales or the CFO.

Less common are situations in which access to a particular function is based on the data that the user is trying to manipulate with the function. For example, reps may be allowed to alter the details of an order up until it is shipped, after which they are denied this ability, a privilege that is only available to regional managers. A more complicated example would be a financial analyst who is restricted from trading in a stock for 72 hours after another analyst at the same firm changes their buy/sell recommendation.

Each of these three forms of restrictions (functional, data, or a hybrid of both) can be enforced using one or more different implementations (for example, via application code, stored procedures, triggers, or database views). Regardless of the approach used to implement the restrictions, the application should be tested to ensure that each category of user is not granted too many or too few application permissions. Table 5.7 summarizes these checks.

Table 5.7: User Permissions Checklist

YES  NO  DESCRIPTION
□  □  Is there a documented policy in place that describes under what circumstances users will be granted access to the application's functional capabilities and data?
□  □  Is there a documented policy in place that describes how user application privileges may be altered or removed?
□  □  Does the application's user interface take a consistent approach to displaying (or hiding) functions that the user is currently not authorized to use?
□  □  Can any of the users access a function that should be restricted?
□  □  Can all the users access every function that they should be permitted to use?
□  □  Can all the users access data that they should be permitted to use?

Testing for Illicit Navigation

One of the features of the Internet is that users are able to jump around a Web application from page to page in potentially any order. Browser options such as Go, History, Favorites, Bookmarks, Back, Forward, and Save pages only add to the flexibility. In an attempt to ensure that a user deliberately attempting to access Web pages in an inappropriate sequence (such as trying to go to the ship-to Web page without first going through the payment collection page) cannot compromise a site's navigational integrity and security, designers may have to utilize one or more techniques to curtail illicit activities. If such precautions have been built into the application, they should be tested to ensure that they have been implemented correctly.

HTTP Header Analysis

Some Web sites will use the information contained in a Hypertext Transfer Protocol (HTTP) header (the Referer field) to ascertain the Web page that the client has just viewed and thereby determine if the client is making a valid navigational request.
Although an attacker \ncould easily alter this field, many attackers may not suspect that this defense is being \nemployed and therefore will not consider altering this field. \nHTTP Header Expiration \nTo reduce the ease with which an attacker can try to navigate to a previously viewed page \n(instead of being forced to download a fresh copy), the HTTP header information for a Web \npage can be manipulated via HTTP-EQUIV meta tags Cache-control, Expires, or Pragma to \nforce the page to be promptly flushed from the requesting browser's memory. \nUnfortunately, only novice attackers are likely to be thwarted by this approach (as previous \nviewed Web pages can always be saved to disk). However, if this defense has been \ndesigned into the application, it should still be checked to ensure that it has been \nimplemented. \n" }, { "page_number": 109, "text": " \n102 \nClient-Side Application Code \nSome Web applications rely on client-side mobile code to restrict access to sensitive pages \n(mobile code is discussed in more detail later in this chapter). For instance, before entering \na restricted Web page, a client-side script could be used to launch a login popup window. If \nthe Web application uses such a mechanism, it should be tested to ensure that a user \nturning off scripting, Java applets, or ActiveX controls in his or her browser before \nattempting to access the restricted page does not allow the user to circumvent this \nrestriction. \nSession IDs \nBy placing an item of unique data on the client (discussed in more detail in the Client-Side \nData section), a Web application can uniquely identify each visitor. Using this planted \nidentifier (sometimes referred to as session ID), a Web application can keep track of where \na user has been and thereby deduce where he or she may be permitted to go. \nThe effectiveness of this approach to a large degree depends on how and where this \nidentifier is stored on the client machine (as will be described in the Client-Side Data \nsection), with some methods being safer than others. \nNavigational Tools \nIf access to a large number of Web pages needs to be checked using several different user \nprivileges, it may make sense to create a test script using one of the link-checking tools \nlisted in Table 5.8. The script can then be played back via different userIDs. One can also \nproduce a report of all the Web pages that were previously accessible (when the test script \nwas created using a userID with full privileges) but have since become unobtainable due to \nthe reduced security privileges assigned to each of the userIDs. \n \nTable 5.8: Sample Link-Checking Tools \nNAME \nASSOCIATED WEB SITE \nAstra SiteManager \nwww.mercuryinteractive.com \ne-Test Suite \nwww.empirix.com \nLinkbot \nwww.watchfire.com \nSite Check \nwww.rational.com \nSiteMapper \nwww.trellian.com \nWebMaster \nwww.coast.com \n \nHIDING CLIENT-SIDE CODE \nStoring code in a separate file (for instance, hiddencode.js) is a technique used by some \ndevelopers to avoid a user casually viewing client-side source code that controls security \nfunctions. The Web page that needs this code then references this file, thereby avoiding the \nneed to embed the code in the HTML used to construct the Web page (which would allow a \nviewer to easily view the code alongside the HTML code). Here's an example: \n